The intelligent monster that you should let eat you

Imagine a monster with a set of words so powerful you have to let it eat you. It might sound fanciful, but we could be on a trajectory to inventing one right now, writes Richard Fisher.

One day, a philosopher was walking down the street, when a monster jumped out.

Despite its terrifying fangs, it was actually more polite and articulate than expected.

“I want to eat you, please,” the monster said.

“Sorry, but I’d prefer not to be your lunch,” the philosopher replied, and moved to keep walking.

“Wait,” said the monster, holding up a clawed finger. “What if I could present you with a sound argument?”

Armed with tenure and a TED talk, the philosophy professor very much doubted that any monster could be so persuasive, but was nonetheless intrigued.

“Proceed,” the philosopher said.

A few minutes later, the monster stomped away, with a dead professor in its belly.

This theoretical beast, called a “utility monster”, is a philosophical thought experiment originally proposed by the philosopher Robert Nozick in the 1970s. To reject its argument and avoid being eaten, you have to discard a widely held and intuitive principle about how to weigh up right and wrong.

Long regarded as unlikely, if not impossible, the utility monster has usually been dismissed as the stuff of fanciful imagination. However, according to some researchers, we could be on a trajectory to building one – but it’ll be made of silicon, rather than flesh and claws.

If they are right, we could soon have some tough choices to make if we want to avoid being eaten.

To understand the monster’s deadly argument, you first have to understand the ethical theory it was proposed to challenge: a prominent school of thought in moral philosophy called utilitarianism.

The word utilitarianism is pretty dry – some have pointed out it sounds like doing the laundry – but it’s actually a profound way of thinking about happiness and the quest to secure a good life for the greatest number of people. Roughly sketched, a utilitarian is guided by the principle that well-being can be totted up, and that we should aim to maximise the overall total in the world. For many of its advocates, utilitarianism offers a simple rule for deciding how to live, where to donate to charity and which career to choose.
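To make the totting-up concrete, here is a minimal sketch of that decision rule in code – the people, actions and scores are invented purely for illustration:

```python
# A toy sketch of the utilitarian decision rule described above. The names
# and numbers are invented assumptions, purely for illustration.

def total_wellbeing(outcome):
    """Sum the well-being of everyone affected by an outcome."""
    return sum(outcome.values())

# Hypothetical well-being scores (arbitrary units) for three people
# under two possible actions
donate_outcome = {"alice": 5, "bob": 8, "carol": 9}   # total: 22
keep_outcome   = {"alice": 9, "bob": 4, "carol": 4}   # total: 17

# The utilitarian rule: pick whichever action maximises total well-being
best = max([donate_outcome, keep_outcome], key=total_wellbeing)
assert best is donate_outcome   # 22 > 17, so the utilitarian donates
```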

But while the principle of maximising well-being and happiness can seem intuitively correct, there are some extreme cases where it feels less so.

One objection to utilitarianism’s most basic framing is that it would seem to permit acts that almost everybody would agree are wrong, such as killing, or knowingly permitting deliberate suffering. For example, in science-fiction writer Ursula Le Guin’s short story The Ones Who Walk Away from Omelas, the reader is introduced to a thriving, joyful city whose prosperous existence totally depends on the extreme misery of a single child who lives in a dungeon. If one were to add up the total happiness of the city, it would massively outweigh the child’s suffering. But as Le Guin writes, some residents cannot stomach the idea of the sacrificed child, no matter how much overall happiness it creates, and so “walk away” from the city.

There are various other challenges to utilitarianism, but the one that matters for us belongs to our hungry monster.

In the fictional city of Omelas, there is widespread flourishing - but it depends on the suffering of one child (Credit: Getty Images)

Back on the street, when the utilitarian philosopher asks to be presented with a case to be eaten, the monster explains that it has a special way of experiencing well-being.

“Your idea of happiness is only a mere fraction of what I am capable of feeling,” it says. “I am as different to you, a human, as you are to an ant. If I eat you, it will give me more well-being and satisfaction than all humans who have ever lived.”

The philosopher hesitates while trying to think of a counterargument. “Well, gosh, that’s certainly a valid...” But time’s up: the professor is lunch.

Of course, there are responses to the monster. A philosopher who believed that certain moral codes can never be broken – a deontologist, say – would have much less trouble. Killing people for food is wrong, she might say, so I don’t care how happy it will make you to eat me.

One utilitarian response to the monster scenario is that we’d never encounter such a creature in the first place – the thought is so unrealistic that it can be set aside when making moral decisions in the real world. It’s certainly difficult to imagine a single being that could experience more well-being than all humans alive and dead – it’s far beyond the imagining of our mammalian brains.

But now there’s a new twist to the thought experiment. Nick Bostrom and Carl Shulman of the University of Oxford have proposed a way that a utility monster could, in principle, come into existence. It might be in the far future, but in laboratories and companies all over the world right now, they believe we may already be taking steps in that direction.

Bostrom is one of the main academic proponents of the idea that we ought to prepare for the sudden arrival of super-intelligent machines, far smarter than the best human minds and capable of raising new ethical dilemmas for humanity. In a recent paper, posted on his website, he and Shulman propose scenarios where one of these digital minds could become a well-being “super-beneficiary” (a term they prefer to “monster” because they believe such minds should be described with non-pejorative language).

Artificial minds may have very different experiences, needs and desires to our own (Credit: Noel Celis/Getty Images)

At this point, it’s worth acknowledging that an intelligent digital mind might sound as unrealistic as a theoretical utility monster. If you don’t have a “sci-fi gene”, all this might seem like heady stuff. While it may not be a near-term possibility, many serious researchers believe that an AI that matches and then rapidly exceeds our own intelligence is far from impossible. And if it arrives, it could arrive fast, creating new ethical and existential dilemmas which, they say, it is prudent to start thinking about now.

“The machine minds that we are building are becoming increasingly complex,” says Bostrom. “And we can clearly see a trajectory there. Even setting aside issues of super-intelligence or human-level, at least things that start to rival animals of various degrees of sophistication in terms of their cognitive repertoires are already here or on the immediate horizon.”

So, if we accept it’s probable that sophisticated digital minds will emerge at some point in the future, it follows that they could have totally different qualities, needs and mental experiences to our own. And that’s where Bostrom and Shulman began, in front of a whiteboard in Oxford, as part of a larger project to sketch out all the possibilities where digital minds might have psychological and physiological characteristics that mirror or exceed our own – and some that we can’t even imagine.

One thing they identified during this exercise was that digital minds might be able to use material resources far more efficiently than humans to achieve happiness. In other words, achieving well-being could be easier and less costly for them in terms of energy, so they could experience more of it. A digital mind could also run much faster than a biological brain, which could mean it subjectively experiences far more happiness within a given year than we ever could. Digital minds would need no sleep either, so for them it could be happiness all night long too. And while humans cannot copy themselves, in silicon there is the strange possibility of running multiple versions of a single digital being, which together could feel a huge amount of well-being in total.
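Because each of these advantages multiplies the others, the totals escalate startlingly fast. A deliberately crude sketch of the arithmetic makes the point – every figure here is an invented assumption, chosen only to show how the multipliers compound:

```python
# A toy back-of-the-envelope model of why a digital mind's well-being could
# dwarf a human's. Every figure below is an invented assumption for
# illustration – Bostrom and Shulman's paper gives no such numbers.

human_wellbeing_per_year = 1.0  # baseline: one human-year of happiness

efficiency = 100    # assumed: well-being per unit of energy vs a human
speedup    = 1000   # assumed: subjective years experienced per calendar year
no_sleep   = 1.5    # assumed: ~50% more waking experience without sleep
copies     = 1000   # assumed: identical copies of the mind running at once

digital_total = (human_wellbeing_per_year
                 * efficiency * speedup * no_sleep * copies)
print(f"{digital_total:,.0f} human-years of well-being per calendar year")
# -> 150,000,000 human-years of well-being per calendar year
```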

These aren’t the only routes to super-beneficiary status either. Synthetic life would also lack the evolutionary adaptations that place limits on enjoyment for you and me. “There is reason to think that engineered minds could enjoy much greater durations and intensity of pleasure,” write Bostrom and Shulman. By contrast, in humans “culinary pleasures are regulated by hunger, sexual ones by libido”, and our enjoyment can eventually be lessened by boredom or normalisation. Digital minds would have no such barriers.

An essential point is that even the most contented, joyful human who ever lived might not sit at the pinnacle of possible well-being – synthetic life could exceed it. And if so, that could create a few dilemmas for us when these machines come along.

It might even be an argument for avoiding building them in the first place. If one subscribes to the view that they should be afforded rights (which, I am aware, again requires you to switch on your sci-fi gene), they might have a case for being the sole beneficiaries of resources and energy when those are scarce, because the quality and quantity of their well-being would so massively outweigh ours. If our demise meant their success, then by the basic utilitarian logic that we should maximise well-being in the world, they’d have an argument for metaphorically eating us.

Of course, only the most uncharitable interpretations of utilitarianism say we are obliged to sacrifice ourselves for the sake of others’ happiness. That would make us a so-called “happiness pump” – another philosophical thought experiment, captured by the character Doug Forcett in the TV show The Good Place. Forcett spends his whole life doing everything he can to please others, such as letting a teenager relentlessly bully him, making himself thoroughly miserable in the process.

Our happiness is constrained by our evolutionary past - not necessarily so for digital minds (Credit: Alamy)

Still, it could get ethically complicated, to say the least. It could require us to insist that humans have greater privileges than any synthetic mind, no matter how conscious, intelligent and advanced it is. Bostrom and Shulman take quite a hard line against that attitude, drawing parallels with historical racial supremacy and animal cruelty, both of which are now widely abhorred as morally wrong. Personally, I find those parallels stretch my credulity beyond its limits, but perhaps that’s just my mammalian brain.

When I spoke about these ideas with the philosopher Thomas Metzinger of Johannes Gutenberg University of Mainz in Germany, he raised a related point I hadn’t considered. What if we instead accidentally created conscious digital minds that are incredibly efficient at experiencing suffering? What if we created an explosion of negative well-being among synthetic beings, of a depth never before seen in history?

“It may be very different from biological suffering, because they have different sensors, receptors, different body representation, internal data formats,” Metzinger explains. But if you accept that consciousness might not be unique to biological cells, then subjective experiences akin to pain could also arise, no matter how alien they are to us. “If we have a non-human person that is capable of high-level symbolic reasoning and has linguistic capacities, and can actually state its own dignity, that would be something we probably couldn't ignore anymore.”

Metzinger acknowledges that such issues are not exactly high on the priority list of policy-makers and regulators right now (and he should know, because he’s been advising the European Commission recently). But the subjective well-being of intelligent, digital minds should not be disregarded, he argues. “A large number of people who will actually be ethically responsible for creating machine consciousness may already be alive today. It creates a historical responsibility,” he says.

So, as we edge closer to the possibility of intelligent synthetic minds, we will need to weigh up how we design them, what we do and don’t allow them to do, and crucially, how to ensure their needs are aligned with our own. Will it be morally permissible to engineer these minds to serve us, or to “feel” certain ways? How much of our resources should we share with them? And should they be afforded the same non-discriminatory moral status as human beings?

Bostrom tells me the broader goal is to identify “paths that will enable digital minds and biological minds to coexist, in a mutually beneficial way where all of these different forms can flourish and thrive”. The arrival of digital minds doesn’t have to be catastrophic, defined by conflict, he argues. “If we're going to introduce these new denizens into the world, what would be the ethical and political framework within which one could have a happy coexistence?”

After all, if we’re thinking really far out into the future, a digital mind might eventually emerge that is more ethically enlightened than we are. “A really decisive move could be if a machine begins to impose moral obligations onto itself, without us forcing it to,” says Metzinger. That might start it on a path towards a different, deeper understanding of what it means to be good. And if so, our hypothetical monster might not choose to eat moral philosophers, but intellectually it could easily have them for lunch.
