Should we stop worrying and learn to love AI?

by Fabian Geier


Creators and their Creations

In July 1955, Joseph Rotblat and Albert Einstein signed a manifesto calling for worldwide nuclear disarmament. Rotblat had worked on the Manhattan Project, which Einstein helped inaugurate when he urged F. D. Roosevelt to accelerate research into nuclear weaponry. Scientists who had created a device and handed it to less brilliant people to use were now among the first to warn about it.

AI is not the Bomb

When leading researchers in Artificial Intelligence, like Stuart Russell or Max Tegmark, express their warnings today, it seems hard to put these into the same category. AI is not a killing machine. It is not even a machine at all, but a technology: a handful of mathematical principles and engineering best practices that allow us to build artificial puppies, self-driving cabs, influential ad campaigns – and better killing machines. And while even the technology behind the bomb gave us some goodies – carbon-free energy, or space probes that operate beyond the solar system – the potential benefits of AI seem incomparably larger: from smart allocation of resources to individually tailored medical treatment, from voice interfaces to relieving us of mind-numbing work and daily chores. AI is far more constructive than the bomb. It brings order to chaos (more precisely: it finds order in chaos), while nuclear fission and fusion were just clever ways to create a bigger bang. This is why the crown of doomsday scenarios still belongs to the bomb. AI will not take over the world and make it uninhabitable – for now. Looking forward, it is not Skynet we need to be afraid of, but narrow AI, advanced pattern-recognition technology, in the wrong, all too human hands.
It all Depends. But on what?

So it is never the technology we need to be afraid of – only the hand that wields it? Guns (or bombs) don’t kill people. AI does not enslave people. It all depends on the purpose we use them for, right? Maybe. But the gun argument, trivially true as it may be, never quite got it. Of course it is people who kill people. But people kill people much more effectively with guns, which is why all countries regulate them, at least beyond a certain size. And even a legal and unused gun in your pocket changes your options, and therefore your mind. Having a gun makes every situation one in which you could draw it. Tools ready at hand are not just means to unchanging goals that we import from completely isolated places of our minds. Our hammers, our guns, our phones are an “extension of our self”, as Marshall McLuhan put it: they affect our attention, emotions and intentions. This is why messaging apps change our communication habits, and why blocking access to suicide bridges brings down the number of suicides, even though in principle those contemplating suicide are perfectly free to find another way. We may be free or not, but it seems that people often emphasize individual freedom when they don’t want to think about psychology. For an ethics of artificial intelligence, however, we need to think about both. We must consider not only whether an autonomous car should run over three adults or two children, or how to turn working classes into non-working classes without civil unrest. We must also consider the fact that when our tools are AI-powered, they affect us as users, not just as potential victims.

Innocence and Responsibility

But first things first. After all, AI does kill people. This is another way in which AI differs from the bomb: with AI, machines have lost their innocence. The gun, even the bomb, does exactly what it is built to do. It has exactly one predefined way of affecting its environment. A person then chooses when and how to exercise the option that now exists. AI, however, finds by itself one particular way among many to reach a predefined goal. This is what makes AI useful, and responsive like no machine before. It also makes it liable to errors while functioning exactly as designed – for instance, classifying humans as apes or confusing snowy landscapes with wolves. This makes AI effectively a black box: you end up on a no-fly list, or a bad-customer list (our version of the Chinese social credit system), for reasons that may remain unclear even to the machine’s human operators. And the fact that AI has been designed to replace these operators makes it all the harder to get hold of anyone with the power, time and ability to set things right.

But AI-driven devices, as long as they are driven by narrow AI, are still machines. Narrow AI is an autonomous means to a fixed purpose that is not set by the AI itself. This external purpose may be nefarious or completely stupid, regardless of how superhumanly clever the machine is at pursuing it (what Nick Bostrom calls the “orthogonality thesis”). This creates the peculiar set of ethical problems distinctive of AI: it acts autonomously within certain parameters, but has (for now) no control over those parameters themselves or their context. This is why it needs safeguards. The outlines of such safeguards have been the subject of long debate, but the basic principles are usually human oversight and transparency: humans (and human values) have to be in the loop of any important decision-making, and humans need to be able to trace, if necessary, how these decisions have come about.

But here is the problem: there is and can be no clear line beyond which to apply these principles, because the very thing that makes AI useful is what creates the problems in the first place. If we don’t leave some autonomy to the machine, it cannot help us. Safeguarding it will always be a trade-off between usability and possible harm – while we are likely to gradually get used to handing more and more control over to the machine.

Towards an AI Society
Problems of benefit and harm are very hard to solve for many use cases of AI. Conceptually, however, these problems are simple. They are also very technological themselves: we have a clear goal – namely, preventing harm to human beings – and we set out to find the best means of achieving it, through either design or regulation. We must make sure people keep this goal in mind when creating products and policy, and when working towards international regulations for self-driving cars or autonomous weapons. If humanity is alive and well ten years from now, we will have succeeded.

But as I have pointed out earlier, there are more complicated problems than these. If even a gun or a bomb can influence our mindset, how much more can artificial intelligence? Just as social media has transformed our social lives and self-images, AI will have a huge cultural and psychological impact. We live with it and through it, and both we and our tools respond to each other – two neural networks reacting to one another’s output. AI affects culture in two ways: it spreads the cultural categories of its creators, who have specified its goals and parameters, and it creates a culture of its own through the patterns it develops. These effects are hard to measure in terms of benefit and harm.

Let me put it this way: when someday a machine passes the Turing Test, this may be not just because the machine has become so human-like, but also because humans have become more like the machine. AI shapes us in its image, while we shape it in ours – or in what we think is ours. Or, more precisely: in what those who set the parameters think is ours. When we are asked to phrase things differently so a chatbot can process them, when self-help apps give us step-by-step recipes for increasing our pleasures and handling our desires (two words that do not even translate into my native language), and when we are formed by the emotional categories that Amazon and Facebook file us under as much as we have helped to produce those categories – then we are taking the first steps into a society where artificial and natural intelligence are not just coexisting but permeating each other. They are not just using each other as tools; they become each other’s parts and co-creators, long before machine–brain interfaces start erasing the border between them. AI is a cultural force. This is why there is so much more we need to talk about than just possible accidents and abuse.


Dr. Fabian Geier teaches, learns and works in ethics, philosophy and CODE. His recent academic works are in phenomenology, ethics of technology, and contemporary moral psychology. He has lived and worked on three continents, at twelve universities.
