
We lack a unified understanding of what ‘safe’ and ‘beneficial’ AI means

By Paul Ostwald

For a time-traveller from the past, our age must look like a strange mix of tech utopia and dystopia. On the one hand, smart robots assist surgeons in life-saving operations. On the other, biased algorithms deny citizens across the world the credit they desperately need. As Cathy O’Neil concludes in Weapons of Math Destruction, these algorithms more often than we’d hope “punish the poor and the oppressed in our society”.

While that is already a problem, the future we’re vigorously inventing might only accelerate the risk. It’s unclear how far away we still are from superhuman Artificial Intelligence that surpasses our brain capacity. For Stuart Russell, who has taught AI at UC Berkeley for decades, it’s 80 years away. A survey among experts found that Russell’s guess is quite conservative: on average, scholars think there’s a 50-50 chance that we’ll develop AI that can compete with humans within the next 45 years.

The list of fears is growing, nourished by a flood of dystopian visions. In Human Compatible, Russell embraces the argument that AI might be more of a threat to the human race than aliens. Although his position is certainly more nuanced, he echoes a narrative that is widespread in the AI community and beyond: the fear that robots might outgrow and replace us. Others strike a more optimistic note, like Aaron Bastani in Fully Automated Luxury Communism. As the title suggests, he believes that automation might catapult humanity into a paradise-like age of human flourishing. Both sides acknowledge that either path is possible, and most conclude that the decisive question is the framework we develop for AI.


Whether algorithms end up working for the benefit of humanity will thus depend on a system of oversight that we do not yet fully possess. As Russell notes, a bunch of “dudes chugging Red Bull” (Max Tegmark’s phrase) at a software company can unleash a product or an upgrade that affects literally billions of people with no third-party oversight whatsoever. Without such control, all visions of AI’s future will have little impact. What could such a structure look like?

AI is developed predominantly by privately owned companies. The most significant players are Google, Facebook, Amazon, Microsoft, IBM, Baidu, Tencent, and Alibaba. Bar the last two, they are all members of the “Partnership on AI”, an industry group that pledges to use AI safely and to follow a set of shared rules. Most major universities now run oversight projects, such as Princeton’s Web Transparency and Accountability Project. So formal institutions already exist.


The key concern is that we lack a unified understanding of what ‘safe’ and ‘beneficial’ AI means. Russell’s suggestion to focus on provably beneficial AI systems is a sensible one. The idea behind it is simple: AI optimizes operations towards a requested result, and we can (still) determine what that result should be. Figuring out which objectives those should be requires philosophers, not necessarily engineers. That’s why both dystopian visions, such as O’Neil’s Weapons of Math Destruction, and utopian ones, such as Bastani’s Fully Automated Luxury Communism, are so important. They allow us to work out the objectives that algorithms should serve.

Thus, a crucial part of developing the necessary grammar of AI will lie in imagining new utopias and dystopias, learning from them, and strengthening the oversight of AI. These aren’t academic debates. They happen in newspaper columns, in discussions on Twitter, and in Hollywood movies. In this way, the broad discussion democratizes AI. The crucial insight is that not every problem, not even a technological one like AI, admits of a technological answer. “The solution to this problem”, as Russell rightly concludes, “seems to be cultural.”

Paul Ostwald

Paul Ostwald studies International Relations at Oxford University. He freelances for the German papers FAZ, taz, and Handelsblatt, focusing on 20th-century intellectual history.
