Technology, AI and ethics.

Who’s Afraid of AI?

 

Who’s Afraid of AI?

We don’t have to fear super-intelligent machines taking over control. But we should watch out carefully for humans misusing AI to manipulate us.

by Thomas Ramge

The learning curve for machines appears to be sharply steeper than it is for human beings. Apocalypticists like the Oxford philosopher Nick Bostrom fear the seizure of power by superintelligent machines and the end of humanity. Extreme positions make good headlines. For those who advocate them, extreme positions are good business in the market for our attention. No one knows what computers will be capable of in a few hundred years. But hypothetical bluster about the end of the human species through superintelligence might have an unwanted side effect. It distracts us from the very real dangers that rapid development of weak AI entails here and today. The most important dangers can be grouped under three headings: monopolization of data, manipulation of individuals, and misuse by governments.

Datamonopolies

Ever since Karl Marx, we have known that capitalism tends toward market concentration. In the industrial age, economies of scale helped large companies become ever larger. Henry Ford showed the way. The more Model Ts he produced, the less expensively he could sell an individual car. The lower the price and the higher the quality, the more quickly his market share rose. The successful companies in the age of mass production gladly bought out their competitors in order to reap additional size advantages through economies of scale and at the same time to reduce competition. In the twentieth century, however, governments had an effective tool—antitrust law—to prevent monopolies (when they wanted to do so). In the age of knowledge and information—that is, since the digitalization boom of the 1990s—network effects have come more and more into play. The more customers a digital service has, the more network effects increase the service’s usefulness. The operators of digital platforms in particular have succeeded in conquering market shares that the railroad barons, automobile manufacturers, and producers of instant pizzas could only dream about. In the last twenty years, corporate superstars Microsoft, Apple, Amazon, Google, and Facebook have created oligopoly structures, at times even quasi-monopolies, in the digital markets of the Western world. In Russia, Yandex dominates most digital markets. In China, Tencent, Baidu, and Alibaba have risen to become de facto monopolies with government support. US and European antitrust law is proving helpless against this. Already highly problematic today, this state of affairs will become extremely hazardous for competition as machines that learn from feedback data contribute more and more heavily to value creation. Artificial intelligence turbocharges monopoly formation because the more often products and services with built-in AI are used, the more market share they gain, and the more insurmountable their lead over their competitors becomes. In a sense, innovation is built into the product or the business process, so innovative newcomers will only stand a chance against the top dogs of the AI-driven economy in exceptional cases.

Without competition, no market economy can be successful in the long term. It eliminates itself. For this reason, Viktor Mayer-Schönberger, Oxford professor of Internet governance and regulation, and I have called for the introduction of a progressive data-sharing mandate for the goliaths of the data economy in our book Reinventing Capitalism in the Age of Big Data. If digital enterprises exceeded a certain market share, they would have to share some of their feedback data with their competitors—while of course adhering to privacy regulations and thus mostly in anonymized form. Data are the raw material of artificial intelligence. Only when we ensure broad access to this raw material will we make competition possible between companies and ensure the long-term diversity of AI systems. This is doubly important because competition and diversity are prerequisites for confronting the second major danger in the age of weak AI: the manipulation or taking improper advantage of individuals through the use of artificially intelligent systems.

In Whose Interest Does the AI Agent Act?

Within a few years, we will delegate many of our daily decisions to assistants that learn from data. These systems will order toilet paper and wine exactly when we need them, because they will know how much of those things we consume. AI will organize our business trips and offer to book everything for us with a single click after we have given the itinerary a quick inspection. For lonely hearts, AI will suggest partners who are much more likely to be interesting to the lovelorn than the suggestions generated by current dating sites. But who can guarantee that the bot is really looking for the best offer? Maybe one of the oddballs on the dating site bought a premium plan and thus enjoys an algorithmic advantage. And is the self-driving taxi driving us past an electronics store because it knows that we are in the market for 3-D glasses? Maybe an ad for 3-D glasses pops up on the electronic billboard at just the moment we drive by, giving us enough time to tell the autopilot, “Stop at the electronics store for a moment!” Or would a health app raise a false alarm in order to recommend a medication?

To put it more succinctly and in somewhat more abstract terms, these scenarios raise the question: In whose interests is the virtual assistant acting? Today, most bots and digital assistants are salespeople in disguise. Alexa is built and run by Amazon, not by a startup looking on our behalf for the best deal in all online stores. This is admittedly legitimate as long as it is transparent and we are not secretly being taken advantage of. But in a world where there are many assistants, we will quickly lose track of who might be out to fool us. We will not know exactly who is advising us when we ask our smartphone or the intelligent speaker on our nightstand for advice. Often enough, we will not even care because it is so convenient, and in many cases we will even pay extra for nanny tech that pampers us.

Everyone individually will have to learn where they want to draw the line on machine- driven infantilization. We must primarily bear responsibility for our own technological self-disenfranchisement. The state and the market will need to ensure, however, that customers have access to a large selection of bots that adhere to the principle of neutrality, much as provider-independent price-comparison engines do today. There will be a need for a seal of approval—and unfairly manipulative or even fraudulent agents will have to be shut down by the government. That admittedly requires a state that is governed by the rule of law and does not itself use artificial intelligence to deceive its citizens.

In whose interests is the virtual assistant acting? Today, most bots and digital assistants are salespeople in disguise. Alexa is built and run by Amazon, not by a startup looking on our behalf for the best deal in all online stores.

The Digital Dictatorship

At the interface between the state and its citizens lurks the third and perhaps greatest danger: government misuse of weak AI for mass manipulation, surveillance, and oppression. This is no science-fiction scenario like a superintelligent computer seizing world domination and subjugating humanity. The technical possibilities for the perfect surveillance state available today read like a medley of all the political dystopian novels since George Orwell’s 1984.

The state combines surveillance cameras with automatic facial recognition and knows who crosses the street against a red light. Thanks to an autonomous drone, the surveillance camera can directly pursue the jaywalker. Voice recognition in electronic eavesdropping not only identifies who is speaking, but also determines the speaker’s emotional state. AI can discern sexual preference from photos with a high success rate. Automated text analysis of social media posts and online chats can identify in real time where subversive ideas are being thought or spoken. GPS and health data from smartphones, in-app payments and credit history, digitized personnel files, and real-time criminal records provide all the information needed to calculate a citizen’s trustworthiness—and of course serve up a softball for secret police to do their work efficiently. The all-powerful state naturally has social bots to disseminate personalized political messages as well.

Digital tools are not required for tyranny to exist. All the unjust regimes in world history have convincingly proved that. But in the age of intelligent machines, the question of oppression is posed with new urgency. Tech-savvy regimes are about reinventing dictatorship. In AI-powered autocracy, oppression could creep in more subtly than soldiers or police in uniform. Data show the way for the state to nudge citizens into a desired form of behavior.

A New Machine Ethics

For the time being, we don’t need to be afraid of artificial intelligence running amok, but rather of malicious people who misuse it. Recent years have seen much discussion of a new machine ethics and of the question as to whether—and if so, how—ethical behavior can be programmed into machines. These debates have often been pegged to artificial dilemmas—a self-driving car is advancing toward a mother pushing a baby in a stroller, say, and a group of five senior citizens. It has to decide whom it should run over. Mother and baby, who together are expected to live another 150 years, or the five senior citizens with a collective life expectancy of 50 years? Thought experiments like this are necessary. The dignity of humanity is inviolable. In war, a general is allowed to make the trade-off of sacrificing five soldiers if he can thereby save ten. In theory, no one is permitted to do that in civilian life. In practice, a motorist driving at excessive speed with no chance to brake does make such a trade-off when choosing to drive into a group of people rather than into a concrete pillar.

The automation of decisions is, of course, an ethical challenge in many contexts, but at the same time, it is a moral imperative. If we can cut the number of traffic deaths in half within ten years with self-driving cars, we have to do it. If we can save the lives of many cancer patients by using machine pattern recognition on cancer cells, we cannot permit progress to be slowed by a doctors’ lobby that is more worried about preserving its members’ co-pays. And if AI systems in South America teach math to impoverished children, we cannot complain that it would be nicer if they had more human math teachers.

Digital tools are not required for tyranny to exist. All the unjust regimes in world history have convincingly proved that. But in the age of intelligent machines, the question of oppression is posed with new urgency. Tech-savvy regimes are about reinventing dictatorship.

Artificial intelligence changes the fundamental relationship between human and machine less than some AI developers would like us to think. Joseph Weizenbaum, the German American inventor of the ELIZA chat program, wrote the worldwide bestseller Computer Power and Human Reason: From Judgment to Calculation in 1976. The book was a rallying cry against the mechanistic faith in machines that was current at that time. It deserves to be republished, at a time when belief in the technological predestination of humanity is again coming into fashion in Silicon Valley.

We can delegate decision-making to machines in many individual fields. AI systems that are skillfully programmed and fed the proper data are useful experts within narrow specialties. But they lack the ability to see the big picture. The important decisions, including the decision about how much machine assistance is appropriate, remain human ones. Or, formulated more generally: Artificial intelligence cannot relieve us of the burden to think.

The history of humanity is the sum of human decisions. We decide normatively what we want. This will remain the case. We do not even have to reinvent the positive worldview that is required for the next step in the development of the machine-aided information age: “Very simply, it’s a return to humanistic values,” says the New York venture capitalist, author, and TED-speaker Albert Wenger. These values can in his view be expressed by a formula: “The ability to create knowledge is what makes us human beings unique. Knowledge arises through a critical process. Everyone can and should take part in this process.” The digital revolution allows us to put this humanist ideal into practice for the first time in history—by employing artificial intelligence intelligently and for the good of humanity.

 

 

Copyright © 2019 The Experiment

This essay is an excerpt from Thomas Ramge’s new book: Who’s Afraid of AI? Fear and Promise in the Age of Thinking Machines. The Experiment Publishing, April 16, 2019. 

Share Post
Thomas Ramge

Thomas Ramge is one of the best known European experts on AI, the data-economy and GDPR. His work connects the dots between data-driven technology, its impact on business and management and its consequences for society and policy-making. As a keynote speaker, he is praised for his thought-after global perspective on all things digital. Thomas has written more than a dozen books, which have been translated into more than 20 languages. "Reinventing Capitalism in the Age of Big Data" (co-authored with Oxford professor Viktor Mayer-Schönberger) and his newest book "Who is afraid of AI?" have been widely discussed worldwide and featured i.e. in The New York Times Book Review, The Harvard Business Review, and Foreign Affairs. As a writer Thomas has been honored with multiple journalism and book awards, including the Axiom Business Book Award 2019 (gold medal, economics), the getAbstract International Book Award 2018, Best Business Book of The Year 2018 on Technology and Innovation (by strategy+business), the Herbert Quandt Media Prize, the German Business Book Award and the ADC Award. Thomas is the technology correspondent for the business magazine brand eins and writes for The Economist. He also teaches at the AI Business School Zurich and serves as Chief Explaining Officer of the German-American analytics company QuantCo; a Harvard spin-off. Thomas Ramge lives in Berlin with his wife and son.

Show Comments