The Return of the Oracle as AI-Black Box

by Roberto Simanowski

It happened in Krasnoyarsk, Siberia. My credit card refused to yield any money. Frozen. It wasn't that I had too little in my account or had exceeded my monthly limit. I had forgotten to tell my bank I was taking a trip, and an algorithm had suspected a hack by Russian criminals and blocked my card. Actually, I should have been grateful to the algorithm. But without any money in icy Siberia, I wasn't.

This is a well-known problem that recurs in various forms. People sometimes can’t get any credit because an algorithm deems them unworthy, and no one knows exactly how it reached that opinion. There are many factors that go into its judgement: from income, place of residence and credit history to recent postings on Facebook and Twitter that suggest conflict with the boss, imminent joblessness and, with it, an inability to pay back debts. In that case, money is withheld not because the system knows too little, but because it knows too much.
And whereas I could have informed the system beforehand that I was headed to Siberia, and was able to get my card unblocked after the fact, there is no point of appeal when you're denied credit. The EU wants to change this situation and has recently issued a catalogue of ethical principles for the development of artificial intelligence. After all, the logic runs, everyone should have the right to ask a bank employee about the reasons behind a rejection. But can anyone truly name such reasons anymore? Is it not true that even computer programmers no longer understand their hyper-complex programs?
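
To make the opacity concrete, here is a minimal sketch, in Python, of how such a scoring system might reach its verdict. Everything in it – the feature names, the values, the weights, the threshold – is invented for illustration; real systems combine hundreds of signals in far less legible ways. The point is what the sketch makes visible: the inputs and the verdict are plainly there, while a human-nameable reason is nowhere to be found.

```python
import math

# A purely hypothetical applicant. All feature names and values are
# invented for illustration; real systems draw on hundreds of signals.
applicant = {
    "income": 0.42,           # normalized yearly income
    "residence_score": 0.71,  # proxy derived from place of residence
    "credit_history": 0.55,   # repayment record, normalized
    "social_signal": -0.90,   # e.g. postings hinting at imminent joblessness
}

# Weights a model might have learned from past data. To a bank employee
# they are just numbers; none of them names a human-legible "reason".
weights = {
    "income": 1.8,
    "residence_score": 0.9,
    "credit_history": 2.4,
    "social_signal": 1.5,
}
bias = -2.0

def risk_score(features):
    """Logistic score in (0, 1); higher means 'safer' to the system."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

score = risk_score(applicant)
print(f"score = {score:.3f} -> {'approve' if score > 0.5 else 'deny'}")
# Output: a single number and a verdict. Which factor tipped the scale,
# and why, is exactly what a "right to explanation" would have to
# reconstruct -- and what this arithmetic alone does not reveal.
```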

Such concerns are by no means new. Twenty years ago, analysts were already warning about the impenetrability of artificial intelligence, which demands that we have blind faith in its decisions. [1] Today the name for this fear – and the title of a book by Frank Pasquale – is The Black Box Society. [2] We may know all about the inputs and outputs of this society, but we have no idea how the former become the latter. This black box is unlike the hood of a conventional car, which we can open up to see what's wrong. Its modus operandi is mysterious, all the more so the greater its complexity.

Black Box Talk Box

The black box society is an inevitable consequence of the digitalization affecting – and often afflicting – us, and not only where money is concerned. One such black box is already present in many people's living rooms, where it goes by the name Alexa. Do we really know how Alexa produces its answers to our questions? Alexa's information always seems to drop from the heavens, unrelated to anything else. Suddenly, it seems, knowledge has no origins.

That isn't entirely true. After all, if we wanted to, we could find out the sources from which Alexa gets its knowledge. But who does? Did we invent Alexa, Siri and Cortana in order to mistrust them? On the contrary: do we not want to trust them as blindly as we do our satnavs?

The new paradigm is the outgrowth of a media shift. The transition from written to oral queries replaces the AI nanny of Google, which at least shows us alternatives, with the AI dictator of Amazon, which doesn't mention them at all. In the results yielded by a Google search for the best restaurants nearby, the establishment ranked number seven still has a chance to win out if it attracts our attention with an interesting name, a few cool photos or some other clever detail. With Alexa, seventh place remains invisible: already irritated at having to ask, I invariably settle for the first result offered, regardless of how good it is.
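
The difference between the two interfaces can be caricatured in a few lines of code – a deliberately crude sketch, with an invented ranking, that assumes (as the essay suggests) the voice assistant reads out only the top hit:

```python
# A toy contrast between written and spoken search. The ranked list and
# the one-result behaviour of the voice assistant are simplifying
# assumptions for illustration, not a description of any real product.
restaurants = [f"Restaurant #{rank}" for rank in range(1, 11)]

def screen_search(results, page_size=10):
    # The written interface still shows alternatives: number 7 can catch
    # the eye with a clever name or a few cool photos.
    return results[:page_size]

def voice_search(results):
    # The spoken interface names a single winner; everything below the
    # first rank simply does not exist for the listener.
    return results[0]

print(screen_search(restaurants))  # ten candidates to browse
print(voice_search(restaurants))   # 'Restaurant #1', end of story
```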

Google has long aspired to become such an AI dictator, albeit a benevolent one. Google's then-CEO Eric Schmidt freely admitted as much in 2005: "When you use Google, do you get more than one answer? Of course you do. Well, that's a bug. We have more bugs per second in the world. We should be able to give you the right answer just once. We should know what you meant." [3] We should know you better than you know yourself – that's the whole point here.

AI Oracle

In his book Homo Deus: A Brief History of Tomorrow, Yuval Noah Harari imagines the future of dating as a conversation between bots: "My Cortana may be approached by the Cortana of a potential lover, and the two will compare notes to decide whether it's a good match – completely unbeknown to their human owners." [4] Just as human beings outsourced their memories to the written word and began to rely more on archived material than on their own recollection, in the future they will trust that AI does a better job of processing important data than they ever will.

Algorithms won't enslave us, Harari postulates. Instead they will prove so good at making decisions that we would be crazy not to take their advice. He adds, "Once Google, Facebook and other algorithms become all-knowing oracles, they may well evolve into agents and ultimately into sovereigns." We will not know why Cortana pairs us with this lover and not that one, but anyone who entrusts his life to AI by allowing it to drive a car should by all rights trust it to steer him through life. There is no doubting that algorithms can process much more data much more reliably than we can. And avoiding automobile accidents and finding the perfect match are essentially matters of flawless data processing.

One aspect of flawless data processing is clear output. For this reason, although oracle 2.0 does nothing to dispel the opaqueness of its sources, it utters commands and instructions that are impossible to misunderstand. By contrast, when the oracle at Delphi told the Greeks, who were seeking protection from the Persians, "Though all else shall be taken, Zeus, the all-seeing, grants that the wooden wall only shall not fail," they were left to hit upon the idea themselves that what was meant was not the walls of Athens but the ships of a flotilla they should build. When the witches told Macbeth that "no man of woman born" could threaten him, he would have needed to know that Macduff had been delivered by Caesarean section.

With a classic oracle, you never knew which end was up. Even if you wanted to, you couldn't blindly follow the oracle's pronouncements. They required the recipient to engage in hermeneutics, to apply strange statements to his or her own concrete situation. The AI oracle Harari invokes does away with individuals' involvement in their own affairs. It doesn't admonish "Know thyself," as was written at the entrance to the oracle of Delphi. It does not encourage us to decode it. It simply says: do as I say and don't ask questions!

Digital Pantheism

The AI oracle's immediacy and lack of context accord well with its new location – or, better, its locationlessness. Prophecies no longer come from the mouths of witches in the woods or of a priestess on the cliffs of Delphi. They are pronounced constantly from all the computers that accompany and surround us – and, in the case of Alexa, inhabit our living rooms. The AI oracle doesn't create order in society from its margins. It is order, to be continually consulted and obeyed.

Insofar as we empower algorithms to work out everyday matters amongst themselves, the AI oracle does more than just speak. It acts for us. This is already the case with the playlists put together by Spotify and YouTube based on what we've previously heard and watched. It is one of the promises inherent in the Internet of Things and the smart city. At some point we will trust our refrigerators to order the right foods, having compared our preferences with our health data and, if necessary, consulted a nutritionist. At that point, even a refrigerator is nothing less than an oracle.

The AI oracle presents a double paradox. It puts an end to free will in the interests of self-optimization. And it re-enchants the world – paradoxically, by exploiting the world's absolute analyzability, for a world of unambiguous instructions on how to live, with nary a verifiable basis, is an enchanted one. It's as though God were speaking to us through his new priests, the algorithms. Harari, too, writes of a "cosmic data-processing system" that we serve with all the information we provide. It will be omnipresent and all-powerful. Human beings – this is how Harari's history of tomorrow ends – are predestined to be dissolved within it. A digital pantheism, we might say, if that were not a whitewash (or "mathwash") of all the bias and prejudice that determines what we feed into the black box.

[1] Blay Whitby, Reflections on Artificial Intelligence: The Legal, Moral and Ethical Dimensions, Exeter: Intellect Books 1996, p. 87.

[2] Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information, Cambridge, MA: Harvard University Press 2015, p. 3.

[3] Eric Schmidt in an interview with Charlie Rose, 3 June 2005 (https://charlierose.com/videos/17574). See Gregory Ferenstein, "An Old Eric Schmidt Interview Reveals Google's End-Game For Search And Competition", TechCrunch, 4 January 2013 (https://techcrunch.com/2013/01/04/an-old-eric-schmidt-interview-reveals-googles-end-game-for-search-and-competition).

[4] Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow, London: Harvill Secker 2015, p. 343.

Roberto Simanowski

Roberto Simanowski is a scholar of media and cultural studies and holds a Ph.D. in literary studies and a Venia Legendi in media studies. He is the founder and editor of the journal on digital culture and aesthetics dichtung-digital.org (1999-2014) and the author of several books on digital culture and politics, including Digital Art and Meaning (University of Minnesota Press 2011), Data Love and Facebook Society (Columbia University Press 2016 and 2018), Digital Humanities and Digital Media: Conversations on Politics, Culture, Aesthetics and Literacy (Open Humanities Press 2016), as well as Waste: A New Media Primer and The Death Algorithm and Other Digital Dilemmas (both MIT Press 2018). He has worked as a professor of German Studies at Brown University and as a professor of Digital Media Studies and Digital Humanities at the University of Basel and at City University of Hong Kong. He lives in Berlin and Rio de Janeiro, where he works as a media consultant, op-ed contributor (to Neue Zürcher Zeitung, Die Zeit, Salon and Deutschlandfunk Kultur, among others) and author.
