by Paul Ostwald
Google CAPTCHAs have become a quintessential ritual of the 21st century. Since you’re reading this online, you’ve probably spent a good share of your lifetime telling street lights from traffic lights in tiny, blurry images. If you manage to click the right pictures, Google confirms that you’re not a robot. By doing so, you’re training Google’s artificial intelligence to tell different objects apart in pictures. But it’s no longer just street lights and traffic lights in Silicon Valley.
In May 2018, The New York Times reported that Google had contracted its artificial intelligence technology to the Pentagon as part of “Project Maven”. The technology is used to identify objects – not humans, as Google ascertains – in drone images. Google’s CAPTCHAs improved a technology that enhanced the Pentagon’s ability to process drone footage. By proving your humanity, you contributed to the improvement of what many consider an inhumane war technology.
The military contract sparked internal controversy. Approximately 4,000 of Google’s own employees condemned the Pentagon contract in an open letter and demanded its immediate termination. Their plea was heard: Google did not renew the contract. The 4,000 Google employees were named Arms Control Persons of the Year 2018 by the Arms Control Association, a US-based think tank.
This dualism has been at the core of techno-ethics, the field linking technological advancement and ethics. It produces and reproduces the same questions over the centuries. Are machines morally responsible, or does all reasoning lead back to human decision makers? Should we blame the synthesis of artificial intelligence and war technologies, or Sundar Pichai, Google’s CEO?
How much can we allow machines to replace humans? These questions sketch technology as a force that needs to be contained for the good of mankind. But that’s only half the story. Enticing questions lie ahead if we leave aside doomsday scenarios. Will new inventions be able to solve the ills of mankind and the planet itself? Can technology even enhance our ability to act morally?
We could begin this story in Ancient Greece, or in Manchester, at the height of industrialization. But the enthusiasm for the seemingly infinite opportunities that technological progress held in store for mankind probably peaked in the early 20th century. As Yale’s Robert Post tells it, Henry Ford was at the forefront of a generation driven by an “obsession to fulfill new technological potentials simply because those potentials existed”. Invention for the sake of invention. Plastic, windscreen wipers, tea bags, vacuum cleaners and air conditioning were all invented between 1900 and 1910. But there were also technological achievements that changed the course of mankind for the better. In 1903, the Dutch doctor Willem Einthoven created the first practical electrocardiogram, which still forms the basis of today’s models. Some twenty-one years later, he was awarded the Nobel Prize in Medicine for adding an essential tool to the medical toolkit.
The moral dilemmas of techno-enthusiasm only became visible when the First World War revealed the lethal force of scientific innovation. On 15 September 1916, the British tank D1 became the first ever to engage in battle. Among the other novelties were flamethrowers. Both tanks and flamethrowers had been conceptualized before, but were only realized at this scale in the course of 1914–1918.
Aldous Huxley’s Brave New World (1931) and George Orwell’s Nineteen Eighty-Four (1949) bear witness to an increasing awareness of the dilemma. The simultaneous ascent of the entertainment industry had its own enemies, Martin Heidegger and Walter Benjamin among them. The techno-critics had arrived on the scene.
World War II reproduced the same question at the existential level. The discovery of nuclear fission by Otto Hahn, Lise Meitner and Fritz Strassmann in 1938 enabled the development of nuclear bombs. Their deployment over Hiroshima and Nagasaki revealed the magnitude of the problem. Mankind’s survival was at stake. The search for an ethical agenda to contain technological advancement was on.
Hans Jonas, a German-born Jewish philosopher and student of Heidegger, provided the most drastic strategy of technological containment in 1979. His ecological imperative is a simple request to inventors and users around the globe: “Act in a way that the effects of your action are compatible with the permanence of real human life on earth.”
In the past thirty years, this formulation has provided a guideline to ecological movements and has lately made a re-entry into academic philosophy. As human-caused climate change is a threat to the permanence of real human life on earth, battling it is morally imperative. This takes us back to the initial dilemma. If we think of our unbroken desire to fly, the technology that enables it exacerbates climate change. But it also provides the key to smarter living: reducing waste, developing reusable materials, and enabling communication that makes some journeys unnecessary.
What about Google? Its CAPTCHAs have largely been replaced by non-visual ones; the contract with the Pentagon expired. But the Pentagon’s battle for Silicon Valley’s artificial intelligence has only just begun. Meanwhile, the Berlin-based start-up Ecosia has created a way to fight climate change using online searches. The search engine yields just about the same results as Google and plants trees with the surplus generated from adverts. It claims to have planted more than 46 million trees as of January 2019.
That’s the dualism exemplified by two search engines. One plants trees, the other assists the military. Of course, reality is more complex. Assisting the military is not by definition reprehensible. The academic community is increasingly engaging with these questions. Stanford University, Silicon Valley’s non-artificial brain, is shifting paradigms. In June 2018, it announced a new initiative to expand the role of ethics in its teaching of technology. As Marc Tessier-Lavigne, Stanford’s president, told the Financial Times, “we are such important players, we should be doing the teaching” rather than letting society pick up the pieces.
An ethical recalibration of technological innovation has begun. Jonas and others provide a stepping stone into this vast territory of artificial intelligence, the Internet of Things, and algorithms. Definitive answers are improbable. But with all the artificial and non-artificial intelligence invested, there’s a good reason to remain moderately optimistic.