Technology, AI and ethics.

“Am I just one more gear in the mechanism?”

Interview with Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford

Alexander Görlach: We hear a lot about enhancements of the human body and new potentials. The concept of the “human” is changing. How do you think our idea of the “human” will be altered in the years to come?

Luciano Floridi: If you want to simplify, you can see that there are three main trends at the moment.

One is the highly commercialized view of what AI can do to transform human nature. Most of the time these transformations look incredible; they are often advertising stunts. I understand why: in the culture we live in, you need to promise a lot. You need to be very optimistic and push the boundaries of what is credible. You have to make yourself visible so that you may achieve funding or commercial success.

The second view is a bit more reflective and a bit more cautious. It comes from a slightly more academic perspective and is more research oriented. Instead of Humanity 2.0, it’s more about enhanced, supported, facilitated human features, human behavior, and standards of living.

The third view I find particularly interesting. It’s more philosophical and reflective. It’s not about the hype in AI; it’s not about what is up and coming. It’s more about holding a mirror up to ourselves, allowing us to discover and understand our own nature. In this area, there is a lot to be said and understood. I think that new forms of artificial agency will definitely transform our personal understanding of ourselves.

I think there is a lot of truth in that: we don’t know yet exactly how this is going to pan out. But, looking at the past, I can imagine that it may make our lives incredibly easier, safer, longer, and more enjoyable. AI will have a huge impact in time. Imagine the level of impact once AI is developed within the health sector to enhance our everyday health and well-being.

Alexander Görlach: Can you expand on what you mean by these new forms of agency?

Luciano Floridi: If you look at some of the most visible headlines about digital transformations in the last few years, you will see a lot about politics and artificial intelligence applications. In both cases there are forms of agency that are being transformed.

Take political agency: how we make decisions, influence each other’s behavior, and come to a consensus. Or how we develop dissenting views through propaganda and how nudging transforms ideas and inclinations. All this has been hugely influenced by artificial intelligence. Take, for example, the bots unleashed during elections. Clearly, the arrival of AI has begun to introduce a new age in which we think about what we delegate to machines.

Now, in the past we delegated anything that had to do with our own muscles, so to speak. So, this was: lifting, moving, changing, transforming, transporting, creating, and modifying the world. And now we have started delegating things that would require intelligence if done by a human being, such as making decisions, suggesting what to eat for lunch, or searching for the best route from here to there.

All of these things used to be done for us, among us, by ourselves. But this new form of artificial agency is there to give us a hand, if we design it properly.

Alexander Görlach: Many people already use these kinds of applications without seeing the larger picture. To give a parallel: when cars were first invented, someone had to stand in front of them with a flag to warn people that a car was coming. But over time, things settled down and we became used to cars and to having to watch out for them.

We may be eager to use some of these applications, but we live in an age of identity where many issues revolve around our core personality and narratives. We are now in an information bubble and excited about these developments, but we also often oppose them.

Luciano Floridi: Going back to your point about the car, we are looking at a similar magnitude of transformation.  The novelty is similar, but it is now much less visible. When we were transforming the world through kinetic technologies such as the car, new transportation methods, and even dishwashers in the kitchen, it was immediately perceivable. That’s because it was in a 3-D analog environment.

Digital transformations are way less visible. We are analog, biological entities. Although we are spending an increasingly large amount of time online or in a digital space, we do not immediately see that we are interacting with an artificial agent.

You can go to a conference and ask someone: “Have you interacted with any AI?” And they might say, no, of course not. But then you ask them: “Have you ever followed advice from Netflix about what to watch, bought something recommended by Amazon, or taken a photo that was impeccable even though you were moving?” These are three trivial cases of artificial agency in which Netflix, Amazon, and smartphones are interacting with humans. We are already there, we just don’t see it. There isn’t a man holding a red flag telling us that we are interacting with AI.

This is crucial because it also justifies some lack of preparedness. We’re unprepared because we don’t see it all the time. If you see something coming at you, you start getting ready. But AI is almost like a ghost. It hits you all the time, but you just don’t see it. Something is hitting you, and it’s like one of those movies with an invisible man. You turn around and wonder, “Who did that? Who is responsible?” This is important because AI has the magnitude of major revolutions, such as the Industrial Revolution. But its invisibility is unprecedented.

Now, with that context, sectors like the job market, the entertainment industry, the pharmaceutical industry, and so on are struggling with AI’s deep impact and invisibility. We talk so much about AI because it is deeply, profoundly affecting us, and yet we don’t see it, so we talk about it as if it were a ghost. It’s there, it’s slapping you, or maybe giving you a hand, and yet you don’t see it.

Alexander Görlach: I am reminded of a book from 1970 called Future Shock, by Alvin Toffler and Heidi Toffler. They say there comes a time when disruption and development are so fast that even the elite cannot keep up. Now our changes and disruptions are exponential in terms of what we see. It isn’t linear, one step in front of the other. One invention can bring a multitude of new options into your life.

Do you think we have reached this stage predicted by the Tofflers?

Luciano Floridi: I think there is some value in this perspective. With hindsight and decades of development, we can refine it a little bit. Disruption and innovation, in a way, happen more quickly than in the past. But, if you look at some of the major innovations in society, they haven’t been so frequent.

A recent example is Apple, which has been relying on the iPhone for a number of years. There have been little things added here and there, but the big disruption of Steve Jobs’s introduction of the iPhone has not been repeated. There has been nothing of that magnitude, no new stage. Not to belittle the frequency of innovations, but big changes of that transformative magnitude are rare.

However, what I think is true is that the effects of these transformations percolate and interrelate. Sometimes people speak of twin technologies. Well, we are looking at a digital family where every innovation is another twin. When a society that is not used to mobile phones introduces the mobile phone, all of a sudden that transforms everything else. Every ten or fifteen years, something new has so many effects that it becomes systemic. The systemic effects have been increasingly transformative and have an impact on our environment. Each and every transformation has unexpected consequences. As far as I’m concerned, that is where the unpredictability of transformations and innovations lies.

It’s not so much us saying, “Who could have ever predicted face recognition software or virtual reality?” Well, anyone could have predicted it. We have been discussing this for years in research and development contexts. Virtual reality and augmented reality have been around for decades. Only now, they may finally hit the market. The discussion about them has been long, detailed, and profound.

But imagine when you put these technologies together with the ones already available. Then something extraordinary comes out. Imagine putting together something like online advertisements, artificial intelligence, and virtual reality. At that junction you have so many variables that it becomes unpredictable.

So it’s not so much the innovative steps forward that are unpredictable. Instead, it is more the horizontal intersections and interactions between them. They have become so systemic and complex in the technical sense, with variables changing each other in unbelievable ways. We sometimes cannot guess what’s going to be next, not in terms of innovation, but more in terms of interaction.

Alexander Görlach: You’re absolutely right. It works in ecosystems. To make something like facial recognition work, you need services that can process and transmit that high volume of data and information. That’s unrelated to the task at hand, but you still need the infrastructure. It is exponential growth rather than linear. Previously you could see one step evolving out of another, but now a development in one sector may trigger another, and they become connected.

Luciano Floridi: Yes, it is the ecosystem itself that is unpredictable and the interactions that intersect it.

Alexander Görlach: When we talk about the ethics side of things, you can really manipulate, direct, and inform human behavior by means of AI. You can say that an AI is good because it builds upon what you like, such as Amazon recommending a new book release. But we have also seen it go the other way, which is not so good, such as trying to improve court verdicts by looking at other cases. If you do that, you are just perpetuating your biases, racially-charged judgements for example.

So, what do you make of the ethical side of things? We are both aware of the Moral Machine at MIT, where they tried to figure out moral decisions and which entities self-driving cars should hit if they had to choose. It tried to implement an ethically-informed system in self-driving cars.

This is the core of ethics to me: making informed decisions that can prevail in the eye of conscience. So we may have crossed several lines just by developing new technologies.

Luciano Floridi: I think you’re right. It’s another huge topic. We are looking at something unprecedented; it is something we haven’t dealt with in the past. There are a couple of dimensions here that are worth highlighting.

What is the one-to-one relationship that an individual may have with a given recommendation system? Does it try to help you, or does it try to nudge you in a particular direction, thus affecting your behavior? That in particular is something we’ve been discussing for a few years, and it is with us today. It should be discussed with the social dimension in mind at the same time.

Remember what we said at the beginning about new forms of agency. Imagine the simple case of recommendation systems. They are not affecting just me; they affect thousands and thousands of people at the same time, in the same way most of the time. They are social forces.

They also affect our moral education. In terms of moral behavior and being in charge of one’s own decisions, it’s always been a struggle.

But imagine you have something difficult to do, which is making an informed, reasonable decision about a moral dilemma or moral question. That has always been difficult. Today, it’s a little bit easier to delegate. All of a sudden, this struggle, which is part of developing a moral life, is paradoxically more difficult because it is easier to avoid making difficult decisions.

Let me give you a mundane example. Let’s say a couple of friends are choosing where to go out to eat on a Friday night. In the past, they would talk it out. They would say something like: “You like Chinese food, I like Italian food, we need to negotiate this.” Even with something as silly and everyday as this, there was still a bit of a struggle: they had to engage in some decision-making process, perhaps exercising fairness, toleration, and respect for the other. There was no way out. They had to come to a decision and modify their behavior.

But now, you can have a third-party app that recommends Chinese restaurants downtown. Well, the friends can simply delegate to the app and both be happy. So, they don’t have to struggle anymore. The absence of that struggle is part of the problem. It’s not just the behavior that’s being affected and the nudging that comes from artificial intelligence. It’s also the removal of the moral friction that makes each of us a little bit better at making the right decisions.

Let’s apply the analogy to a driverless car. Well, if the car doesn’t work, I have no idea how to drive anymore. I cannot make it work; I never even learned. I delegated the whole process. In a world of only horses, I don’t know about you, but I would be lost. I have no idea how to shoe a horse, not even if you showed me a YouTube video.

I think there is a moral skill that comes with the facilitation of decisions taken by AI-based systems that we should be a little careful about. I don’t want to make it a big deal, but it’s important.

So imagine now, we are back to the problem of the driverless car. In a scenario where my navigator tells me how to go from A to B, I may lose the ability to consult a map. And if the navigator stops working, I may be utterly lost.

Alexander Görlach: Traditionally, you would say that the moral feature of us as human beings is that we take responsibility for our deeds. This is why we have religious and secular institutions that safeguard our moral agency. Typically in debates such as this, we talk about robots taking over. But I think you are right in that maybe we are losing what we have in the past perceived as being “human”. Even if what is to come is not dystopian, there is also a disconnect with our human past.

What lies ahead in this development? What is the answer to Immanuel Kant’s question for the generations to come if we lose our moral agency as the core feature of what defines us as human beings?

Luciano Floridi: The keywords in this conversation are intentionality and responsibility. Moral discourse does not take off, or even begin to become intelligible, without intentionality and responsibility.

Intentionality comes first. You cannot be responsible for something you had no idea you were doing. Yes, you may still be guilty, but not responsible.

Let us say there is a light switch in my home connected to a bomb and I, unknowingly, flip the switch. Maybe someone will blame me for having turned on the light in my house, but it would be silly to accuse me of mass murder just because someone evil connected my switch to a bomb. I was just turning on the light in my house. I’m just part of the mechanism. There is no moral discourse because there’s no blaming or repentance. There’s no awareness of what’s going on, therefore there is no intention and no responsibility.

Now, this, to me, for any foreseeable future and possible understanding of technologies, is absolutely crucial. There’s a narrative about what we have today that would see us dethroned from this particular position and remove us from the moral responsibility and intentionality game.

Back to Kant and the idea that humans can never be treated only as a means to achieve a goal. If I use an individual as a mechanism to do something, I am dehumanizing that individual. As in my example with the light switch attached to a bomb, if I am being used as a means, then I am no better than a little robot.

It would be horrible if digital technology were, perhaps inadvertently, increasingly used to transform humans into mere robots.

When I am being pushed and nudged and recommended things by AI, am I just one more gear in the mechanism? If that is what is happening, that is the problem. So, it is no longer the removal of responsibility or intentionality; it is the transformation of the receiver of the action into just another piece of junk. At that point, I am also totally replaceable.

It’s not about you; it’s not about me. It’s about a switch or a timer or a silly human being who is going to turn it on. In a sense, I think we are looking at a very complex picture because some of the AI we are considering is turning humans into means, rather than ends, but at the same time we are also removing the context.

A person who works at a very repetitive job, always making the same movement, always the same input and the same output, is not really being treated as a human being. That person becomes a mere means to an end. If I put a piece of AI there, there are other problems. But on the moral side, I think we are doing something better than before: we are replacing a human immorally used as a mere means with a technology rightly used as the means it is. So it’s a very complicated picture, but I think the Kantian point can help make it clearer. The debate should be about any mistaken shift in responsibility and intentionality, but also about any correct replacement of human agency with artificial agency, whenever this is a mere means and not an end in itself.

Thank you, Luciano Floridi!

Luciano Floridi

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, where he is also Director of the Digital Ethics Lab of the Oxford Internet Institute and Professorial Fellow of Exeter College. He is a Turing Fellow of the Alan Turing Institute and Chair of its Data Ethics Group. Photo credit: Ian Scott
