Technology, AI and ethics.

“If we think of ourselves as someone who can actually make a difference, then there’s some motivation to change the world”

An interview with Dr. Pak-Hang Wong, philosopher of technology in the Research Group for Ethics in Information Technology at the Department of Informatics, Universität Hamburg

Alexander Görlach: What would you say are the biggest ethical implications of AI?

Pak-Hang Wong: Artificial intelligence, as we understand it, comes with two types of ethical problems. One is about the replacement of human beings. This goes to the foundation of what human beings are and what they can do when AI replaces them.

The other is the question about the use of AI and the implications or consequences of using it. It involves topics such as discrimination and privacy. They are two different levels of questions: one is more ontological, the other is more about application. That is what I think is currently going on in the ethics of AI.

Alexander Görlach: When you say it’s ontological, it makes me think about the history of science and philosophy. It was fruitful for people in the 16th century to think of humans as machines, in line with the inventions of their time, and now we think of humans as computers. That seems to be a popular trend of thinking.

Pak-Hang Wong: Right, but that’s how we usually think of technology. It starts with us using a certain technology; then we come to see ourselves as being like that technology.

Alexander Görlach: What do you think has caused this chain reaction in the past few years?

Pak-Hang Wong: One thing, I think, is that people are beginning to realize that what they are doing may not actually achieve what they want with regard to their vision of AI.

The other is the public backlash from all these different incidents, such as Facebook’s emotional contagion experiment, Cambridge Analytica, and fake news. I think, in the beginning, there was a rosy, imaginary image of what AI can achieve, like how it will replace us, help us, and benefit us. Then, once it hits the market, it does not really realize that imaginary image and only works along with how the markets work.

So, I guess the chain reaction begins once people realize AI can be problematic: they back off from the initial imaginary image of good AI and begin to consider what kinds of consequences AI will have.

Alexander Görlach: Have we had the wrong expectation management, then? Clearly, we have all had an experience such as typing ‘London trip’ into Google and then receiving ads for London for the next eight weeks, even though we have already booked the flight.

Pak-Hang Wong: That’s not just expectation management; it is that we have an inaccurate image of AI. When you say ‘expectation management’, you seem to suggest that there is something true in what we expect from AI, when the problem is really one of false expectations.

Alexander Görlach: Has there been a tipping point you can point to and say: this is what happened that changed the way we think about technology?

Pak-Hang Wong: That I am not sure of. Actually, I always use the analogy of climate change to describe AI and ethics. In earlier days, people denied climate change and ignored its negative consequences. But at a certain point, say, in the past ten or twenty years, people started to realize that climate change does actually cause real and serious problems. I think the same is true for AI.

Part of the reason is that people start to think about their children and what is going to happen to them if we let AI and other technologies dominate our lives and decisions. I have heard stories of Silicon Valley people asking their kids not to use smartphones. This is a signal for the rest of us: if the creators and makers think AI and technologies are problematic, then we probably need to think carefully about them as well.

Alexander Görlach: Now we are moving in a new direction, with algorithms helping us make decisions. Clearly, sometimes this is helpful, but might it eventually make us unlearn decision-making and, therefore, critical thinking? What is your take on that idea?

Pak-Hang Wong: I agree, and I am currently working on a book on this subject. Algorithms can be problematic, but part of the problem with the current discussion and critiques of algorithms is that they always seem to be directed at either the company (like Google) or the developers, but not so much at the users. I do think we need to raise awareness that we as users need to do something as well.

Without that part of the reflection, the critique of AI will always be incomplete. We need more critical reflection about ourselves, and we need to be able and willing to look at these types of questions as well.

With or without algorithms, we need to stress the importance of being critical about our own responsibility as users. But that is not happening yet.

Alexander Görlach: Is that a matter of internet literacy, or is it something more ontological?

Pak-Hang Wong: It’s ontological in the sense that we need to consider the kinds of relations we have with others through machines and the kinds of beings we want to become. Knowing how machines work simply won’t do; instead, we need to know how we can and should relate to others through machines and what kinds of beings we should be.

For example, if I do a search via Google, my search has impacts on other users as well. This relation between me and other users ought to raise a unique kind of awareness of how we are using that particular algorithm. That is a form of empowerment for me. This is how we ought to engage with AI and with algorithms.

There are different things we can do. We can change how search algorithms work – that may be one example. Another, perhaps, is that we can change the infrastructure. That requires technical knowledge, of course, but it also requires individuals to have the willingness to change and tinker. So, I think it’s not just internet literacy, media literacy, or data literacy, but also the ability to change and hack the system and the willingness to do so.

Another analogy I sometimes use is that algorithms are like public policies. They limit or enable our actions in certain ways. But public policies can often be challenged by the public. Algorithms, in most cases, cannot be challenged, because we either don’t know how they work or they are controlled and maintained by companies or by the algorithms themselves. So, it is important to find ways to challenge them and to open them up for deliberate discussion. But then, it is our responsibility, and it returns to the point that we are not being self-critical enough.

When there are enough people to change and tinker with AI and algorithms, it’s not just about semantics but about the power of the community as well.

Alexander Görlach: You mentioned the community factor earlier. What would have to be in place for the community to facilitate that change?

Pak-Hang Wong: For one, I think we need to take responsibility and take it seriously. We are not just passive users, but rather active contributors to the outcomes of AI and algorithms. If we take this seriously, then it should provide us with some motivation to act.

If we see ourselves simply as consumers, then we are probably not going to change anything actively. But if we think of ourselves as someone who can actually make a difference, then there’s some motivation to change the world.

Alexander Görlach: One of the achievements of modernity is the ability to think of people in different roles. The role of the consumer is not the same as the role of the citizen. If citizens use the means of the internet as if they were consumers, then we land at Cambridge Analytica and all the problems that have the power to shake the foundations of our society. We have to think more holistically and think about the impact we make.

Pak-Hang Wong: For me, that is not modernity; that is post-modernity, or more the idea of hyper-modern societies. We may need collectivism or, as you said, a holistic spirit instead of hyper-individualism.

I think we need to expand ourselves. I also do research on Chinese philosophy, and in Chinese philosophy the idea of the self is relational; this idea of the relational self helps us think beyond individualism. When we act, we need to think about the impacts on our networks of relations. Say, when I search via Google, I need to think about the implications my search has for others as well.

Just thinking in terms of ‘citizens’ and ‘consumers’ won’t really help, because these roles are not relational in the strong sense. My stress is on relations and on how to think beyond ourselves.

Alexander Görlach: Do you think Chinese philosophy is more helpful for where we are now compared to the Western framework? Is it better to come from a more collective standpoint rather than an individualistic one in order to tackle the problems of a networked internet?

Pak-Hang Wong: It is a good framework to explore, but like any framework, it would become problematic if taken to the extreme. At this point, though, I do think it is useful to think beyond the Western, and very often individualistic, perspective. Whether we can rely solely on Chinese philosophy, Confucianism, or other non-Western perspectives is something I have reservations about.

Alexander Görlach: I am happy you brought up China, because I see a variety of options for where China’s future can go. It can become a technological hub, or it can turn whole cities into prisons.

Pak-Hang Wong: They are building on a collectivist ideology, but it may go too far. If China orients itself towards networks of relations, as in Confucian philosophy, but at the same time puts sufficient emphasis on human rights, the future should be interesting.

Of course, there is always a debate about how Confucian China really is right now. But then, there is much to be learned from different traditions. And, surely, not all of China is Confucian.

Alexander Görlach: Of course, much in the same way as Europe is not Christian. It’s not as easy as saying ‘all of China’ or ‘all of Europe’. If you look at America and China, interestingly enough, they have a similar approach to data. The difference is that in America it is companies that do it, and in China it is the government that oversees it. Europe has a different idea of privacy. What do you make of these three blocs? Where is Europe’s position?

Pak-Hang Wong: I am not an expert on this, but I see three different philosophical orientations at work. For China, it is a hierarchical, authoritarian tradition, in which the government takes care of things for the people. For the United States, it is more of an individualist, utilitarian, free-market orientation. In Europe, it is egalitarian thinking, in which human rights figure more prominently, and Europe asserts values such as human dignity and human rights.

The GDPR is grounded in this particular understanding. These different philosophical orientations seem to me to explain why there are different directions in thinking about how to regulate AI and data technologies.

For Europe, I see two possible futures. If Europe is sufficiently strong, and companies need to, and want to, work here, then the companies will have to abide by European laws and regulations. The alternative would be determined by the global market: if people are only acting like consumers, then they are going to choose the things they like. So, if China or the US creates better AI, it is imaginable that people will just use the technologies developed by China and the US.

Alexander Görlach: What excites you the most when you compare traditional philosophies and apply them today?

Pak-Hang Wong: I have worked on the ethical issues of different technologies, and what interests me most are the two ways of thinking about the self and the solutions that come from those ways of understanding the self. For example, the individualist self often frames problems about technologies in terms of individualistic values, such as privacy. The Confucian perspective may frame the problems in a more holistic sense.

So, things that are surely unacceptable on an individualistic understanding might be acceptable from a Confucian perspective. For example, the idea of a social credit system may be viewed as a straightforward violation of individual freedom, but some may find it good because it promotes security and stability in the community.

Pak-Hang Wong

Pak-Hang Wong is a philosopher of technology in the Research Group for Ethics in Information Technology at the Department of Informatics, Universität Hamburg, where he examines social, ethical, and political issues of algorithms, big data, artificial intelligence, robotics, and other emerging technologies. He is the co-editor of Well-Being in Contemporary Society (Springer, 2015) and has published in various academic journals. At present, his research focuses on the challenges digital technologies pose to our understanding of moral responsibility and to the practice of virtue cultivation.
