Technology, AI and ethics.

“Virtue ethics asks that in order to do good, you be good.”

Interview with Dr. Robin K. Hill

Alexander Görlach: One of your articles that I found very appealing, “Ethical Theories Spotted in Silicon Valley,” is about ethical scenarios in Silicon Valley and what ethics to apply to a new product.

Robin K. Hill: In that article, I created a fictional scenario, assigned high-tech entrepreneurs in Silicon Valley to it, and then criticized them for it. It was somewhat contrived, somewhat unfair; I imposed a stance on them. I feel that if they had a choice of ethical theories, they would choose virtue ethics. But they are not practicing virtue ethics in the way it should be practiced.
Virtue ethics requires that in order to do good, you must be good, whereas I think in Silicon Valley, they think that they are good and therefore whatever they do is good.

Alexander Görlach: You give the example of Facebook. I think in the beginning, when you have a new product, you may not be fully aware of the consequences it may entail. It’s more about the perception of consequences than about virtues.

Robin K. Hill: Yes, and yet, people focus on Mark Zuckerberg, perhaps unfairly, but he is a sort of lightning rod in this area. I think that the consequences have become apparent, but he attempts to solve the problem of social media by increasing social media. I don’t think there is a sophisticated grasp of how to address the consequences, even though the consequences might be recognized.

I do feel that high-tech, especially artificial intelligence and social media, shares a flawed impetus: the assumption that increasing the amount of the service or platform or activity will somehow solve the problem.

Alexander Görlach: One of the virtues that Facebook had in its beginning was re-connecting people, and that’s a good value to aim for. But then somewhere on the way, you realize that it isn’t that simple. At what moment should you reflect upon what you are doing?

Robin K. Hill: I think that the whole world needs to address that problem, not just social media companies, and not just high-tech like in Silicon Valley. The question you raise is exactly the right question. At what point should it be recognized that this is not playing out with the benefits that were anticipated in the beginning?

My view is that to some extent, we should just say no. There are many attempts to answer the questions raised by deep learning and recommender systems, and meet the criticisms by saying that we just have to improve the product. But there might be something fundamentally wrong with that entire endeavor.

For example, the parole system called COMPAS was highly criticized in an article in ProPublica and got a response from the vendors of the system. COMPAS takes a lot of data about people who are applying for parole, who want to live in the community after their prison sentence. A checklist, some kind of form, is submitted for them, and COMPAS recommends whether or not to grant this person parole.

It turned out that there was some sort of creepy racist bias inherent in the recommendations. The program does not expose the data it uses. We can assume that if it was taking data from past parole recommendations, it simply reflects the biases that were already there. I don’t think that race is an explicit characteristic that is recorded, but of course, there are proxies for it, such as neighborhood, zip code, income, schooling, and those kinds of fine-grained distinctions that we understand to all be part of a social package of class distinction.

In a sense, there is no way out of that, given that particular model of processing. Given that you think past recommendations are adequate, you will keep repeating the same mistakes. If you attempt to alter that in some way, or somehow adjust the criteria so that they are rectified, then you lose all the data. You lose the connection to the data you already have. So, my solution is don’t use them. Simply don’t use them.
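
To make the proxy problem concrete, here is a minimal sketch in Python, with synthetic data and invented feature names; it is not drawn from COMPAS or ProPublica, only an illustration of how a model trained on past decisions can reproduce a bias it is never explicitly given.

```python
# A minimal sketch (illustrative only, not COMPAS internals) of how a model
# trained on past decisions can reproduce bias through proxy features, even
# when the protected attribute itself is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# Proxy features that correlate with the protected attribute,
# e.g. a neighborhood score and an income score.
neighborhood = rng.normal(loc=group * 1.5, scale=1.0)
income = rng.normal(loc=-group * 1.0, scale=1.0)

# Historical decisions: partly merit, partly bias against group 1.
merit = rng.normal(size=n)
past_grant = (merit + 0.5 * income - 1.0 * group
              + rng.normal(scale=0.5, size=n)) > 0

# Train only on the "neutral" features -- no group column anywhere.
X = np.column_stack([neighborhood, income, merit])
model = LogisticRegression().fit(X, past_grant)

pred = model.predict(X)
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: predicted grant rate {rate:.2f}")
# The rates differ: neighborhood and income act as proxies for group, so the
# model has learned the historical bias without ever seeing the group label.
```

Nothing in the sketch depends on the particular numbers; the point is structural: as long as the label is yesterday’s decisions and the features carry proxies, the bias survives.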

Alexander Görlach: When we talk about AI in the courtroom, it has been a rather short period of time between trying it and realizing where the flaws are. We realized very quickly that we cannot use biased documents from the past for court rulings. If racial bias was present in past rulings, which we know it was, we don’t want to perpetuate those biases in the future. At the moment, I think we have a good chance of defining what data actually would be accurate data for ethical assessments.

Robin K. Hill: Yes, but I think the point I would like to make is that philosophy can offer a perspective that addresses whether the entire model is ever going to work or if it has some inherent fatal flaw. We see these kinds of things in resume scanning.

You are probably familiar with Amazon’s resume scanning failure, or at least the fact that they gave it up because of course people tend to hire the obvious candidate. The less obvious candidate drops out and the system perpetuates itself. So how can data possibly solve that problem? That is a question I would like to see taken more seriously. Within that kind of cycle, that vicious circle of reasoning, where can additional data make a difference? Or is it time to simply move on to something else altogether?
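
Purely as an illustration, and not Amazon’s actual system: a small sketch of why data gathered inside that loop cannot break it. If the training label is simply “was hired before,” the model learns the old preference, and the candidates it screens out never generate any evidence about how they would have done.

```python
# A minimal sketch (invented traits, synthetic data) of the vicious circle
# in resume screening: training on "who was hired in the past" reproduces
# whatever preferences past hiring had.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

# Two candidate traits, assumed independent: a 'conventional background'
# signal (prestige school, familiar career path) and actual ability.
conventional = rng.normal(size=n)
ability = rng.normal(size=n)

# Historical hiring favored the obvious candidate: mostly conventional
# background, only weakly ability.
hired = (1.5 * conventional + 0.2 * ability
         + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([conventional, ability])
model = LogisticRegression().fit(X, hired)

weights = {name: round(float(w), 2)
           for name, w in zip(["conventional", "ability"], model.coef_[0])}
print("learned weights:", weights)
# The model reproduces the old preference. Worse, once deployed, we only ever
# observe outcomes for the candidates it lets through, so no amount of data
# collected inside this loop can tell us how the rejected, less obvious
# candidates would have performed.
```
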

Alexander Görlach: What would that something else be in your opinion?

Robin K. Hill: I think that something else is going back to hiring a bunch of skilled people to do it themselves.

Alexander Görlach: When you look at job requirements, with the increasing capacities of data, the search becomes more sophisticated. It becomes complicated to find the right choice. Algorithms are quicker than us at linear, simple assignments. But having more sophisticated capabilities leads to more sophisticated questions. For instance, look at the car engine nowadays. It is much more efficient than it used to be 50 years ago. I wonder if we are not in a constant iteration of our work with data.

Robin K. Hill: That is another possibility that could be explored. My research subject is the philosophy of computer science, and I think that these are interesting models to pursue and to try to put into some sort of terms that make them analyzable and somehow encapsulated, so that we can talk about them and see what they are.

One of the models that I suggest is to just give up, declare failure and go back to human judgment. Human judgment is able to incorporate more enlightened views of personal talents and skills, which I do not think we can incorporate effectively just in data. So that is one thing we would want to come to terms with, or put in terms. But your idea – perhaps you are suggesting some sort of dynamic and continuous refinement through feedback – is also something that would be interesting to try to phrase and package so that we can discuss it and compare it to others.
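
One way to start phrasing and packaging the refinement-through-feedback idea so it can be compared with the alternatives is a human-in-the-loop sketch like the one below, with invented names and an assumed perceptron-style update; it is a model to discuss, not a description of any deployed system.

```python
# A sketch of "dynamic and continuous refinement through feedback":
# the machine proposes, a person decides, and every override becomes
# a training signal. Names and the update rule are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class HumanInTheLoopScreener:
    weights: np.ndarray
    learning_rate: float = 0.1

    def propose(self, features: np.ndarray) -> bool:
        """The machine's suggestion: a simple linear score."""
        return float(features @ self.weights) > 0.0

    def record_decision(self, features: np.ndarray, human_decision: bool) -> None:
        """Treat the human's final decision as the label and nudge the
        weights toward it (a perceptron-style update)."""
        if self.propose(features) != human_decision:
            direction = 1.0 if human_decision else -1.0
            self.weights = self.weights + self.learning_rate * direction * features

# Usage: the model starts with some initial weights; a reviewer corrects it.
screener = HumanInTheLoopScreener(weights=np.array([0.5, -0.2]))
candidate = np.array([0.3, 1.0])
print("machine proposes:", screener.propose(candidate))
screener.record_decision(candidate, human_decision=True)  # reviewer overrides
print("after feedback:", screener.propose(candidate))
```

The design choice being packaged here is simply that the human judgment stays authoritative and the machine adapts to it, rather than the other way around; whether that actually avoids the problems discussed above is exactly the kind of question worth comparing across models.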

This is an interesting part of AI that has been neglected from the start. Simple questions, such as “Is the human brain a computer?”, are interesting questions, but they are not the right questions. The right questions are out there, or they are in us, but somehow they need to be addressed. We need to ask, “What does that mean?” more often. We need to ask, “What are the other possibilities?” We need to ask what will lead us in one direction or another.

Alexander Görlach: You were saying, this is me rephrasing it, that data cannot replace our own personal quest for good. It is us human agents who have to come up with what we think is conclusive ethical behavior.

Robin K. Hill: Yes, and it might be the case that data is it, that the world really is only data.

There is the idea (“It from Bit”) that existence is based on data. But I don’t think that we are going to get there directly. I think we have much more to look at, much more to consider than just trying to address that question in a yes or no fashion.


Alexander Görlach: We create a layer over this ontologically existing world in order to understand it better. There is the possibility that everything can be explained in mathematical terms or approximations. Technology can only do what we program it to do, but it is not going to become a human, ethical agent by itself.

Robin K. Hill: I just don’t think that ethical agency or general artificial intelligence is anywhere near realization. I know I differ on this from several famous people but I don’t think I differ in this from many computer scientists, who, like me, hear such claims and think they’re rubbish. I can’t speak for all computer scientists, of course.

I started a PhD program sometime in the 1980s. Therefore, I have been through a couple of hype cycles about AI.

Natural language processing remains fascinating. There have been many years of effort in generative grammar and deep semantics. None of that led to the sudden emergence of a natural language speaking program. It turns out that deep learning, pattern matching, and pattern meshing yielded huge results, as we can tell by Google Translate. It’s just amazing! But it’s not the “understanding” that we had been led to anticipate.

What sort of comprehension could we assign to Google Translate? What are the qualitative differences between that and the kind of comprehension that we were looking for in the first place? Some natural language processing has gone back to trying to capture the actual semantics in terms of symbols and relationships.

The question that guided us for a long time is: how does this relate to us and to how we use language? That might have been misleading. First of all, the assumption was that computational language processing has to be how we do language. The misleading assumption was that the more we find out about computational language processing, the more we will understand about our own language processing. Maybe they are different things; I just don’t know. These are all questions that are in the cracks of the research area. And these questions should be articulated and investigated more widely. Every failure invites research.

Alexander Görlach: We do not fully know how a child deploys its capability of speaking its mother tongue. For now, when we talk about how technology mimics what we do, clearly it is doing something else, in a very quantitative way that we could never match. We could never read a thousand books and remember the correlations of all the words between them.

Robin K. Hill: We don’t have the capacity to be Google Translate.

Alexander Görlach: When we discuss the notion of semantics, I think about facial recognition technology and how it might decode our emotions while we speak via our facial expressions. That is another layer of creating semantics for a system that does not have semantics. We are investing so much into systems that try to mimic us step by step. Should we give them up if they do not help us?

Robin K. Hill: I can’t criticize people for investing time and effort into researching those things. Those things are very interesting. But sometimes there seems to be the inherent assumption that it is an important thing to do, or it is the path we are on, and thus we should stay on it. No, I don’t think that’s the case.

I have skepticism about AI, both its goals and its methods, whether they’re really relevant or useful or not. There are many things that the computational paradigm does not explain. In some quarters, the assumption remains that we just need more and more AI and then all of a sudden, all the blanks will be filled in.

But another problem is that, for anyone who is skeptical about AI, it seems to others as if the only alternative is God or the soul or magic to explain how we function in the world. I don’t think that is the case at all; I think there is just something else we do not understand. It’s not all computation. I’m a computer scientist; I love formal systems and the mathematical part. It’s wonderful and beautiful in its own right, and yet I think that there might be something else, something that’s not mystical, that we have not yet tried to delve into.

Alexander Görlach: We have many hubs that lead to lots of other questions. For instance, Bitcoin raises questions regarding governance and trust. We are in the middle of so much change, and even though we are missing a component, we still have to figure something out. So, what is the short-term solution for applying virtue ethics?

Robin K. Hill: One solution is to recognize that even with systems that work really well for their narrow purposes, we still have a choice about whether to use them or not. That is to say, systems that seem to fill a niche should face hard questions about whether they actually work to the degree we would like them to and whether their purposes are justifiable, especially in the case of nuanced human judgments. The ethical imperative might be to say no, let’s not do that, let’s not use that.

This is, of course, an ivory tower imperative, because companies are supposed to make money. Therefore, if they have a product that will bring greater revenue to their customers, am I telling them not to produce it? I don’t know. Am I telling the customers not to buy it? I think so. I don’t think the parole recommendation system should be used. I wrote another post in the blog series you mentioned at the start (Blog@CACM) on this subject: the odd conflation of industrial momentum and social norms that leads to a tacit imperative, which I call the “Artificialistic Fallacy.”

Alexander Görlach: A recommendation to purchase something, like Amazon makes, and the parole recommendation system have very different gravitas. We may have to decide as customers if we want to be on Amazon and give in to the recommendations, whereas I feel we have no choice when it comes to the parole system, aside from voting for parties that will prevent such things from happening. So there are different fields of application.

Robin K. Hill: Yes, and different degrees of importance. In the case of Amazon or Netflix recommending films for us, we can decide not to do that. We can decide to look elsewhere, go to the library, or ask a friend.

There are different levels of sophistication in user interfaces, as well, making some of them more visible and easier to turn down. But I think that people at every level should simply recognize that they can turn it down. They can say no.

I am surprised at how many people think they need a smart speaker. I see them in people’s houses and I wonder why. Is it really so convenient, is it so necessary? I think for some people it meets a need, but for a lot of people, it is just succumbing to marketing, which is the case for a lot of high-tech products. From the trivial to the important, people succumb to marketing and part of the marketing pressure is the idea that it’s new and remarkable. It didn’t exist 20 years ago, and therefore we should have it.


Alexander Görlach: We can refrain from certain products because we decide that we don’t need them, but we can never refrain from at least trying to be ethical beings and agents of morality. How do we secure this in a world that becomes increasingly multifaceted? How do we navigate this sea?

Robin K. Hill: I can’t give an answer to that for everyone, and I certainly cannot give an answer that would be an implementation of public policy. I am an environmentalist and I would rather see people give up their clothes dryers than instigate a law against clothes dryers, for example. I would rather see this happen voluntarily. It’s good for the character to understand choices and take responsibility for choices.

That is what I would like to see, but there is no way to make that happen and force it, to impose it on other people. The burden, however, should be on the decision-maker, be it the startup company designing the technology, the investor funding the technology, the government agency contracting for technology, or the consumer purchasing the technology; the burden should be serious consideration of the question, “Is this a good idea?”

Robin K. Hill

Dr. Hill is a lecturer in Computer Science and affiliate faculty member in both the Department of Philosophy and the Wyoming Institute for Humanities Research, all at the University of Wyoming. Her research interest is the philosophy of computer science; in particular, applying the methods of philosophy to the diverse phenomena of computing. She writes a blog on those subjects for the online Communications of the ACM. Her teaching experience includes over 30 years of logic, computer science, and information systems courses for the University of Wyoming, University of Maryland University College (European Division), State University of New York at Binghamton, Metropolitan State College, and others. She holds a Ph.D. in Computer Science, State University of New York at Buffalo; M.S. in Management Information Systems, University of Arizona; M.A. in Mathematical Logic, University of East Anglia (Great Britain); and a B.A. in Philosophy, University of Wyoming.
