An interview with Dr. Pieter Buteneers, Chatlayer CTO
Alexander Görlach: Your expertise is in machine learning; what can algorithms do? What can’t they do?
Pieter Buteneers: Given enough examples, these algorithms can do whatever you want them to do. This means that ML algorithms are in theory capable of doing everything humans do, even the most complicated tasks. The catch is that they need a lot, and I really mean a lot, of examples. Where a human may need to look at something only once to learn it, these algorithms need hundreds of examples in the best case, and often tens of thousands.
Everything that is repetitive enough will be automated by these algorithms in the near future, which means that repetitive tasks, and even entire jobs built on them, will be replaced in the not-so-distant future.
Luckily, there are quite a few tasks that these algorithms have trouble ‘understanding’, specifically those tasks that require context. We humans are genetically preprogrammed to be good at quite a lot of tasks, empathy for example. We are really good at feeling what other people feel, even if we only hear the other person on the phone. Since ML algorithms have to learn everything from examples, it will take a lot of trial and error before they can respond in a way that makes the other person feel correctly understood.
Another example of something humans are really good at is creating new things from a combination of only slightly related things. Obviously we also make mistakes when we invent new things, but we are able to steer our creativity in such a way that we need only a couple of tries at most to get it right. For ML algorithms to do this, they would have to try on the order of a million times to come even close.
Alexander Görlach: The example of the world turning into one giant coffee plantation due to an algorithm going wild has become the go-to scenario for those who want to illustrate the catastrophic impact AI will have. Is it an exaggeration, or will AI kill us all if we don’t act now?
Pieter Buteneers: Luckily we don’t have to act now. Yet. AI is not at the stage where it can improve itself in a way that is beyond human control. We are still at least a decade away from the point in time we call the singularity. But once we reach the point where AI can build AI faster than a human can, we are doomed.
So, in a way we have to prepare now for when that time comes so that we can make sure future AI behaves morally.
Alexander Görlach: To what extent can algorithms be moral, or let’s say, be ethically considerate?
Pieter Buteneers: The short answer here is, they can’t. Algorithms have no morals. They only optimize a specific cost function. So, if you ask an algorithm to make cheap coffee cups, it will make coffee cups out of the cheapest possible material. But it won’t care if that material is desert sand, cute puppies, or even human flesh.
It has no sense of right or wrong; it will only optimize its cost function. To make sure that algorithms behave in a moral way, you have to encode these moral rules into the cost function itself.
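To make this concrete, here is a minimal sketch in Python of the point being made. The materials, prices, and penalty value below are hypothetical illustrations, not real data; the sketch only shows that an optimizer minimizes whatever cost function it is given, and that a moral rule has no effect unless it is written into that function.

```python
# An optimizer only minimizes the cost function it is given.
# Hypothetical candidate materials with a unit cost and an ethics flag.
materials = {
    "recycled plastic": {"cost": 0.05, "ethical": True},
    "desert sand":      {"cost": 0.02, "ethical": True},
    "cute puppies":     {"cost": 0.01, "ethical": False},
}

def naive_cost(name):
    """Cost function that only optimizes price -- no sense of right or wrong."""
    return materials[name]["cost"]

def moral_cost(name):
    """Same cost function, but with the moral rule encoded as a prohibitive penalty."""
    penalty = 0.0 if materials[name]["ethical"] else float("inf")
    return materials[name]["cost"] + penalty

print(min(materials, key=naive_cost))  # -> 'cute puppies': cheapest, morality ignored
print(min(materials, key=moral_cost))  # -> 'desert sand': cheapest *ethical* option
```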
Alexander Görlach: If we talk about what ethics AI should apply, we very quickly end up discussing the different ethical judgments that different cultural heritages produce. What do you make of this argument?
Pieter Buteneers: Much to my regret, I have to say that I agree with this statement. Different cultures make, in very specific cases, completely different moral judgments simply because of their culture. If you don’t follow these cultural rules, you will for sure get an uprising against ‘the machines’.
But as an idealist and devoted atheist, I ‘believe’ that there is a more or less uniform morality. Whether I’m right about this or not, I think that with enough education, free thinking and open discussions we can come to a more uniform moral code for humans.
The reason I believe this is that if you look at the moral judgments of humans from very different cultures, most of their ethical choices are the same, except on very specific topics, like how certain nations look at gay people or women who have had an abortion. There you see that people make different judgments. But as these societies become wealthier and better educated, their moral judgments become more similar. Whether this means we will end up with one shared moral code once we all become wealthy, enlightened beings, I don’t know, but I am very much convinced we will come close.
Alexander Görlach: When we talk about ethical implications and the latest technological developments, we discuss the impact AI has on democracy a lot. How do you see this topic?
Pieter Buteneers: This topic really frightens me. Many people underestimate what you can do with ML. Even in databases that have been thoroughly anonymized, you can still figure out which people are in it if you cross-reference it with the right data. In the name of security, especially to avoid terrorism, countries are gathering more and more data about their citizens.
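As an illustration of the cross-referencing described here, the following toy sketch re-identifies ‘anonymized’ records by joining them against a second dataset on shared fields, a so-called linkage attack. All names and records below are invented; real attacks work the same way, just at scale.

```python
# The "anonymized" table has names removed, yet the remaining quasi-identifiers
# (zip code, birth date, sex) still single people out once joined against any
# public register containing the same fields. All data here is made up.

anonymized_health_records = [
    {"zip": "1000", "birth": "1985-01-12", "sex": "F", "diagnosis": "diabetes"},
    {"zip": "2000", "birth": "1990-07-30", "sex": "M", "diagnosis": "asthma"},
]

public_register = [  # e.g. a leaked voter roll or a social-media profile dump
    {"name": "An Peeters",   "zip": "1000", "birth": "1985-01-12", "sex": "F"},
    {"name": "Jan Janssens", "zip": "2000", "birth": "1990-07-30", "sex": "M"},
]

quasi_identifiers = ("zip", "birth", "sex")

for record in anonymized_health_records:
    key = tuple(record[q] for q in quasi_identifiers)
    for person in public_register:
        if tuple(person[q] for q in quasi_identifiers) == key:
            print(f"{person['name']} -> {record['diagnosis']}")  # re-identified
```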
Think about all the cameras being installed in more and more cities worldwide. People assume they will only be used to catch criminals and terrorists, but we seem to forget that it is the government that sets the rules. Even if you live in a European democracy, it can happen that a single party wins an absolute majority. The only thing you then need is a strong leader who uses this data to manipulate public opinion and eliminate his competition. We have seen this happen in Turkey, but because it has a Muslim majority, people often forget that Turkey has had a strong democracy since the 1920s, much longer than many EU member states. If it is happening in a country with such a democratic history, it can also happen in your country.
You can already see this happening if you look at how Cambridge Analytica succeeded in swaying elections in favor of Donald Trump and Brexit. What is stopping these algorithms from swaying public opinion in a way that only strengthens the political power of ruling leaders and weakens their opposition? If you think the actions Facebook took will save us from this kind of tyranny, I can already tell you that you are too naive. You are heavily underestimating the power of ML and AI.
Alexander Görlach: There are plenty of examples out there of new technologies that would infringe upon the liberties we cherish in democratic, free societies. Facial recognition may turn into a nightmare, given the possibilities for gathering information on a single, specific voter and manipulating them into voting for a specific candidate or party. Could you elaborate on that?
Pieter Buteneers: Any data gathered about a person poses a threat, because it can be cross-referenced with, for example, your voting pattern or the type of propaganda you are susceptible to.
It is very difficult to know which information can be used and which data is worthless. Sometimes seemingly unrelated things turn out to be highly correlated. A recent study showed, for example, that people born in January have a higher probability of developing diabetes, yet no researcher has been able to come up with an even somewhat acceptable explanation as to why. The same can happen with your personal data. Algorithms can find the weirdest correlations and use them to influence your voting pattern. Maybe the color of the socks you wear, in combination with your haircut, is highly predictive of who you vote for or what kind of arguments you are sensitive to.
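Such chance correlations are easy to demonstrate. The sketch below, using entirely made-up random data, tests a thousand unrelated attributes against a voting outcome; some correlate simply because so many were tried, even though every attribute is independent of the vote by construction.

```python
# Test enough unrelated attributes against any outcome and some will
# correlate by pure chance. All data here is randomly generated, so
# every correlation found is spurious by construction.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_attributes = 200, 1000

votes = rng.integers(0, 2, size=n_people)                       # who each person votes for
attributes = rng.integers(0, 2, size=(n_people, n_attributes))  # sock color, haircut, ...

# Correlation of each random attribute with the vote.
correlations = [abs(np.corrcoef(attributes[:, j], votes)[0, 1])
                for j in range(n_attributes)]

best = int(np.argmax(correlations))
print(f"attribute #{best} correlates with voting at r = {correlations[best]:.2f}")
# With 1000 attempts, the strongest chance correlation is typically r > 0.2,
# even though every attribute was generated independently of the vote.
```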
Alexander Görlach: I suppose there is a bleak and a shiny outlook, depending on how you see and evaluate new technologies. In your opinion, what is the worst impact they could have on democracy, and what is the most favorable outcome we could aspire to?
Pieter Buteneers: For the worst possible outlook, I think 1984 by George Orwell or the East German Stasi is a good source of inspiration. But with this technology you will barely need any government officials to stay in control. You will get a true dictatorship where only one person, or maybe just a few, has all the power.
Maybe I have talked too much about the doom and gloom of it all. There are a lot of good things that can be achieved with ML and AI. These algorithms have the potential to predict long beforehand when you will get ill and why, allowing you to take preventive measures before the first symptom arises. That will not only make you healthier, it will cut the cost of healthcare by more than a factor of ten.
You can take this preventive power even beyond healthcare. These algorithms have the power to figure out not only who is going to become a criminal or terrorist but, more importantly, why. If you know the reasons why, you can take action before it is too late. You can make sure that we live in a society without murder, rape, child abuse and so on, all without incarcerating anybody. It all sounds like a utopia, but there is no scientific evidence that it is impossible.