Technology, AI and ethics.

“Our human brains can no longer really process all of the data that we have about this world.”


Interview with Andrea Martin, Leader of IBM Watson IoT Center in Munich

Alexander Görlach: When it comes to AI and trust, what are the key components?

Andrea Martin: I want to talk about AI and trust because I think that without trust, we will not see acceptance of AI in businesses and society. Trust is therefore at least one of the prerequisites for acceptance. Figuring out how you can trust AI and the recommendations that come out of it is a business requirement, a societal requirement, and sometimes even a legal requirement. From a company perspective, if there is no acceptance of AI, there is no making money from AI, and AI is a core part of our portfolio.

Alexander Görlach: In the sphere outside of technology, we seem to be having a major trust crisis. On social media and when it comes to societal frameworks, people keep saying that we do not trust each other the way we used to.

Andrea Martin: That is an interesting observation because I sit as an expert in the Commission for AI of the German Parliament. We had one session about trust and transparency; a speaker talked about a couple of studies they had done in their institute. To a certain extent, an increase in transparency increases trust, but there is a threshold. If you add even more transparency, trust goes down again, which I found quite interesting.

Based on these results, I could imagine that there is so much visibility into what every one of us does on social media and other channels that it amounts to too much transparency, and that our trust in each other has declined as a result.

Alexander Görlach: When I talk about skepticism and the lack of trust in these technologies, it involves the topic of accountability. Is that something you look into on the government's ethics commission or at IBM?

Andrea Martin: Yes, in both. When we talk about trust and transparency, there are a few areas that we look into. For IBM, we have identified four elements.

One element is explainability, which applies to the algorithms but also to the results. There are different dimensions and levels. For instance, do you want to explain the whole model, or do you just want to explain a single recommendation that comes out of the model? (A code sketch following the fourth element below makes this distinction concrete.)

Another element is fairness, or anti-bias. It involves validating the data that you feed into AI systems or solutions.

The third element is accountability and the assurance of knowing where the data comes from and who provided it.

The fourth element is robustness. How robust are the solutions in terms of security, so that no one can manipulate the data, the recommendations, or the results? Can anyone access the results and misuse or steal them? These are the areas we look at.
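To make the explainability distinction above concrete: the interview names no specific tool, but the open-source shap library is one common way to produce both kinds of explanation. The following is a minimal sketch under that assumption; the scikit-learn model and bundled dataset are illustrative stand-ins, not anything IBM-specific.

```python
# A minimal sketch of local vs. global explainability, assuming the
# open-source shap package and scikit-learn; model and data are illustrative.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Local view: why did the model score this one case the way it did?
print(dict(zip(X.columns, shap_values[0])))

# Global view: which features drive the model overall (mean |contribution|)?
print(dict(zip(X.columns, abs(shap_values).mean(axis=0))))
```

The local view answers "why this one recommendation?", the global view "what drives the model overall?", which are the two levels distinguished here.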

We discuss transparency and trust in a similar way in the commission, and at the EU level, there is a high-level expert group that looks into the topic of trustworthy AI. In April 2019, they published a document with excellent guidelines. It is currently in a pilot phase, collecting feedback; they will rework it and publish it again sometime next spring. The areas and requirements they identify are pretty much in line with the ones I mentioned, though they also call out security as a separate element and hold that humans should retain the final point of control. Overall, it is very similar to our company-endorsed guidelines.

To a certain extent, an increase in transparency increases trust, but there is a threshold. If you add even more transparency, trust goes down again, which I found quite interesting.

Alexander Görlach: We talk about data and its abundance, and about the notion of digital twins. This also raises an infrastructure question: how much data can we actually store? There is a lot of discussion about finding the cheapest place to store it.

Andrea Martin: Regarding digital twins and replicating the real world digitally: we use them so that you can run simulations, find the next best action plan, do predictive analytics, and so on. We have to do this because I think our human brains can no longer really process all of the data that we have about this world. So we need something like augmented intelligence (I would rather read AI as augmented intelligence than artificial intelligence) to help our brains process everything.

We now need AI because we created all of this data and we have to use AI in order to cope with it. We need technology to help us digest and make sense of it. Which is the chicken and which is the egg in this problem?

Alexander Görlach: Is there a symbiosis of man and machine, or man and data?

Andrea Martin: I would not go so far as to say it is a symbiosis. However, it is definitely man and machine, not man versus machine. My view is that AI solutions will help us do our jobs better, make better decisions, arrive at new insights, and interact in an optimized way with our customers and employees. It really is an augmentation of my own human intelligence by something that mimics my intelligence and cognitive capabilities but can digest much more data than my brain.

Alexander Görlach: When we look at the data aspect, let’s say regarding climate change, we have all the data but we don’t actually act upon it. Does this just show an incompatibility of technology and human behavior?

Andrea Martin: I agree, and I would argue that it was always the case; now we just have even more data and more insights that should push us even further in the right direction, and we still don't act. I think it was always the case because a human being is not just a rational being, but also an emotional being and a creature of habit.

Rational thinking is just one aspect of being human, and therefore I can understand, even though it may not be logical, that we know what we should not do but still do it. We do it because we like it, or it tastes good, or it is convenient, whatever it may be.

Alexander Görlach: Is the data that we receive from augmented intelligence only cognitive support?

Andrea Martin: Yes, I would say so. The question is what other categories you would add. But yes, it is cognitive support. It helps us prepare decisions and gain new insights.

Alexander Görlach: What about the dangers of a spillover? We portray data as something sacred and objective, but we know there is a huge spillover of our human fallibility, causing biases.

Andrea Martin: I think you are right. On the one hand, there is a question of values. The second question is figuring out which data is correct and which is not, because you could train an algorithm or an AI solution on completely wrong data that does not tell the truth. Then, of course, none of the recommendations created by that system will help us in any way.

So I think the big questions are figuring out which data is right and which is fake, deciding if data is biased, and who decides what bias is.

I would not go so far as to say it is a symbiosis. However, it is definitely man and machine, not man versus machine.

Alexander Görlach: Who decides the bias? In the democratic world, is it a country’s Constitution?

Andrea Martin: Yes, but no bias is easy to figure out. Gender equality may be a valid and constitutional requirement in the democratic world, but it may not be in every single country in the world, so that is something we have to figure out, especially as a company that operates globally.

When do you apply which metrics to figure out whether something is biased? There are open-source repositories, which we publish together with the open-source community, that include dozens of metrics for measuring bias, but then you have to figure out which of those metrics to apply, and in which context. Would a particular country be happy if you applied a metric that checks whether the data is gender neutral, for example?
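The interview does not name the repository, but IBM's open-source AI Fairness 360 toolkit (aif360) is one such community-published collection of bias metrics. Here is a minimal sketch of two widely used metrics; the toy loan data and the choice of "sex" as the protected attribute are invented for illustration.

```python
# A minimal sketch using the open-source aif360 toolkit; the toy data and
# protected attribute are illustrative assumptions, not from the interview.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: label 1 = favorable outcome (loan approved); sex 1 = privileged.
df = pd.DataFrame({
    "sex":    [1, 1, 1, 1, 0, 0, 0, 0],
    "income": [50, 60, 55, 40, 45, 52, 38, 41],
    "label":  [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: difference of favorable rates (0 means parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
```

In this toy data the privileged group's favorable rate is 3/4 against 1/4 for the unprivileged group, so disparate impact comes out near 0.33 and statistical parity difference at -0.5; which thresholds count as "biased" is exactly the context question raised above.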

I think it’s important to make data more visible and transparent. The question is, how do you make people aware of this? We have one solution out in the market that helps you detect bias in data while the system runs.

For example, in an insurance case, the solution may tell you that a claim looks like fraud and that it used zip code, driver age, and police involvement as criteria. It combined all of this data, and the case looks like fraud. The person sitting in front of the screen may judge that those were not the right criteria, because they may lead the system in a certain direction.

The other thing the system can do is say that the case looks like fraud, but also reveal that its data basis contains only three cases on which it built this recommendation. Three cases may not be enough, because the sample is too small.
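A guard like the one described here, which surfaces how thin the evidence behind a recommendation is, can be sketched in a few lines. Everything in the sketch (the data structure, field names, and the threshold of 30) is invented for illustration and is not IBM's product.

```python
# Hypothetical sketch: flag recommendations built on too few supporting cases.
# The dataclass, field names, and MIN_SUPPORT threshold are invented.
from dataclasses import dataclass

@dataclass
class Recommendation:
    label: str             # e.g. "fraud"
    criteria: list[str]    # e.g. ["zip code", "driver age", "police involvement"]
    supporting_cases: int  # how many historical cases back this recommendation

MIN_SUPPORT = 30  # arbitrary threshold; three cases falls well below it

def present(rec: Recommendation) -> str:
    basis = (f"based on {rec.supporting_cases} similar case(s), "
             f"criteria: {', '.join(rec.criteria)}")
    if rec.supporting_cases < MIN_SUPPORT:
        return (f"LOW CONFIDENCE: looks like {rec.label} ({basis}); "
                f"sample too small, please review manually.")
    return f"Looks like {rec.label} ({basis})."

print(present(Recommendation(
    "fraud", ["zip code", "driver age", "police involvement"], 3)))
```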

Yet the interesting thing is that this question does not only come up with AI. Look at what the US does with credit history: it has always relied on data. We should have asked these questions years ago, because it was basically the same thing.

Alexander Görlach: It's an academic instinct: to understand something better, you have to see where it starts to differentiate. When I talk to people involved in the ethics of algorithms, they say that ultimately it comes down to the computer scientists.

Andrea Martin: It's a responsibility. The person in front of the screen must know how to use the solution and must keep the skills and capabilities to make decisions on their own, without the help of the system. So we need to teach people who get a lot of support from systems how to still make their own decisions in their area of expertise, without relying too much on those systems.

The other thing is that I think it's necessary to look at the developers and the people who train the system. What does it mean to create ethical, trustworthy, and transparent AI solutions? What are the criteria? What are the requirements you should look out for? How do I know that my system is explainable, fair, or robust?

Those who train the system and provide the data have a responsibility because the data used to train the system will influence the end recommendations.

I think it’s important to make data more visible and transparent. The question is, how do you make people aware of this?

Alexander Görlach: Would you emphasize teaching critical thinking? It's not just coding; you also need knowledge and context about the world. How do we achieve that?

Andrea Martin: When you talk about education and teaching, you need skills in how to apply AI, and you need critical thinking skills. It's very broad and has different dimensions. I heard a quote from someone who organized an AI event; in his opening remarks, he said that he is not afraid of machines that think more and more, but of humans who think less and less.

Alexander Görlach: Where are we realistically headed?

Andrea Martin: I think that if we manage, and it really is an if rather than a when, to agree on what we define as trustworthy AI, then we will see immense development and implementation of AI solutions in every single industry. That means it will have an impact on our personal lives, on the apps we use, and in the enterprise context.

What I see at the moment that may be the downside is that there are so many different efforts underway to think about what kinds of standards and norms we need. Do we need someone to examine all the algorithms and look at the data? What about data privacy? There are so many different efforts looking at regulation, guidance, and the direction of trustworthy AI that it may become quite confusing if we don't put our heads together and find what we can agree upon. Things need to evolve, but we are at a good starting point.

Andrea Martin

Andrea Martin is Leader of the IBM Watson IoT Center in Munich and responsible for its scope and market relevance. Before taking over this role in July 2019, she was Chief Technology Officer (CTO) for IBM Germany, Austria and Switzerland and President, IBM Academy of Technology.

In her role, Andrea Martin uses her experience and global network from more than 25 years in the international services business, which also provides valuable input for her activities as an expert in the Commission for AI of the German Parliament (initiated in September 2018).

Andrea Martin started her career at IBM in 1992 after earning her Master's degree (diploma) in Applied Mathematics from the University of Karlsruhe, Germany.
