Technology, AI and ethics.

In times of disruptive change, predictions become less reliable

By Thomas Ramge

I should kick off 2020 with a few predictions for the upcoming decade, but the thing about predictions is that they are, strictly speaking, not possible. They are even less possible in times of digital transformation, because digitization has brought with it, in my view, an interesting ambivalence about prediction. On the one hand, big data analytics and machine learning promise to make better predictions, and in strict, narrow contexts that promise largely holds. But funnily enough, digitization has also brought about a future that is much more unpredictable. In times of disruptive change, predictions become less and less reliable, because there are more and more unknowns. On the one hand, digitization gives us more data to mine, so that we can extrapolate the past and present into trends for the future. On the other hand, the very disruption it causes undermines that data mining, because an unpredictable present makes past data a poor guide.

Having said that, I will still make some predictions, or at least educated guesses. Based on the trends we see now, my first educated guess is that within the next two or three years we will see a cooling-off of the AI hype of the last two or three years. Why is that? In more and more contexts, machine learning, which most of you will know as the basic technology behind so-called artificial intelligence, is proving much less effective at making predictions than traditional analytics: decision trees, boosting, classical statistical analysis.

In many contexts, machine learning is plateauing. Yes, it works quite well and adds a lot of predictive power in many settings, such as detecting breast cancer. But there are limits to it. I think that in the next two to three years we are going to start talking about the limits of AI and machine learning.
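To make the comparison concrete, here is a minimal sketch (my illustration, not from the talk), assuming scikit-learn is available. It pits a classical statistical model, logistic regression, against a boosted tree ensemble on the breast-cancer dataset mentioned above; on narrow tabular problems like this, the simpler model is typically competitive.

```python
# Hypothetical illustration: on a small tabular task such as the
# scikit-learn breast-cancer dataset, a classical statistical model
# is often competitive with a more complex boosted ensemble.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "logistic regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

# 5-fold cross-validated accuracy for each model
results = {}
for name, model in models.items():
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {results[name]:.3f}")
```

Both models score in the mid-to-high nineties on this dataset; the point is not that one always wins, but that the gap is small enough that the simpler, more interpretable model is often the better engineering choice.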

On the one hand, digitization gives us more data to mine, so that we can extrapolate the past and present into trends for the future. On the other hand, the very disruption it causes undermines that data mining, because an unpredictable present makes past data a poor guide.

One of my early indicators: I am currently helping two brilliant guys found a startup doing hardcore machine learning. While we were deliberating on the right name for the startup, they immediately decided that they do not want “AI” in the domain name. They said they do not want to be put in the same bracket as “those hype riders”. I have heard this sentiment in the machine learning community several times before.

My second educated guess concerns the geopolitical impact of digital technology: 5G, the technological implications of the Chinese-American trade war, and so forth. We are just seeing the beginning. There will be more and more geopolitical impacts of technology, turning into trade policy.

We already see a glimpse of it in software. Take the ban of some Google applications from Android phones made by Chinese manufacturers. China is using that ban as a spur to speed up building its own app stores and applications and become less and less dependent on American software. The same is happening with hardware, especially chip technology. The so-called tech Cold War may not come true in the strict sense, but we are definitely seeing a separation, a bifurcation of the world into two largely separate tech stacks, one American and one Chinese, with Europe not really knowing where to go. We will see more and more conflicts of this sort.

There will be more and more geopolitical impacts of technology, turning into trade policy.

I think we will see more and more criticism of the way big tech uses technology to make us addicted and to make money in completely unsustainable ways. Political scientists are already discussing how technology helps to destroy democracy, or at least how it radicalizes democratic discourse.

The debate about technology will feed into the broader debate about how we organize societies and, possibly, build a more sustainable global future. In the end, technology should finally deliver on the promises it has made and become a new kind of enlightenment for technologically intense times.

Thomas Ramge

Thomas Ramge is one of the best-known European experts on AI, the data economy and the GDPR. His work connects the dots between data-driven technology, its impact on business and management, and its consequences for society and policy-making. As a keynote speaker, he is praised for his sought-after global perspective on all things digital. Thomas has written more than a dozen books, which have been translated into more than 20 languages. "Reinventing Capitalism in the Age of Big Data" (co-authored with Oxford professor Viktor Mayer-Schönberger) and his newest book "Who is afraid of AI?" have been widely discussed worldwide and featured in, among others, The New York Times Book Review, Harvard Business Review, and Foreign Affairs. As a writer, Thomas has been honored with multiple journalism and book awards, including the Axiom Business Book Award 2019 (gold medal, economics), the getAbstract International Book Award 2018, Best Business Book of the Year 2018 on Technology and Innovation (by strategy+business), the Herbert Quandt Media Prize, the German Business Book Award and the ADC Award. Thomas is the technology correspondent for the business magazine brand eins and writes for The Economist. He also teaches at the AI Business School Zurich and serves as Chief Explaining Officer of the German-American analytics company QuantCo, a Harvard spin-off. Thomas Ramge lives in Berlin with his wife and son.
