Technology, AI and ethics.

Prometheus and the last fire



by Matteo Hu

How can humankind, with its societies and ways of living, keep up with ever-changing disruptive technologies?

Answering this question properly would require a dynamic framework spanning several standpoints: economic, legal, scientific, social, technological, philosophical and political. This article offers only brief key takeaways from a philosophical perspective.

What is the essence of technology?

The term technology derives from the Greek techne, which can be translated as art, skill or craft. This definition carries an implicit meaning of “something that men use for a purpose”.

We could say with confidence that technology is used for a vast variety of goals: for reading at night (e.g. oil-lamps, candles, electricity), for transporting goods (e.g. cargos, cars, ships), for curing diseases (e.g. medicines, drugs), for storing and analysing information (e.g. computers).

It is natural to conclude that technology is something created or used by men to satisfy their needs. With this new definition, one can intuitively see that other items are technologies too. An example that comes to mind is fire. Fire is indeed among the first technologies, used for millennia as a means to illuminate the night, cook food and warm the living surroundings; it has arguably been the most important.

Historians define the moments when humans started to build tombs for the dead and to hand down knowledge and traditions as the birth of our civilisation. If we look back more attentively, the use of fire played a key role not only by meeting fundamental human needs, but also by allowing our ancestors to gather at night and start communicating about things not strictly related to survival. It enabled the birth of language, communication and thus society. It is no coincidence that the ancient Greeks told the myth of Prometheus stealing fire from the gods to give it to mortals.

Other current examples of things we incorrectly fail to consider “technologies”, because we take them for granted or simply do not notice them, include: glasses used to read or to screen from sunlight, watches to keep track of time, shoes to protect the feet, and houses to protect us from the cold or the heat.

The essence of technology lies in its function, in its use for satisfying humans’ needs.

Disruptive innovative technologies won’t ever come before human needs.

Technologies arise and are developed in response to human needs. Consequently, those needs have to be assessed and regulated. The assessment and regulation of needs, and more generally of human behaviour, fall under moral philosophy.

Morality is defined as the system of values and principles concerning the distinction between right and wrong, or good and bad, behaviour. It points to the directions individuals should orient themselves towards in order to live a moral life.

Its most relevant characteristic is that it changes through time according to context. For example, two thousand years ago slavery was socially accepted and normal; two hundred years ago most nations did not allow women to vote. These standpoints were shaped by a multitude of heterogeneous factors that are difficult to comprehend without taking the time to collect, study and compare the available historical record.

Morality is dynamic by nature: how can we use an ever-changing moral system to address the heterogeneous needs from which ever-changing technologies arise?

Since we cannot predict or control future disruptive technologies, and since they arise from human needs and behaviours, we can address them by paying attention to morality.

The answer I suggest is to adopt a double-loop learning system. The first loop applies the rules, while the second loop enables their modification after revision. In other words, double-loop learning is a way of constantly revisiting assumptions taken for granted in order to find inconsistencies and update them.
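A double-loop system of this kind can be illustrated with a short sketch. This is a minimal illustration in Python; the function names and the toy "risk threshold" scenario are my own assumptions, not part of any established framework:

```python
# Minimal sketch of double-loop learning: the inner (first) loop applies
# the current rule; the outer (second) loop revises the rule itself
# whenever an outcome is incoherent with the assumptions behind it.

def single_loop(rule, observation):
    """First loop: act within the existing rule."""
    return rule(observation)

def double_loop(rule, observations, is_coherent, revise):
    """Second loop: when a result contradicts our assumptions,
    revise the rule itself rather than just the action."""
    for obs in observations:
        action = single_loop(rule, obs)
        if not is_coherent(obs, action):
            # Update the assumption, not only the behaviour.
            rule = revise(rule, obs)
    return rule

# Toy example: a threshold rule that flags "risky" technologies.
rule = lambda x: x > 5  # assumption: only risk above 5 is unacceptable

def is_coherent(obs, flagged):
    # Suppose experience shows anything above 3 actually caused harm.
    return flagged == (obs > 3)

def revise(old_rule, obs):
    # Lower the threshold in light of the incoherent case.
    return lambda x: x > 3

final_rule = double_loop(rule, [2, 4, 6], is_coherent, revise)
print(final_rule(4))  # True: after revision, 4 is now flagged
```

The key design point is that `revise` changes the rule itself (the second loop), whereas a single-loop system would only ever keep applying the original threshold.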

Human-Machine Hybridization:

Many could argue that such a system would be limited by humans' capabilities and ultimately not prove useful enough. Obviously, if there were no human-computer interaction and everything were done by machines, there would be no need for humans.

I disagree with such criticism, as I strongly believe in hybridization between humans and machines, both as a goal to reach and as a more effective outcome than leaving everything to machines.

Without going into too much detail about Artificial Intelligence (AI), since many experts and journalists have given great attention to its features and its spread in recent times, I will focus on the human features that AI will never be able to surpass, and that we should use and develop for our own sake: associative intelligence and creativity.

Humans’ Strengths:

Humans are limited in their rationality: limited processing power, a short span over which they can process information, and a small amount of information that can be collected, analysed and recalled. Moreover, humans exhibit various forms of bias (e.g. present-bias preferences, conformity bias, the halo effect, the horns effect, similarity bias, attribution bias, confirmation bias).

  1. Associative Intelligence
  • Humans find correlations between different subfields of knowledge in a different way than machines do. Machines collect much larger amounts of data and, in processing it, search for statistical correlations and patterns. Ideally, as the amount of data and the processing power increase, the results become more reliable.

On the other hand, humans are trained from birth to act on limited amounts of data, often filtered by our perception and limited physical capabilities. Humans' ability to find points of contact between fields of knowledge is far more developed and efficient than machines'. To cite a couple of examples, recent research has focused on the intersections of neuroscience with economics and of physics with economics. These trends have given birth to new fields of knowledge: neuroeconomics and econophysics.

  • Neuroeconomics: the study of how economic behaviour can shape our understanding of the brain, and how neuroscientific discoveries can constrain and guide models of economics (definition from Wikipedia).
  • Econophysics: the application of theories and methods originally developed by physicists to solve problems in economics, especially in the study of financial markets (definition from Wikipedia).

Machines won’t ever be able to know these insights since in the end they just see quantities and not really qualities.

  2. Creativity

There is a subtle difference between creativity and intuition that needs to be clarified.

  • Creativity: a continuous flux inside us that cannot be explained quantitatively or objectively.

This matches the ancient definition of the Greek philosopher Plato, who said that intuition is immediate knowledge deriving from the world of ideas (i.e. absolute concepts) and that it simply enters us.

  • Intuition: an immediate understanding or realisation of something; knowledge that comes from having unconsciously collected and processed information.

How can we say that something has derived from intuition or creativity? Traces.

By backward induction you should be able to find traces, and finding traces means that something derived from intuition rather than creativity. Creativity cannot be derived; it is simply present. Machines and humans both display intuitive activity, but only humans manifest creativity.

  3. Perception of Time

I would like to raise not another human strength, but a point for further discussion and research. Humans can perceive the passing of time, while machines cannot. How does the perception of the passing of time affect our reality, and what role does it play in our consciousness and subconsciousness? Do we feel the passing of time because we are conscious, or does consciousness arise from the ability to perceive the passing of time?


Since its birth, the main focus of AI has been an exploitative one. Applying double-loop learning, are we sure it is safe to continue investing at these exponential rates in applied, exploitative R&D on AI?

A few philosophers and scientists, such as Nick Bostrom and Bill Hibbard, research AI and human-extinction scenarios. Both are known for warning the public about the threat posed by the rise of a superintelligence, which might be our greatest danger. There is a need for public education about AI, and for public control over it.

In particular, the philosopher Bostrom has said that the biggest mistake we could make now would be to fail to pay enough attention to understanding and confining AI's applications and power. Investment in applied, exploitative R&D keeps increasing everywhere, without consideration of the possibility that it could bring incalculable dangers for our species.

Stephen Hawking (2014): “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. AI could offer incalculable benefits and risks, such as technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and developing weapons we cannot even understand.” Many other scientists and well-known researchers support these claims.

What To Do Next:

  1. Promote more discussion of the dangers, rather than only the benefits, of Artificial Intelligence.
  2. Invest in pure, preventative R&D, instead of only in applied, exploitative R&D. One example is Elon Musk, who co-founded OpenAI, a non-profit organization that aims to promote and research friendly AI for the benefit of all humanity.

Elon Musk (2018): “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger. I do think we need to be very careful about the advancement of AI.”

Fire is often seen as the representation of humans' desire and will to achieve what they want. But we must also remember that it is an instrument in our hands. It is up to us to give the right meaning to our tools by using them according to our moral beliefs. And this is possible, at least, until the fire grows too big and powerful to be controlled. At that point, it will no longer be considered a technology, but a means of destruction.

Matteo Hu

Matteo Hu holds a BSc in Economics from Bocconi University (Italy) and is completing his MSc in Economics and Data Analytics at the same institution. He is currently interning at the Department of Economics at NYU, studying multi-agents' time-inconsistent preferences. He previously worked at Google and at the Industrial and Commercial Bank of China.
