Technology, AI and ethics.

“Neuroscientists still cannot agree on exactly what the brain is for.”

Interview with Moheb Costandi

Alexander Görlach: You are fascinated with what the human brain does. What sparked your interest in this subject to begin with?

Moheb Costandi: I studied psychology as a teenager in the early 1990s, and my teacher was a huge inspiration to me. He taught us about Broca’s studies of stroke patients, Sperry and Gazzaniga’s work with ‘split brain’ patients, and better-known cases such as Phineas Gage and Henry Molaison, and I became utterly fascinated.

At around the same time, I experimented with LSD, and this, too, made me curious about how the brain works – I wondered how tiny amounts of this substance could alter consciousness so profoundly. I came home one day and told my father that I was more interested in the brain than in the ‘mind’, and he suggested that I study neuroscience. Until then, I had never heard that term before. By coincidence, University College London was about to launch its undergraduate neuroscience degree program, organized by the late, great Mitch Glickstein – the first such course in the U.K., I believe, and one of the very first in the world. I applied, and was accepted into the first cohort of students. There were 12 of us on the program; we enrolled in 1995 and graduated in ’98.

Alexander Görlach: Some of the best-known concepts about humans revolve around assumptions about how our brain works: “homo economicus” and “homo ludens”, to name arguably the two most famous. However, it seems that those ideas do not really capture how the human brain operates. Why is it so difficult to figure out the parameters by which we operate?

Moheb Costandi: First and foremost I think it’s because of the sheer complexity of the brain. I think it was Carl Sagan who first described the brain as “the most complex object in the known universe.” This has become something of a cliché, but it’s true – the human brain contains an estimated 86 billion neurons (nerve cells), many more non-neuronal cells called glia, and perhaps a quadrillion connections between them. What’s more, unlike other disciplines such as physics and chemistry, neuroscience still lacks what we might call a ‘grand unifying theory’ of brain function. That is, neuroscientists still cannot agree on exactly what the brain is for, let alone how it works. We still don’t fully understand how individual brain cells function, let alone how distributed networks of cells work together to produce movement, speech, or consciousness. Concepts we think of as fundamental principles are regularly overturned by new research – even at the cellular level.

Personally, I don’t think we’ll ever fully understand how the brain works – there will always be some mystery surrounding it, but that’s one of the reasons it’s so much fun trying to find out.

We still don’t fully understand how individual brain cells function, let alone how distributed networks of cells work together to produce movement, speech, or consciousness.

‘Predictive coding’ is probably the closest thing we have to a grand unified theory of brain function. Very basically, the brain is constantly making predictions about reality, and about the outcomes of our actions, based on the very limited information it receives through the sense organs, and then compares its predictions to actual outcomes. By reducing the discrepancies between predictions and outcomes, its predictions – and its interpretation of the world – become increasingly accurate.
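The error-reduction loop Costandi describes can be caricatured in a few lines of code. This is purely an illustrative toy, not a model from the interview: the update rule (nudge the prediction toward the observation by a fraction of the prediction error) and the learning rate are assumptions made for the sketch.

```python
def update_prediction(prediction, observation, learning_rate=0.1):
    """Reduce the discrepancy between prediction and outcome:
    move the prediction a small step toward the observation."""
    error = observation - prediction  # the 'prediction error'
    return prediction + learning_rate * error

# The environment's true value is 10.0; the system starts with a poor guess.
prediction = 0.0
for _ in range(100):
    prediction = update_prediction(prediction, observation=10.0)

print(round(prediction, 2))  # → 10.0 (the guess has converged on reality)
```

Repeated exposure shrinks the error at each step, which is the sense in which the interpretation "becomes increasingly accurate" over time.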

Visual and optical illusions demonstrate this nicely – they reveal that the brain interprets ambiguous information in one way, or another, by ‘filling in the gaps’ based on context or past experiences. Related to this is the new idea that the main function of memory is not to recall the past, but rather to predict or simulate the future. We’ve known for about a hundred years that memory is reconstructive, not reproductive, in nature.

That is, the brain does not store memories like a video recorder to ‘replay’ them upon recall, but instead pieces together sensory fragments so that recall is not 100% accurate. We can, therefore, stitch together fragments of past events to simulate those that have not yet taken place, in order to determine the best course of action in any given situation. The idea that memory serves to predict the future is supported by research showing that amnesic patients have great difficulty imagining new experiences.

Alexander Görlach: If we don’t really know how the brain works, how can some claim that technology is able to elevate our behaviour?

Moheb Costandi: We don’t have to know everything about how the brain works in order to manipulate its functions. We’re still very far from a complete understanding of brain function, but we’ve learned a great deal over the past one hundred years or so, and what we’ve learned allows for a wide variety of useful interventions.

For example, deep brain stimulation has been used to treat more than 100,000 patients with Parkinson’s Disease. This technique involves implanting thin wire electrodes into subcortical brain structures, and alleviates symptoms such as tremor and muscle rigidity, even though we still don’t know exactly how it exerts its effects.

On the other hand, there are quite a few companies offering cheap DIY brain stimulation kits, which they claim can enhance intelligence, creativity, and so on, but there’s no evidence to back up these claims, and there are a number of risks involved, so I’d advise against this. Similarly, I’m very sceptical of the claim by transhumanists that technology will enhance intelligence, eliminate ageing, and so on.

Alexander Görlach: Where do you see neuroscience evolving? What are the most crucial next steps towards a better understanding of how the brain works?

Moheb Costandi: We’ve entered the age of ‘big data’, and this century has seen the emergence of sophisticated techniques for examining brain structure and function in unprecedented detail. There’s now a wealth of automated, high-throughput methods for the collection and analysis of data of all types – for example, neuroanatomical studies of cellular morphology, multi-electrode array electrophysiological studies of circuit function, and whole-brain imaging at cellular resolution in small animals.

We’ve entered the age of ‘big data’, and this century has seen the emergence of sophisticated techniques for examining brain structure and function in unprecedented detail.

These and other methods are now generating vast amounts of data in labs around the world. The data are being collected far more quickly than they can be analysed, and there’s still very little ‘cross-talk’ between researchers in disparate neuroscientific sub-fields. Therefore, it’ll be crucial to develop new analytical methods to keep abreast of this deluge of new information, and to begin to integrate data from all levels of organization – from the ‘low’ level of genes and cells, through circuit function, to the highest levels of whole-brain function and mental processes.

Alexander Görlach: What is intelligence in your opinion and can it ever be mimicked by technology?

Moheb Costandi: Intelligence is a hugely controversial subject, and there’s no consensus on how to define it, let alone measure it. Measures of intelligence tend to be very arbitrary, and there’s much debate on how useful our measures actually are. In neuroscientific terms, though, we might think of it as a measure of the efficiency with which the brain processes and assimilates new information, although I’m sure many would disagree with that definition.

Artificial intelligence theorists have always told us that we are on the brink of developing thinking machines, but so far all such endeavours have been unsuccessful. One might argue that current machine learning methods are approaching something like what we call ‘thinking’. These are algorithms developed to perform specific tasks without having explicit instructions programmed into them, by inference and pattern recognition. Are they ‘thinking’? Perhaps, but not in the same way the human brain ‘thinks’. I don’t think it’s useful to use the computer as an analogy for how the brain works, but for the sake of argument, human intelligence is the product of a machine that built itself across millions of years of evolution, and reconfigures its own architecture, based on experience, throughout its lifespan. I think it’s unlikely something like that could be mimicked by technology.
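The distinction Costandi draws – learning a task from examples rather than from programmed rules – can be made concrete with the simplest possible learner. The toy below is an assumption-laden illustration (a classic single-layer perceptron, not anything discussed in the interview): nowhere is the rule "output 1 only when both inputs are 1" written into the code; the weights are inferred from labelled examples alone.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (inputs, target) pairs by correcting errors."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output  # 0 when the guess was right
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labelled examples of logical AND; the rule itself is never stated.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

Whether this counts as ‘thinking’ is, of course, exactly the question at issue – the mechanism is pattern recognition, not understanding.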

Moheb Costandi

Moheb Costandi trained as a molecular and developmental neurobiologist and now works as a freelance writer based in London, UK. His work has appeared in Nature, Science, Scientific American, and New Scientist, among other publications, and he is the author of Neuroplasticity (MIT Press, 2016) and 50 Human Brain Ideas You Really Need to Know (Quercus, 2013).
