How helpful is the analogy of the brain as a computer? Actually, not very helpful!

In March 2020, Randy Gallistel (neuroscientist), standing in front of an audience, cleared his throat and said, "if the brain computed the way people think, the brain would boil in a minute." All that information processing would overheat the brain, because the second law of thermodynamics applies to the brain too.

More than 20 years have passed since many neuroscientists and philosophers rejected the "brain as computer" analogy and the 'Cartesian error' (I recommend reading Damasio's Descartes' Error). The debate over which ideas we need to accelerate our understanding of how the mind/brain works is in full swing.

Progress in cognitive science is crucial because it has significant implications for how we act in the world, think, solve problems, and make decisions.

Cognitive science's main task is to determine how the mind works. The discipline's fundamental assumption is that cognition is information processing, and the field therefore looks at the brain in terms of the flow of information through it.

The central model used to study the mind (although this is rapidly changing) is the 'computer' metaphor. There are many historical reasons for this, tied to how the discipline has evolved over the years. The main reason, however, is the perceived set of similarities between brains and computers:

  • Both receive information from the external environment
  • Both act on the received data in complex ways
  • Both manipulate symbols controlled by rules to construct, organize, interpret and transform information

For understanding and studying the 'functioning' of the brain, the concept of a computer offers an inspiring metaphor: mental entities are like software, while physical mechanisms are like hardware.

Until the 1990s, research took the analogy literally - the function of the brain is to convert stimuli into reactions:

"a machine for converting stimuli into reactions."

(William James).

In broad terms, cognition has been treated as 'computation': perception is the 'input,' action is the 'output,' and everything in between is like the information processing performed by computers - the 'black box.'
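To make the shape of that model concrete, here is a deliberately crude sketch in Python. Everything in it - the stimuli, the rules, the function names - is invented for this article; the point is the shape of the model, not its content.

```python
# A crude sketch of the classical picture: perception as 'input', action as
# 'output', and a rule-governed 'black box' of processing in between.

def perceive(event: str) -> str:
    """'Input': reduce an event in the world to a symbol."""
    return event.strip().lower()

def black_box(symbol: str) -> str:
    """The in-between stage, treated as rule-governed symbol manipulation."""
    rules = {
        "loud noise": "startle",
        "food smell": "approach",
    }
    return rules.get(symbol, "ignore")

def act(command: str) -> str:
    """'Output': turn the processed symbol into a reaction."""
    return f"reaction: {command}"

print(act(black_box(perceive("Loud noise"))))   # reaction: startle
print(act(black_box(perceive("gentle rain"))))  # reaction: ignore
```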

Despite this dominant perspective, there has been constant criticism over the years. Today, the view is rapidly changing, and fewer researchers study the brain strictly through the computational lens.

There are many lines of argument against the computational paradigm, touching on questions of consciousness, emotion, motivation, and meaning.

Briefly considering the example of meaning, the question is: how can binary rules create 'meaning'? Large databases might give the illusion of understanding, but the software does not 'understand' the queries. We can pile on heuristics and other tricks, but no 'understanding' takes place.

John Searle (philosopher) argues that computers cannot grasp the semantics of the symbols they manipulate. Stevan Harnad (Hungarian-born cognitive scientist working in Canada) makes a related point: meaningless symbols cannot acquire meaning from other meaningless symbols of arbitrary shape.
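A toy illustration of that argument (the table and questions below are invented for this article): a large enough lookup table can return the right answer without anything we would call understanding taking place. The program only matches character strings; it has no grasp of what the symbols are about.

```python
# A toy 'question answering' program: with a big enough table it can look
# impressively knowledgeable, yet it only matches strings - no semantics,
# no grounding, no understanding.

answers = {
    "what is the capital of france": "Paris",
    "how many legs does a spider have": "Eight",
}

def answer(question: str) -> str:
    key = question.lower().strip(" ?")
    return answers.get(key, "I don't know.")

print(answer("What is the capital of France?"))  # Paris - looks like understanding
print(answer("Is Paris north of Rome?"))         # I don't know. - nothing to fall back on
```

Scaling the table up, or adding cleverer matching heuristics, changes how often the illusion holds up - not whether understanding is present.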

Computers passively respond to inputs and process data. In contrast, as György Buzsáki (neuroscientist in New York) argues, brains do not passively absorb stimuli and run them through a neural code. Instead, the brain actively searches, testing various options and possibilities. The brain does not represent information: it constructs it.

The notion of the black box implies we are dealing with mystery or magic. The problem is, there is no mystery. The black box is starting to open, thanks to technologies and methods like fMRI and many others. We know more about the brain today than ever before, and it is not a computer: there are no files saved to or deleted from a hard drive in a specific place.

We use metaphors not only to explain but also to develop conceptual foundations. Metaphors enable us to build theories that describe the phenomena we are interested in.

There is a growing call to eliminate the computer metaphor and replace it with a more useful metaphor.

Our instinct is to throw the baby out with the bathwater.

Not so fast. Our dualistic reflex might want to do that, but there might be another, more useful way forward.

The research into 'light' is a story of synthesis. Initially, we grappled with two metaphors - light as a wave and light as a particle. Today we know that neither analogy is the whole truth. We now speak of the "wave-particle duality" of light: sometimes it behaves like a particle, and sometimes it behaves like a wave. Neither analogy captures all the phenomena light displays, yet both are very useful for capturing part of them. So, to better understand what light is, we needed to move beyond the two analogies.

The same is true for the study of cognition. The truth is, there is no single current theory of the brain. What we have is a patchwork of theories. We know that the computer metaphor has not taken us all the way to the right answers. By adding new analogies, and even a synthesis of metaphors, we might correct our over-reliance on the "mind as computer" analogy.

More recently, we have embraced three analogies that offer a richer way to explore the brain:

  • symbolicism
  • connectionism, and
  • dynamicism

The concepts we now use are a combination - some rooted in the computer metaphor, some less so. The point is that by being less fixated on the single 'computer' metaphor, and on seeing the brain only through that dominant lens (which has been causing problems), we hope to move forward faster.

The shift allows us to look at the brain in terms of the behavior of complex, dynamic, information-processing systems. Now, this, to me, is starting to make sense.

To get a glimpse of complex dynamics, adaptation, and distribution (CAS - complex adaptive systems), I recommend the video Rethinking Thinking: How Intelligent Are Other Animals? See the segment at 50:26, where Simon Garnier talks about ants.
By all means, watch the full video - it is fascinating.

Simple rules can lead to complex adaptation and behavior.
The magic happens not because of the individual units (neurons) but because of the interactions between the units. David Eagleman describes the brain's thinking process in the same way.
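For a rough feel of how simple rules plus interaction can produce organised collective behavior, here is a toy simulation in the spirit of the classic two-branch ant experiments. It is not the model Garnier presents in the talk; the numbers and the reinforcement rule are invented for illustration. Each ant follows two rules: prefer the branch with more pheromone, and add pheromone to the branch it takes.

```python
import random

def run_colony(n_ants: int = 1000, seed: int = 1) -> list:
    """Toy two-branch choice model: each ant follows two simple local rules."""
    random.seed(seed)
    pheromone = [1.0, 1.0]   # both branches start with the same small amount
    shares = []
    for _ in range(n_ants):
        # Rule 1: choose a branch with probability weighted by (pheromone ** 2);
        # the nonlinearity is what eventually lets one branch dominate.
        w0, w1 = pheromone[0] ** 2, pheromone[1] ** 2
        branch = 0 if random.random() < w0 / (w0 + w1) else 1
        # Rule 2: reinforce the branch that was taken.
        pheromone[branch] += 1.0
        shares.append(pheromone[0] / sum(pheromone))
    return shares

shares = run_colony()
print(f"branch 0's share of pheromone after 10 ants:   {shares[9]:.2f}")
print(f"branch 0's share of pheromone after 1000 ants: {shares[-1]:.2f}")
# No single ant decides which branch wins; the collective choice emerges
# from the interactions (reinforcement) among many ants applying the same
# trivial rules.
```

The same logic - simple local rules, rich collective behavior - is what people have in mind when they call the brain a complex adaptive system.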

When it comes to managing complex systems such as an organization - the principles are the same.