Rethinking intelligence
Neuroscience and AI have both reached a phase where one helps us to better understand the other. It is time to rethink intelligence.
I used to be sceptical when people told me that the human brain is just like a computer. This is because the architectures are simply too different. However, recent advances in artificial intelligence have enabled us to simulate human-like intelligence on computers with considerable success. I suspect that another breakthrough could emerge with a new computing paradigm that is more closely aligned with the architecture of our brains.
AI has experienced many highs and lows throughout its history, with the latter often referred to as ‘AI winters’. However, over the past couple of years, we have clearly been enjoying an AI summer. In fact, the rise in temperatures is so significant that we need to rethink what intelligence really is, given that artificial intelligence can now replace or emulate many human capabilities.
Does the human brain function in the same way as a computer? Well, yes and no. Back in 2018, just before the AI spring, yours truly made the following observation:
The human brain as a computer is a powerful metaphor, but it is utterly wrong and should be abandoned as soon as possible, psychologist Robert Epstein asserts. This metaphor helps us neither to better understand how our brain works nor to more thoroughly inform our technological progress. Instead, it fosters the view of humans as entities that can (and possibly should) be emulated, replaced and superseded by machines that do everything humans can do – but better, faster, and cheaper.
It is more helpful to see humans and machines at opposite ends of a spectrum. Computers are very good at things humans are bad at, and vice versa. In this view, the singularity (the merger of humans and machines) becomes very unlikely. Thus, the whole singularity train of thought is flawed and will inevitably end in a train wreck. Bye-bye Ray Kurzweil, see you in 2045.
Not so fast. It’s certainly true that digital computers have a very different architecture from the human brain. This stems from the earliest design decisions. Binary systems and their logic have served us well over the years, but they have their limitations:
Although GPUs and TPUs are a step in the right direction, AI infrastructure today remains hobbled by its classical legacy. We are still far from having chips with billions of processors on them, all working in parallel on locally stored data. And AI models are still implemented using sequential instructions. Conventional computer programming, chip architecture and system design are simply not brain-like. We are simulating neural computing on classical computers, which is inefficient — just as simulating classical computing with brains was, back in the days of human computation.
There is a significant performance issue with our current AI implementation, but an emerging computing paradigm could potentially solve it. The brain is highly energy-efficient by the standards of today’s AI hardware, yet it still consumes a disproportionate share of the body’s energy. This doesn’t mean that we can’t emulate or simulate human intelligence with today’s digital computers. For many practical purposes, the simulation is already good enough.
Is the simulation real?
And that brings us to a somewhat philosophical question: Is the simulation real? Is the artificial intelligence we know today truly intelligent? This, in turn, forces us to rethink what intelligence is. Thankfully, neuroscience can help us here. Thanks to its insights, we now have a good understanding of how the brain works. As Anil Seth said at NEXT23:
The brain is a prediction machine: all we perceive is the brain’s best guess as to the source of the inputs it is receiving. There’s no light or sound in the brain, just electrical signals. To make sense of these signals, the brain has to make some informed guesswork as to what caused these signals. This is what we experience. The brain doesn’t read out the world, it creates it.
This predictive capability of the human brain is precisely what large language models are designed to replicate. Prediction is closely linked to action. We make predictions before acting and adjust them as we go along:
Although we don’t yet fully understand the algorithms LLMs learn, we’re starting to grasp why learning to predict the next token works so well. The “predictive brain hypothesis” has a long history in neuroscience; it holds that brains evolved to continually model and predict the future — of the perceptual environment, of oneself, of one’s actions, and of their effects on oneself and the environment. Our ability to behave intentionally and intelligently depends on such a model.
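The principle of learning to predict the next token can be illustrated with a deliberately tiny sketch. The snippet below (illustrative only: the function names and corpus are mine, and real LLMs learn vastly richer statistics with neural networks rather than raw bigram counts) predicts each word from its predecessor by counting which words follow which:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for every token, which tokens follow it in the corpus."""
    follows = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, token):
    """Return the most frequent successor of `token`, or None if unseen."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

corpus = "the brain predicts the world and the brain creates a model".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "brain", its most frequent successor
```

Scaled up from word pairs to long contexts and billions of parameters, this is the same objective LLMs are trained on: given everything so far, guess what comes next.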
Current AI implementations are limited because they cannot learn while in operation: they are trained in advance and then essentially frozen. However, we can expect this limitation to be overcome sooner or later.
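The train-then-freeze limitation can be made concrete with a toy predictor (a sketch of my own, not how production models are actually served): while training, it updates its internal state with every observation; once frozen, new data no longer changes it.

```python
class RunningMeanPredictor:
    """Toy model that predicts the mean of the values seen so far."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.frozen = False

    def update(self, value):
        if self.frozen:          # a deployed, frozen model skips learning
            return
        self.count += 1
        self.mean += (value - self.mean) / self.count

    def predict(self):
        return self.mean

model = RunningMeanPredictor()
for v in [2.0, 4.0, 6.0]:        # "training" phase: the model adapts
    model.update(v)
model.frozen = True              # "deployment": parameters stop changing
model.update(100.0)              # new observation is ignored
print(model.predict())           # prints 4.0, unchanged by the new data
```

Research on continual and online learning aims to relax exactly this constraint, letting models keep adapting after deployment.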
Is intelligence fundamentally social?
But what about social and emotional intelligence? As NEXT25 speaker Hannah Critchlow has pointed out, these skills underpin successful collaboration. Is social intelligence inherently human, or is intelligence fundamentally social? This is another philosophical question – but one that we don’t need to answer right now. It’s clear that humanity has come this far only through collaboration and the division of labour on a massive scale.
The “social intelligence hypothesis” holds that intelligence explosions in brainy species like ours arose due to a social feedback loop. Our survival and reproductive success depend on our ability to make friends, attract partners, access shared resources and, not least, convince others to help care for our children. All of these require “theory of mind,” the ability to put oneself in another’s shoes: What does the other person see and feel? What are they thinking? What do they know, and what don’t they know? How will they behave?
Keeping track of others’ mental states – theory of mind – is a complex cognitive task linked to social living. Research shows that primates’ brain sizes, and the human brain regions associated with theory of mind, correlate with the size of their social groups; having more friends is also tied to better health and longevity. Both findings support the idea of a “social brain” shaped by evolution.
Intelligence emerges from collaboration
Theory of mind enables not just social manoeuvring but also the advanced cooperation essential for teaching, reputation management, economies, and technologies. When groups cooperate at large scales, they become more powerful, driving a major evolutionary transition: independent individuals form new, interdependent entities – like modern societies – where survival and success depend on collective effort rather than going it alone.
We are a superorganism. As such, our intelligence is already collective and, therefore, in a sense, superhuman. That’s why, when we train LLMs on the collective output of large numbers of people, we are already creating a superintelligence with far greater breadth and average depth than any single person — even though LLMs still usually fall short of individual human experts within their domains of expertise.
Interestingly enough, the brain itself is inherently social – its various cortical areas communicate and collaborate much like a community rather than functioning as a single, monolithic entity.
Within our brains, there is a constant division of cognitive labour, with specialised regions handling perception, memory, planning, empathy, and countless other tasks, all working together to create coherent thought and behaviour. This internal “collective intelligence” mirrors the kinds of group dynamics and social cooperation we see in human societies.
When we combine insights from neuroscience with advances in artificial intelligence, we gain powerful new perspectives on what intelligence truly means. Rather than viewing intelligence as the product of a lone agent, these fields suggest it emerges from collaboration – within the brain, among people, and now between humans and machines. This realisation gives us ample reason to rethink the very nature of intelligence itself.
First published at nextconf.eu. Picture by Adrien Converse / Unsplash.