AI Is Nothing Like a Brain, and That’s OK

These processes are all running across various scales, from single neurons to local networks to networks that span the entire brain and even the entire body. The ever-shifting dynamics of the nervous system are made possible by neuromodulators, a subset of neurotransmitters that act more slowly and spread more broadly across brain regions. They are “the master switches in the brain,” according to Srikanth Ramaswamy, the head of the neural circuits lab at Newcastle University.

Neuromodulators are released from elaborate axonal trees at the ends of some neurons, and they allow the brain to adapt to new situations over seconds to minutes. For example, noradrenaline released during stress primes the body for action. The system is finely tuned: Studies have shown that molecules released from different branches of the same tree can have distinct effects on an animal’s behavior, such as whether a mouse runs or stops.

“You would have no idea where to put that in a neural network,” Shine said. “There is complexity hidden in neuroscience that is just inaccessible to modern neural networks because they’re constructed differently.”

Crucially, an artificial neural network is not made of physical connections like the neurons in a brain. The network is abstract: It lives in a world of math and calculations, as algorithms programmed into silicon chips. It’s “basically just linear algebra,” plus some other nonlinear computations, said Mitchell Ostrow, a computational neuroscience graduate student at MIT.
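
As a rough sketch (generic and illustrative, not code from any particular system), a single layer of such a network is just a matrix multiplication followed by a simple nonlinear function applied to each output:

```python
import numpy as np

def layer(x, W, b):
    """One network layer: linear algebra (W @ x + b) followed by
    an elementwise nonlinearity (here, ReLU)."""
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)       # input: 4 numbers
W = rng.normal(size=(3, 4))  # learned connection weights
b = np.zeros(3)              # learned biases
print(layer(x, W, b))        # output: 3 numbers, some zeroed by ReLU
```

Stacking many such layers, each feeding its output to the next, is what makes a network “deep.”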

To reach the complexity of even one biological neuron, a modern deep neural network requires between five and eight layers of nodes. But expanding artificial neural networks to more than two layers took decades. In deeper networks, it becomes much harder to figure out which weights the network should tweak, and by how much, to minimize the error in its predictions. In 1974, the computer scientist Paul Werbos came up with an innovation called backpropagation that solved this problem.

In 1986, Geoffrey Hinton — the so-called godfather of AI who was awarded the 2024 Nobel Prize in Physics for his work on machine learning — and his colleagues wrote an influential paper about how neural networks could be trained using backpropagation. This idea, which wasn’t directly based in neuroscience, would become key to deepening neural networks and improving their learning.
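
Here is a minimal sketch of that training loop, in generic textbook form rather than anything from the 1986 paper itself: The network makes a prediction, the error flows backward through the chain rule, and every weight is nudged in the direction that shrinks the error.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer network trained to fit y = sin(x); purely illustrative.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X)

W1, b1 = rng.normal(size=(1, 16)) * 0.5, np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)) * 0.5, np.zeros(1)
lr = 0.01

for step in range(2000):
    # Forward pass: compute the network's prediction and its error.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backward pass (backpropagation): the chain rule carries the error
    # back through each layer, yielding a gradient for every weight.
    grad_pred = 2 * err / len(X)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_pre = grad_h * (1 - h**2)   # derivative of tanh
    grad_W1 = X.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    # Nudge every weight against its gradient to shrink the error.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final mean squared error:", float((err**2).mean()))
```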

In the 1990s, computer scientists finally deepened neural networks to three layers. But it wasn’t until the 2010s, when they learned to structure their algorithms so that huge numbers of calculations could run in parallel on graphics chips, that neural networks deepened to dozens and then hundreds of layers.

These advances led to today’s powerful neural networks, which can surpass the human brain in certain tasks. They can be trained on billions of images or words that would be impossible for a human to analyze in a lifetime. They beat human world champions in games such as chess and Go. They can predict the structure of almost any known protein in the world with a high degree of accuracy. They can write a short story about McDonald’s in the style of Jane Austen.

However, while these abilities are impressive, the algorithms don’t really “know” things the way we do, Cobb said. “They do not understand anything.” They learn mainly by recognizing patterns in their training data; to do that, they typically need to be trained on an immense amount of it.

Meanwhile, even the simplest nervous systems in the animal kingdom have knowledge. “A maggot knows things about the outside world in a way that no computer does,” Cobb said. And one maggot is different from another because each learns by interacting with, and gaining information from, its environment. We don’t know how to infuse machines with knowledge beyond feeding them a set of facts, he said.

Artificial neural networks are simpler and not as dynamic as the systems that give them their name. They work very well for what they’ve been designed to do, such as recognize objects or answer prompts. But they have “no way to reason” like a human brain does, Ramaswamy said. “I think adding biological detail would play a huge role in enabling this.”

And that is what he is trying to do.

Infusing Biology

Because the gears of biology, honed by evolution, have proven to work pretty well in the brain, some researchers think that artificial neural networks could be improved by returning to their inspiration and better mimicking some neurobiological features. Such brain-inspired computing, also known as neuromorphic computing, doesn’t require chemistry, Ramaswamy said. Rather, it’s possible to abstract the idea of molecules into algorithmic equivalents that work across circuits.
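
One way to picture that abstraction (a hypothetical toy example, not Ramaswamy’s actual model) is a single modulator signal, broadcast across a whole circuit, that scales how strongly every connection updates, much as a diffuse chemical would:

```python
import numpy as np

def modulated_update(W, grad, modulator, base_lr=0.01):
    """Hypothetical neuromodulator-like signal: one scalar, shared
    across the whole circuit, that scales plasticity. A high value
    (say, signaling surprise or novelty) means fast learning; a low
    value means the weights barely change."""
    return W - base_lr * modulator * grad

W = np.ones((3, 3))
grad = np.full((3, 3), 0.5)

print(modulated_update(W, grad, modulator=2.0))  # alert state: big update
print(modulated_update(W, grad, modulator=0.1))  # quiet state: tiny update
```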

His team has found that infusing some diversity into the way artificial neurons behave makes neural networks work better. For example, in preliminary work published on the scientific preprint site arxiv.org in 2024, his team found that programming artificial neurons to fire at different rates improved how well their systems learned. Ramaswamy is also looking at network effects seen in biological nervous systems. This year, his team theorized that designing neural networks to include the kind of information that neuromodulators provide would improve their ability to learn continuously like the brain does.
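
To give a flavor of what such diversity might look like in code (an illustrative sketch under assumed details, not the architecture from the preprint), one can hand each recurrent neuron its own time constant, a rough stand-in for neurons that respond at different rates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical diversity: instead of identical units, each recurrent
# neuron gets its own time constant tau, so some react quickly and
# others integrate inputs over longer windows.
n = 32
tau = rng.uniform(1.0, 10.0, size=n)  # diverse, per-neuron timescales
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))

def step(state, inp, dt=1.0):
    """Leaky recurrent update: small-tau neurons track the newest
    input; large-tau neurons hold on to older context."""
    drive = np.tanh(W @ state + inp)
    return state + (dt / tau) * (drive - state)

state = np.zeros(n)
for t in range(100):
    state = step(state, inp=rng.normal(size=n) * 0.1)
print("state spread across neurons:", state.std())
```

Mixing fast and slow units gives the network a wider repertoire of timescales than a population of identical units would have.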


