Here’s what you’ll learn in this story:
- Biomimetic AI seeks to copy the way a biological organism functions.
- If artificial intelligence ever operates more like the human brain, it will be far more energy efficient.
- AI can more closely mimic human brain function through a concept called active inference.
As fast as artificial intelligence can process information with its ever-expanding network of digital neurons—every line of code, every algorithm—it has yet to think with the efficiency of the human brain. Could it ever come close to processing information like our own brains and their biological neurons do? And if so, will that bring us one step closer to sentient AI that can think, feel, and perceive its own consciousness?
Biomimetic AI tries to copy how a biological organism functions, and this approach is the best bet for scientists who hope to create machines with computing power similar to the human brain. If that dream is realized, AI could someday help fill gaps in high-demand jobs such as teaching and medicine.
Turns out, artificial intelligence can more closely simulate human brain function through a model based on the concept of active inference, which is a way of understanding sentient behavior. “Active inference imbues machines with an authentic agency and an operational means to plan into the future and gauge the viability of a practical recommendation or action upon the world,” says theoretical neuroscientist Karl Friston, Ph.D., chief scientist at the Vancouver, Canada-based cognitive computing company Verses AI.
Tech entrepreneur Gabriel René founded Verses in 2018. He was inspired by the convergence of technologies like bionic limbs and next-gen virtual reality depicted in the cyberpunk books and movies of the 1980s and 1990s, even though their overall take on the rise of robotics and artificial intelligence seemed mostly dystopian to him. When René was working at a research and development lab in Santa Cruz, California, there was much anticipation about how “exponentially powerful emerging technologies” would converge in the future. The question was: how could this be achieved without spiraling into dystopia?
“We started Verses with the idea of bridging these different universes,” René says. “Over time, this has evolved into not just connecting the physical and digital universe, but different fields like computer science and neuroscience.”
René dreams of making intelligence universal, with the capacity to solve problems from the particle level to the scale of the entire planet. That vision of a powerful yet non-dystopian AI led René to collaborate with Friston, who created a paradigm known as the free energy principle. Based on thermodynamic free energy, it would later inspire active inference. Free energy is the amount of energy in a physical system that is available to do work, and physical systems tend to settle into states that minimize it. Under the free energy principle, this basically translates to AI doing the least work possible to come to a conclusion.
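To make that idea concrete, here is a minimal, hypothetical sketch in Python of what “doing the least work possible to come to a conclusion” can look like: the agent scores a handful of candidate explanations with a simple free-energy-style cost (how badly each one fits the data, plus how far it strays from what was already believed) and keeps the cheapest one. The numbers and names are illustrative, not Verses code.

```python
# Hypothetical free-energy-style scoring, for illustration only (not Verses' code).
# Each candidate explanation costs "misfit with the data" plus "departure from
# prior belief"; the agent keeps the explanation with the lowest total cost.

def free_energy(misfit: float, prior_surprise: float) -> float:
    """Lower is better: good fit to the data, little strain on prior beliefs."""
    return misfit + prior_surprise

candidates = {
    "it's raining": free_energy(misfit=0.2, prior_surprise=1.0),
    "a sprinkler is on": free_energy(misfit=0.4, prior_surprise=0.5),
    "the window is leaking": free_energy(misfit=1.5, prior_surprise=2.0),
}

best = min(candidates, key=candidates.get)
print(f"Cheapest explanation: {best} (cost {candidates[best]:.1f})")
```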
For instance, Friston’s AI model is designed to identify a two-dimensional spectrum of valence (positive or negative feelings) and arousal (intensity of emotion) when observing human behavior. “It’s exactly the same way our brains analyze our data,” he says. “At the time that we were developing the analysis software, we were also thinking about how the brain is working and realized that exactly the same principles that underwrote the analysis of the empirical scientific data could be applied to the way the brain makes sense of its sensory data.”
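As a rough, hypothetical illustration of what such an analysis might output (not Friston’s actual software), each observation of behavior can be summarized as a point on those two axes, valence and arousal:

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-dimensional valence/arousal readout described
# above; the axis ranges and labels are invented for illustration.

@dataclass
class Affect:
    valence: float  # -1.0 (negative feeling) to 1.0 (positive feeling)
    arousal: float  # -1.0 (calm) to 1.0 (intense)

def describe(a: Affect) -> str:
    feeling = "positive" if a.valence >= 0 else "negative"
    intensity = "high-arousal" if a.arousal >= 0 else "low-arousal"
    return f"{feeling}, {intensity}"

print(describe(Affect(valence=0.7, arousal=-0.4)))  # e.g. contentment
print(describe(Affect(valence=-0.6, arousal=0.8)))  # e.g. distress
```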
Verses created its Genius AI to think in exactly this manner. In a moment of situational awareness, the human brain makes the best guess about what is going on in the world around it, making an inference about the best course of action that should be taken and how to minimize free energy while carrying out that action. Genius is a level up from existing AI because it can pick up on sensory input, analyze which uncertainties need to be teased out, find solutions, and communicate its degree of confidence. As Friston says: “It knows what it doesn’t know.”
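Reading that description as a loop, a toy version might look like the sketch below. It is an assumption-laden illustration, not the Genius architecture: the agent holds probabilities over possible states of the world, updates them from a new observation, measures how much uncertainty remains, and either acts or gathers more evidence, reporting its confidence either way.

```python
import math

# Toy active-inference-style loop (an assumption, not Verses' Genius):
# 1. observe, 2. update beliefs, 3. measure what is still unknown,
# 4. act only when confident enough, otherwise gather more evidence.

beliefs = {"door is open": 0.5, "door is closed": 0.5}

def update(beliefs, likelihoods):
    """Bayesian update: weight each hypothesis by how well it explains the observation."""
    posterior = {h: p * likelihoods[h] for h, p in beliefs.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

def entropy(beliefs):
    """How much the agent still doesn't know (0 = completely certain)."""
    return -sum(p * math.log2(p) for p in beliefs.values() if p > 0)

# A noisy sensor reading that favors "door is open".
beliefs = update(beliefs, {"door is open": 0.9, "door is closed": 0.2})

best = max(beliefs, key=beliefs.get)
if entropy(beliefs) < 0.5:
    print(f"Act on '{best}' (confidence {beliefs[best]:.0%})")
else:
    print(f"Too uncertain (confidence {beliefs[best]:.0%}); gather more evidence")
```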
So does that mean Genius can read your mind? Friston believes it can—in a way, at least. While there is no crystal ball involved, Genius mimics human thinking by using its own prior insights to infer what someone else believes or intends, and it folds that information into future decisions. René looks at Genius as the AI version of the brain’s prefrontal cortex, which processes inputs from our surroundings from moment to moment before reacting to them.
“I think that the missing ingredient, the missing link for AI, is what you can think of as the executive functions of the brain,” he says. “You can think of other models as the sensory cortex, vision cortex, motor cortex, language comprehension, but the prefrontal cortex is where the executive function happens. It does reasoning, planning, regulation, and it decides how to use sensory or motor control outputs.”
Applications for Genius could range from smart search engines (which will know exactly what to look for) to smart houses and smart cities that automatically adjust to the needs of their inhabitants. There could even be autonomous robots that can carry out science experiments on other planets and moons.
Genius will also be a safer and more sustainable alternative to existing models. Most LLMs (large language models) like Gemini and Deepthink require enormous amounts of energy to perform operations, on the scale of tens of thousands of kilowatt-hours. By contrast, the human brain runs on only 20 watts of power. Genius agents also run on watts rather than gigawatts, and they will be able to run off the battery of a smartphone or laptop instead of plugging into the cloud.
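For a rough sense of scale, here is a back-of-the-envelope comparison using only the figures quoted above (the workload number is illustrative, not a measurement of any particular model):

```python
# Back-of-the-envelope comparison using the figures quoted above.
brain_power_watts = 20                       # human brain: ~20 watts
brain_energy_kwh = brain_power_watts / 1000  # one hour of thinking = 0.02 kWh

workload_energy_kwh = 50_000                 # "tens of thousands" of kWh, illustrative

print(f"Brain, one hour: {brain_energy_kwh} kWh")
print(f"Ratio: {workload_energy_kwh / brain_energy_kwh:,.0f}x more energy")  # ~2,500,000x
```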
To René, Genius is recreating the shift of the late 1970s and early 1980s away from one supercomputer for everything and toward the personal computer. It’s a way of rewinding in order to fast-forward technology. Verses is also building a smart PC that incorporates Genius. It will have personalized intelligence coming from a network of “not just one big superintelligence, but . . . lots of little intelligences.”
Genius could eventually grow into a network of billions of AI agents that think beyond human intelligence to come up with solutions.
“Much of the philosophy at Verses is that the whole point of learning, for any artifact put into some niche, is to try to see that world by making a generative model of what’s going on,” Friston says. “In the future, there will be a true integration of those artifacts. They’ll all learn about each other, and there will be a convergence.”
Could AI that thinks like the human brain go from cyberpunk imagination to being incarnated in all sorts of smart technology? You can wonder, but it might tell you itself.
Elizabeth Rayne is a creature who writes. Her work has appeared in Popular Mechanics, Ars Technica, SYFY WIRE, Space.com, Live Science, Den of Geek, Forbidden Futures and Collective Tales. She lurks right outside New York City with her parrot, Lestat. When not writing, she can be found drawing, playing the piano or shapeshifting.