In a study published in Neurocomputing, researchers from Surrey’s Nature-Inspired Computation and Engineering (NICE) group have shown that mimicking the brain’s sparse, structured neural wiring can significantly improve the efficiency of artificial neural networks (ANNs) – the technology underpinning generative AI and other modern AI systems such as ChatGPT – without sacrificing accuracy.
The method, called Topographical Sparse Mapping (TSM), rethinks how AI systems are wired at their most fundamental level. Conventional deep-learning models – such as those used for image recognition and language processing – connect every neuron in one layer to every neuron in the next, which wastes energy. TSM instead connects each neuron only to nearby or related ones, much like the way the brain’s visual system organises information efficiently. This naturally sparse design eliminates vast numbers of unnecessary connections and computations.
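The idea of wiring each neuron only to nearby inputs can be illustrated with a simple connectivity mask. The sketch below is a minimal, hypothetical example assuming a 1-D input layout and a fixed neighbourhood radius; the paper’s actual mapping scheme may differ.

```python
import numpy as np

def topographic_mask(n_in, n_out, radius):
    """Binary connectivity mask: each output unit connects only to
    input units within `radius` positions of its mapped centre,
    instead of to every input (fully connected)."""
    mask = np.zeros((n_out, n_in), dtype=bool)
    # Spread the output units evenly across the input axis.
    centers = np.linspace(0, n_in - 1, n_out)
    for j, c in enumerate(centers):
        lo = max(0, int(round(c)) - radius)
        hi = min(n_in, int(round(c)) + radius + 1)
        mask[j, lo:hi] = True
    return mask

# Example: 784 inputs (e.g. a 28x28 image), 64 hidden units.
mask = topographic_mask(n_in=784, n_out=64, radius=5)
sparsity = 1.0 - mask.mean()  # fraction of connections removed
```

At inference time the mask would simply be multiplied into the weight matrix (`y = (W * mask) @ x`), so pruned connections cost nothing to store in sparse form and contribute no computation.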
An enhanced version, called Enhanced Topographical Sparse Mapping (ETSM), goes a step further by introducing a biologically inspired “pruning” process during training – similar to how the brain gradually refines its neural connections as it learns. Together, these approaches allow AI systems to achieve equal or even greater accuracy while using only a fraction of the parameters and energy required by conventional models.
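One common way to realise such a pruning process is iterative magnitude pruning, where the weakest surviving connections are removed at intervals during training. The sketch below is a generic illustration of that idea, not the authors’ specific ETSM algorithm; the pruning schedule and criterion here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 784))       # stand-in for a trained weight matrix
mask = np.ones_like(W, dtype=bool)   # all connections initially alive

def prune_step(W, mask, fraction):
    """Permanently remove the smallest-magnitude `fraction` of the
    remaining connections (one-shot per step, no regrowth)."""
    alive = np.abs(W[mask])
    k = int(fraction * alive.size)
    if k > 0:
        thresh = np.partition(alive, k)[k]
        mask &= np.abs(W) >= thresh
    W *= mask  # zero out pruned weights
    return W, mask

# Prune half of the surviving weights after each "epoch".
for epoch in range(5):
    # ... gradient updates on the masked weights would go here ...
    W, mask = prune_step(W, mask, fraction=0.5)

sparsity = 1.0 - mask.mean()
```

After five halvings roughly 97% of the connections are gone, illustrating how repeated gentle pruning compounds into the very high sparsity levels the paper reports.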
Surrey’s enhanced model achieved up to 99% sparsity – meaning it could remove almost all of the usual neural connections – while still matching or exceeding the accuracy of standard networks on benchmark datasets. Because it avoids the constant fine-tuning and rewiring that other sparse-training approaches rely on, it trains faster, uses less memory and consumes less than one per cent of the energy of a comparable conventional model.
While the current framework applies the brain-inspired mapping to an AI model’s input layer, extending it to deeper layers could make networks even leaner and more efficient. The research team is also exploring how the approach could be used in other applications, such as more realistic neuromorphic computers, where the efficiency gains could have an even greater impact.
Reference: Kamelian Rad M, Neri F, Moschoyiannis S, Bauer R. Topographical sparse mapping: A neuro-inspired sparse training framework for deep learning models. Neurocomputing. 2025:131740. doi: 10.1016/j.neucom.2025.131740
