By Terrence C. Stewart and Chris Eliasmith
We review the methods used to construct Spaun, the first biologically detailed brain model capable of performing cognitive tasks. Spaun has 2.5 million simple, spiking neurons with 60 billion connections between them. These neurons are arranged to respect known anatomical and physiological constraints of the mammalian brain. The resulting model can perform eight different perceptual, motor, and cognitive tasks (see http://nengo.ca/build-a-brain/spaunvideos for video demonstrations). We built Spaun using Nengo, our general-purpose tool set for building systems that compute using neuron-like components in a biologically constrained manner. Building such systems is critical for improving our understanding of how the brain works and for making use of recent advances in neuromorphic hardware.
We recently described what is currently the largest functional brain model that is both biologically detailed and capable of performing a variety of cognitive tasks. This model, which we refer to as the Semantic Pointer Architecture Unified Network (or Spaun), consists of 2.5 million simulated spiking neurons whose properties and interconnections are consistent with those found in the human brain. The model receives input in the form of digital images on a virtual retina and produces output that controls a simulated arm. Spaun is able to perform eight different tasks, including digit recognition, serial working memory, pattern completion, mental arithmetic, and question answering. Furthermore, it is able to switch between these tasks based on its visual input, meaning that there are no external modifications made to the model between tasks (see http://nengo.ca/build-a-brain/spaunvideos for video demonstrations). This sort of flexibility is a hallmark of cognitive systems, but is difficult to produce with traditional neural modeling approaches.
We achieved this using the Neural Engineering Framework (NEF), a mathematical theory that provides methods for systematically generating functional, biologically plausible spiking networks. We also employ the Semantic Pointer Architecture (SPA), a hypothesis regarding some aspects of the organization, function, and representational resources used in the mammalian brain. We have made these methods highly accessible through the software tool Nengo (http://nengo.ca). The resulting NEF/SPA/Nengo combination has proved very effective for rapidly constructing biologically detailed neural models that can be related directly to specific behaviour.
The goal of this research is to understand how the brain works by reverse-engineering it. We do this by building biologically plausible models of cognitive processes. For us, these are models where the individual components (simulated neurons) can be made as similar to real neurons as desired, and where the large-scale anatomy and connectivity of the brain are respected. While Spaun uses the leaky integrate-and-fire (LIF) model of a neuron (the simplest and most common neuron model that produces spikes), all of the techniques apply to more detailed neural models. Different neurons in different brain areas have different biological and functional properties, and we use this neurophysiological data to constrain our models. While we do not argue that this is the only way to build such models, we believe that the Neural Engineering Framework (NEF) is a general tool for implementing a very large set of algorithms in neuron-like components. The Semantic Pointer Architecture (SPA) is one particular architecture that can be implemented with the NEF, and we believe it is quite promising for matching human performance.
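The core idea of the NEF is that a population of spiking neurons can represent a value through its collective tuning curves, and that the value can be recovered by a linear decoding of the population's activity. The sketch below illustrates these two representation principles with rate-mode LIF neurons and NumPy; it is a minimal toy example, not Spaun's actual code, and the specific parameter choices (50 neurons, ±1 encoders, 100–200 Hz maximum rates) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
TAU_RC, TAU_REF = 0.02, 0.002   # membrane and refractory time constants (s)

def lif_rate(J):
    """Steady-state LIF firing rate for input current J
    (zero below the threshold current J = 1)."""
    out = np.zeros_like(J)
    m = J > 1
    out[m] = 1.0 / (TAU_REF + TAU_RC * np.log1p(1.0 / (J[m] - 1.0)))
    return out

n_neurons = 50
x = np.linspace(-1, 1, 200)     # values the population will represent

# Each neuron gets a random preferred direction (encoder), maximum
# firing rate, and firing threshold (intercept).
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
max_rates = rng.uniform(100, 200, n_neurons)
intercepts = rng.uniform(-0.95, 0.95, n_neurons)

# Solve for the gain and bias current that realize those tuning choices.
j_max = 1 + 1 / np.expm1((1 / max_rates - TAU_REF) / TAU_RC)
gain = (j_max - 1) / (1 - intercepts)
bias = 1 - gain * intercepts

# Tuning curves: each neuron's rate for every represented value x.
A = lif_rate(gain[:, None] * encoders[:, None] * x[None, :] + bias[:, None])

# Linear decoders that reconstruct x from the population's activity,
# found by regularized least squares.
reg = (0.1 * A.max()) ** 2 * len(x)
decoders = np.linalg.solve(A @ A.T + reg * np.eye(n_neurons), A @ x)
x_hat = decoders @ A
rmse = np.sqrt(np.mean((x_hat - x) ** 2))
```

Even with only 50 heterogeneous, noisy-looking tuning curves, the decoded estimate `x_hat` tracks the represented value closely; in Nengo, these encoder, gain, bias, and decoder computations are performed automatically when a model is built.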
As a side effect of this reverse-engineering goal, the Neural Engineering Framework provides a practical methodology for biomimetic computation. That is, it provides a method for designing computational devices that are highly parallel, redundant, and robust to variability in their own components, unlike traditional digital von Neumann computers. For the mathematical and conceptual details of the methods, we point interested readers to a recent description of this work, which appeared in the Proceedings of the IEEE.
Limitations and engineering applications
While we believe that models like Spaun are moving us towards a better understanding of brain function, there remain many challenges ahead. It is important to keep in mind that Spaun has 40,000 times fewer neurons than the human brain. Consequently, it is still not clear how well the methods of the SPA will scale, despite encouraging initial results. Similarly, Spaun includes several simplifying assumptions regarding the number and kind of neurotransmitters, and physiological properties of individual neurons. Again, past work using the same methods has incorporated a wider variety of such properties than are found in Spaun, but it remains to be seen how additional biological detail will affect Spaun’s functioning.
From a computational perspective, simulating large-scale neural models on conventional computational hardware is difficult. For Spaun, it took approximately 2.5 hours of simulation time to generate one second of behavior on a high-end workstation. While we believe that this simulation can be made much more efficient (and in the latest version of Nengo we have made significant improvements), it is clear that alternative computing approaches would be advantageous.
A wide variety of such brain-inspired computing devices exist, all based around the idea of having a large number of simple neuron-like components whose spiking activity is based on the sum of their inputs. We have used the NEF as a general method for programming such neuromorphic computers. Examples of using the NEF in this way can be found on efficient digital architectures employing thousands or millions of ARM cores (the SpiNNaker project), analog architectures that directly incorporate forms of learning, and hybrid analog/digital architectures (the Neurogrid project). There are many benefits to this new computing paradigm, including orders of magnitude better power efficiency per computation, robustness to noise and variability, and massive parallelism.
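The component these devices share is simple to state: a unit that integrates its summed input current and emits a spike when a threshold is crossed. As a toy illustration (not the behavior of any particular chip), here is a discrete-time LIF neuron driven by the sum of several input streams; the time step and time constants are illustrative assumptions.

```python
import numpy as np

def simulate_lif(inputs, dt=0.001, tau_rc=0.02, tau_ref=0.002):
    """Simulate one LIF neuron. `inputs` is a 2-D array of input
    currents, one row per presynaptic source; the neuron's spiking
    depends only on their sum."""
    J = inputs.sum(axis=0)             # total input current per time step
    v, refractory, spikes = 0.0, 0.0, []
    for j in J:
        if refractory > 0:             # in refractory period: hold at reset
            refractory -= dt
            spikes.append(0)
            continue
        v += dt * (j - v) / tau_rc     # leaky integration of the membrane
        if v >= 1.0:                   # threshold crossed: emit a spike
            spikes.append(1)
            v = 0.0
            refractory = tau_ref
        else:
            spikes.append(0)
    return np.array(spikes)
```

For example, two constant input streams of current 1.0 each drive the neuron to fire regularly at a few tens of hertz; neuromorphic hardware implements millions of such units in parallel, in silicon rather than software.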
Because the NEF was developed to address systems with these same properties (but in a biological setting), it has proven an effective means of programming such hardware, by indicating what the connection weights should be to achieve different computational results. For example, the NEF has been used to control a robot that can learn by treating training examples as the function to be approximated, to do operational space control on a 3-joint arm, and to implement a model of the rat hippocampus’ path integration ability on a mobile robot. In all of these examples, the algorithms being implemented are well-suited to approximation using the NEF, and so are much more efficient when implemented on neuromorphic hardware than on traditional computing devices.
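Concretely, the NEF sets the weight between neuron i in one population and neuron j in the next to the product of i's decoder for the desired function and j's gain and encoder. The sketch below shows this for the toy function f(x) = x²; it is a self-contained NumPy illustration under assumed parameters (60 rate-mode LIF neurons per population, ±1 encoders), not the toolchain used on SpiNNaker or Neurogrid.

```python
import numpy as np

rng = np.random.default_rng(1)
TAU_RC, TAU_REF = 0.02, 0.002

def lif_rate(J):
    """Steady-state LIF firing rate for input current J."""
    out = np.zeros_like(J)
    m = J > 1
    out[m] = 1.0 / (TAU_REF + TAU_RC * np.log1p(1.0 / (J[m] - 1.0)))
    return out

def make_population(n):
    """Random LIF population representing values in [-1, 1]."""
    enc = rng.choice([-1.0, 1.0], size=n)
    max_r = rng.uniform(100, 200, n)
    icpt = rng.uniform(-0.9, 0.9, n)
    j_max = 1 + 1 / np.expm1((1 / max_r - TAU_REF) / TAU_RC)
    gain = (j_max - 1) / (1 - icpt)
    return enc, gain, 1 - gain * icpt

def rates(enc, gain, bias, x):
    return lif_rate(gain[:, None] * enc[:, None] * x[None, :] + bias[:, None])

def solve_decoders(act, target):
    """Regularized least-squares decoders for a target function."""
    reg = (0.1 * act.max()) ** 2 * act.shape[1]
    return np.linalg.solve(act @ act.T + reg * np.eye(act.shape[0]),
                           act @ target)

n = 60
x = np.linspace(-1, 1, 200)
enc_a, gain_a, bias_a = make_population(n)
enc_b, gain_b, bias_b = make_population(n)

# Decoders that read f(x) = x**2 out of population a's activity.
A = rates(enc_a, gain_a, bias_a, x)
d_f = solve_decoders(A, x ** 2)

# The NEF weight matrix: W[j, i] = gain_b[j] * enc_b[j] * d_f[i].
# Driving population b through W makes b represent x**2.
W = np.outer(gain_b * enc_b, d_f)
B = lif_rate(W @ A + bias_b[:, None])

# Reading b's represented value out with its own identity decoders
# recovers ~x**2, with no explicit arithmetic anywhere in the network.
d_id = solve_decoders(rates(enc_b, gain_b, bias_b, x), x)
estimate = d_id @ B
rmse = np.sqrt(np.mean((estimate - x ** 2) ** 2))
```

Because `W` factors into a small decoder vector and a small encoder vector, hardware need not store the full n×n matrix; this factorization is one reason NEF-designed networks map efficiently onto neuromorphic platforms.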
This research allows us to test psychological theories by implementing them in neurons and comparing the model performance to human (and animal) performance. Importantly, we can do this comparison based not only on the overt behavior, but also on low-level neural measurements, such as firing patterns in different brain areas. This provides strong constraints on theories of brain function. We believe accurate models of these neural functions could lead to improved understanding of how particular neural disorders (such as Parkinson’s Disease or Alzheimer’s Disease) produce their behavioural effects.
As well, this research provides a novel method for programming highly parallel hardware. The human brain can be thought of as 100 billion interconnected processors (neurons). Each of these processors is slow, noisy, and can only compute one operation (the neural non-linearity), but by connecting them in different ways we can approximate a wide variety of functions. This is a new approach for neuromorphic engineering, and our ongoing collaborations are examining the possibilities for implementing complex cognitive algorithms efficiently.
Simulating the human brain is a monumental task and clearly beyond what a single research group can accomplish. This is why we have created general, open tools that can be applied to many different brain areas and many different kinds of tasks. We hope that being able to integrate ideas from psychology, neuroscience, and artificial intelligence, and to construct large-scale neural models that connect sensory, cognitive, and motor systems, makes for an exciting new approach to brain research and machine intelligence. We believe these sorts of models will be extremely beneficial for understanding human cognition, treating brain disorders, and developing efficient parallel computation.
Terrence C. Stewart received a Ph.D. degree in cognitive science from Carleton University in 2007. He is a post-doctoral research associate in the Department of Systems Design Engineering with the Centre for Theoretical Neuroscience at the University of Waterloo, in Waterloo, Canada. His core interests are in understanding human cognition by building biologically realistic neural simulations, and he is currently focusing on language processing and motor control.
Chris Eliasmith received a Ph.D. degree in philosophy from Washington University in St. Louis in 2000. He is a full professor at the University of Waterloo. He is currently Director of the Centre for Theoretical Neuroscience at the University of Waterloo and holds a Canada Research Chair in Theoretical Neuroscience. He has authored or coauthored two books and over 100 publications in philosophy, psychology, neuroscience, computer science, and engineering venues.