The human brain is made up of some 100 billion neurons: live wires that must be kept in delicate balance to stabilize the world’s most magnificent computing organ. Too much excitation and the network will slip into an epileptic, uncomprehending chaos. Too much inhibition and it will flatline. A new mathematical model describes how the trillions of interconnections among neurons could maintain a stable but dynamic relationship that leaves the brain sensitive enough to respond to stimulation without veering into a runaway seizure.
Marcelo O. Magnasco, head of the Laboratory of Mathematical Physics at The Rockefeller University, and his colleagues developed the model to address how a network as massively complex and responsive as the brain can balance the opposing forces of excitation and inhibition. His model’s key assumption: Neurons function together in localized groups to preserve stability. “The defining characteristic of our system is that the unit of behavior is not the individual neuron or a local neural circuit but rather groups of neurons that can oscillate in synchrony,” Magnasco says. “The result is that the system is much more tolerant to faults: Individual neurons may or may not fire, individual connections may or may not transmit information to the next neuron, but the system keeps going.”
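That fault tolerance can be illustrated with a standard toy from the physics of synchronization, the Kuramoto model of coupled oscillators. This is not the model in Magnasco’s paper; it is only a sketch of the general idea that a synchronous group survives the loss of its individual members. The network size, coupling strength, and the choice to silence roughly 30 percent of the units are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
phases = rng.uniform(0, 2 * np.pi, n)   # initial phases
omega = rng.normal(1.0, 0.05, n)        # natural frequencies
K, dt = 2.0, 0.05                       # coupling strength, time step
alive = rng.random(n) > 0.3             # ~30% of units randomly "fail"

for _ in range(2000):
    # the "group": a mean field computed from the surviving units
    field = np.mean(np.exp(1j * phases[alive]))
    phases[alive] += dt * (omega[alive]
                           + K * np.abs(field)
                           * np.sin(np.angle(field) - phases[alive]))

# An order parameter near 1 means the surviving group still
# oscillates in lockstep despite the dropped units.
sync = np.abs(np.mean(np.exp(1j * phases[alive])))
print(f"synchrony of survivors: {sync:.2f}")
```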
Magnasco’s model differs from traditional models of neural networks, which assume that each time a neuron fires and stimulates an adjoining neuron, the strength of the connection between the two increases. This is called the Hebbian theory of synaptic plasticity and is the classical model for learning. “But our system is anti-Hebbian,” Magnasco says. “If the connections among any groups of neurons are strongly oscillating together, they are weakened because they threaten homeostasis. Instead of trying to learn, our neurons are trying to forget.” One advantage of this anti-Hebbian model is that it balances a network with a much larger number of degrees of freedom than classical models can accommodate, a flexibility that is likely required by a computer as complex as the brain.
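In code, the difference between the two rules comes down to a sign flip on a correlation-driven weight update. The Python sketch below is a minimal illustration of that contrast for a rate-based network with an outer-product update; the network size and learning rate are arbitrary assumptions, and this is not the paper’s actual dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                             # illustrative network size
W = rng.normal(0, 0.1, (n, n))     # synaptic weight matrix
eta = 0.01                         # illustrative learning rate

def hebbian_update(W, x):
    """Classical Hebb: co-active neurons strengthen their connection."""
    return W + eta * np.outer(x, x)

def anti_hebbian_update(W, x):
    """Anti-Hebbian, as Magnasco describes it: strongly correlated
    activity *weakens* the connection, pushing the network back
    toward homeostasis -- the neurons are "trying to forget"."""
    return W - eta * np.outer(x, x)

x = rng.normal(size=n)             # one snapshot of neural activity
W = anti_hebbian_update(W, x)
```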
In work published this summer in Physical Review Letters, Magnasco theorizes that the connections that balance excitation and inhibition are continually flirting with instability. He likens the behavior to an indefinitely large number of public address systems tweaked to that critical point at which a flick of the microphone brings on a screech of feedback that then fades to quiet with time.
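One way to picture this flirtation with instability is a damped oscillator tuned just inside the stable regime: kick it, and it rings loudly before fading, much like the microphone screech in the analogy. The sketch below is an assumption-laden toy, not the system analyzed in the paper; the damping value is chosen only to sit near the critical point at zero.

```python
import numpy as np

# A 2-D linear system whose eigenvalues sit just inside the unit
# circle: near-critical, so perturbations ring and slowly decay.
damping = 0.02               # 0 would be exactly critical
theta = 0.2                  # oscillation frequency (radians/step)
A = (1 - damping) * np.array([[np.cos(theta), -np.sin(theta)],
                              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])     # the "flick of the microphone"
trace = []
for _ in range(300):
    x = A @ x                # one pass through the feedback loop
    trace.append(x[0])

# Near criticality the response is large and long-lived, but it
# eventually fades to quiet, as in the public-address analogy.
print(f"peak={max(trace):.2f}  final={trace[-1]:.4f}")
```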
This model of a balanced neural network is abstract — it does not try to recreate any specific neural function such as learning. But it requires only half of the network connections to establish the homeostatic balance of excitation and inhibition crucial to all other brain activity. The other half of the network could be used for other functions that may be compatible with more traditional models of neural networks, including Hebbian learning, Magnasco says.
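To make that division of labor concrete, one could imagine a single weight matrix in which a random half of the connections follows the stabilizing anti-Hebbian rule while the other half remains free for Hebbian learning. The sketch below is purely illustrative of that split, not an implementation from the paper; the even 50/50 assignment is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n, eta = 40, 0.01
W = rng.normal(0, 0.1, (n, n))          # full weight matrix

# Randomly assign each connection one of the two roles.
homeostatic = rng.random((n, n)) < 0.5  # ~half the connections

def update(W, x):
    corr = np.outer(x, x)               # activity correlations
    W = W.copy()
    W[homeostatic] -= eta * corr[homeostatic]     # anti-Hebbian half: stabilize
    W[~homeostatic] += eta * corr[~homeostatic]   # Hebbian half: learn
    return W

x = rng.normal(size=n)                  # one snapshot of activity
W = update(W, x)
```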
A systematic theory of how neurons communicate, Magnasco hopes, could help answer some of the basic questions that researchers are exploring through experiments. “We’re trying to reverse-engineer the brain and clearly there are some concepts we’re missing,” he says. “This model could be one part of a better understanding. It has a large number of interesting properties that make it a suitable substrate for a large-scale computing device.”
Source: The Rockefeller University