An international team of scientists has shown that the self-organization of neurons during learning is consistent with the free energy principle, a mathematical theory.
Experiments with cultured rat neurons demonstrated that the way neurons self-organize during learning matches the predictions of the free energy principle. The findings deepen our understanding of neural networks and open avenues for advancing artificial intelligence and for understanding learning impairments.
Researchers from the RIKEN Center for Brain Science (CBS) in Japan, the University of Tokyo, and University College London have demonstrated that the self-organization of neurons during learning follows the free energy principle. The principle accurately predicted how real neural networks spontaneously reorganize to distinguish incoming information, and how altering neuronal excitability disrupts that process.
The findings have implications for building animal-like artificial intelligence and for understanding cases of impaired learning. The study was published today (August 7) in the journal Nature Communications.
Neuronal Self-Organization and Learning Dynamics
When we learn to tell apart different voices, faces, or smells, networks of neurons in the brain self-organize so that the different stimuli can be distinguished. This process works by changing the strength of the connections between neurons and is fundamental to learning in the brain.
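As a rough illustration of what "changing connection strengths" means computationally, consider the classic Hebbian update, a generic textbook rule (not the model used in this study) that strengthens connections between neurons that are active together:

```python
import numpy as np

# Generic textbook Hebbian update, shown only to illustrate what "changing
# connection strengths" means; it is not the learning rule from this study.
rng = np.random.default_rng(0)
n_neurons = 8
W = rng.normal(0.0, 0.1, size=(n_neurons, n_neurons))  # connection strengths

def hebbian_step(W, pre, post, lr=0.01):
    """Strengthen W[i, j] whenever presynaptic unit j and postsynaptic
    unit i are active together ("fire together, wire together")."""
    return W + lr * np.outer(post, pre)

activity = (rng.random(n_neurons) > 0.5).astype(float)  # who fired this step
W = hebbian_step(W, pre=activity, post=activity)
```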
Takuya Isomura of RIKEN CBS and his international colleagues recently predicted that this kind of network self-organization follows the mathematical rules of the free energy principle. They tested this prediction using neurons taken from rat embryos and grown in a culture dish on a grid of tiny electrodes.
Figure: Cultured neurons grow on a grid of electrodes. Patterns of electrical stimulation drive the network to reorganize so that it can distinguish two hidden sources; the waveforms show spiking responses to a sensory stimulus (red line). Credit: RIKEN
When we can recognize two different sensations, such as two voices, it is because distinct groups of neurons respond to each one. The network reorganization that produces this selectivity is what we call learning.
Experiment Execution and Analysis
In the lab, the scientists mimicked this process by using the electrode grid to stimulate the cultured network with patterns that mixed signals from two distinct hidden sources. After 100 training sessions, the neurons had become selective, responding strongly to source #1 and weakly to source #2, or vice versa. Adding drugs that either boosted or dampened neuronal excitability disrupted this learning, mirroring how neurons behave in a working brain.
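To make the computational problem concrete, here is a toy sketch in Python of the same task: recovering two hidden on/off sources from mixed inputs delivered across many channels. This is an illustration only, not the study's model; scikit-learn's FastICA stands in for the neurons' self-organization, and all sizes and parameters are invented for the demo.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy analogue of the experiment: two hidden on/off sources are mixed
# across 32 input channels, and the learner must become selective to one
# source or the other.
rng = np.random.default_rng(1)
n_sessions = 1000
S = rng.binomial(1, 0.5, size=(n_sessions, 2)).astype(float)  # hidden sources

A = rng.random((2, 32))                               # unknown mixing weights
X = S @ A + 0.05 * rng.normal(size=(n_sessions, 32))  # observed mixed inputs

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)  # each component should track one source

# Selectivity check: each recovered component correlates strongly with
# exactly one hidden source, like a neuron preferring source #1 or #2.
for i in range(2):
    print([round(abs(np.corrcoef(recovered[:, i], S[:, j])[0, 1]), 2)
           for j in range(2)])
```

Running this prints a correlation near 1 with one source and near 0 with the other for each recovered component, the same kind of selectivity the cultured neurons developed after training.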
Understanding the Free Energy Principle and Predictive Models
The free energy principle states that a self-organizing system will always change in whatever way minimizes its free energy. To test whether this is what drives neural learning, the researchers reverse-engineered a predictive model from the recorded neural data based on the principle. After the model was fitted to data from the first 10 training sessions, it was used to predict the remaining 90 sessions.
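For context, the free energy in question is the variational free energy of a generative model. In standard notation (which may differ from the paper's), for sensory observations o, hidden states s, and an internal belief q(s), it can be written as:

```latex
% Variational free energy of a generative model p(o, s) under a belief q(s)
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[\, q(s) \,\|\, p(s \mid o) \,\big] - \ln p(o)
```

Because the KL divergence is never negative, minimizing F pushes the belief q(s) toward the true posterior p(s | o) while raising the evidence for the inputs; a network that minimizes free energy is, in effect, inferring the hidden causes of its stimulation.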
The model's predictions of neuronal responses and connection strengths proved consistently accurate, confirming that knowing the neurons' initial state is enough to predict how their learning will unfold.
Implications and Looking Forward
According to Isomura, the findings suggest that the free energy principle is the self-organizing law governing biological neural networks: it predicted how learning would unfold in response to particular sensory inputs and how learning would be disrupted when drugs altered the network's excitability.
“Though it may take some time, our approach will eventually enable the modeling of psychiatric disorder circuit mechanisms and drug effects like anxiolytics and psychedelics,” says Isomura. He also suggests that similar techniques could fuel the creation of next-generation AIs that learn as real neural networks do.
Reference: "Experimental validation of the free-energy principle with in vitro neural networks" by Takuya Isomura, Kiyoshi Kotani, Yasuhiko Jimbo and Karl J. Friston, 7 August 2023, Nature Communications. DOI: 10.1038/s41467-023-40141-z