Researchers may have discovered a key principle of learning in the human brain

Researchers from the MRC Brain Network Dynamics Unit and Oxford University’s Department of Computer Science have developed a novel theory to describe how the brain changes connections between neurons during learning. This new knowledge could guide future research on learning in brain networks and inspire faster and more robust learning algorithms in artificial intelligence.

At the heart of learning is a credit-assignment problem: identifying which components of the information-processing pipeline are responsible for an error in the output. In artificial intelligence this is solved by backpropagation, a technique that reduces the output error by adjusting a model's parameters, and many researchers believe the brain employs a similar learning mechanism.
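
As a rough illustration of this conventional approach (a minimal NumPy sketch, not code from the study; the network size, data, and learning rate are arbitrary choices for the example), backpropagation sends the output error backwards through the network and changes the weights directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network; the sizes and data are arbitrary for illustration.
W1 = rng.normal(scale=0.5, size=(3, 2))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(1, 3))   # hidden -> output weights
x = np.array([[0.5], [-0.2]])             # one input example
t = np.array([[1.0]])                     # its target output
lr = 0.1                                  # learning rate

for step in range(100):
    # Forward pass.
    h = np.tanh(W1 @ x)                   # hidden activity
    y = W2 @ h                            # network output
    err = y - t                           # output error

    # Backward pass: propagate the error to assign credit to each weight.
    dW2 = err @ h.T
    dh = W2.T @ err * (1 - h ** 2)        # tanh derivative
    dW1 = dh @ x.T

    # Backpropagation modifies the weights first; neuronal activity only
    # changes as a consequence, on the next forward pass.
    W2 -= lr * dW2
    W1 -= lr * dW1
```

The detail that matters for what follows is the order of operations: the weights are changed first, and the network's activity changes only as a downstream consequence.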

However, the biological brain still learns better than even the most advanced machine learning systems. For example, artificial systems typically need to be shown the same information hundreds of times before they learn it, whereas humans can often take in new information after seeing it just once. Moreover, learning new information in artificial neural networks frequently interferes with existing knowledge and degrades it rapidly, whereas humans can learn new information while retaining what they already know.

These differences inspired the researchers to look for the fundamental principle the brain uses during learning. They examined existing sets of mathematical equations that describe how neuronal activity and synaptic connections change over time. Analysing and simulating these information-processing models revealed that they employ a learning principle fundamentally different from that of artificial neural networks.

In artificial neural networks, an external algorithm tries to modify the synaptic connections directly in order to reduce the output error. The researchers propose that the human brain instead first settles its neuronal activity into an optimal, balanced state, and only then adjusts the synaptic connections. This ordering is beneficial for learning: it reduces interference between pieces of information, preserves prior knowledge, and thereby speeds learning up.
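
The sketch below is a loose illustration of that two-phase ordering, written in the style of a predictive-coding relaxation (the Nature Neuroscience paper works with energy-based networks, but the update rules, constants, and network here are assumptions made for the example, not the authors' equations):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(1, 3))   # hidden -> output weights
x = np.array([[0.5], [-0.2]])             # one input example
t = np.array([[1.0]])                     # its target output
lr, relax_rate = 0.1, 0.2

# Phase 1: settle neuronal activity toward a state consistent with the
# target, with the weights held fixed (a predictive-coding-style
# relaxation; this rule is an assumption, not the paper's exact math).
h = np.tanh(W1 @ x)                       # start from the feedforward state
y = t.copy()                              # output is clamped to the target
for _ in range(50):
    e_h = h - np.tanh(W1 @ x)             # local error at the hidden layer
    e_y = y - W2 @ h                      # local error at the output layer
    # Nudge hidden activity to reduce both local errors.
    h += relax_rate * (-e_h + W2.T @ e_y)

# Phase 2: only now adjust the weights, so each connection moves toward
# the already-settled ("prospective") activity rather than chasing a
# back-propagated error signal.
W2 += lr * (y - W2 @ h) @ h.T
W1 += lr * ((h - np.tanh(W1 @ x)) * (1 - np.tanh(W1 @ x) ** 2)) @ x.T
```

Note the contrast with the backpropagation sketch above: there the weights moved first and activity followed, whereas here the activity settles first, so each weight update is aimed at a pattern the network has already agreed on.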

The researchers call this newly identified learning principle 'prospective configuration'. In computer simulations published in Nature Neuroscience, they showed that models learning by prospective configuration master the kinds of tasks humans and animals commonly face in nature faster and more effectively than artificial neural networks do.

The authors illustrate this with a bear fishing for salmon. The bear can see the river, and it has learned that when it also hears the river and smells the salmon, it is likely to make a catch. One day the bear arrives with an injured ear, so it cannot hear the river. In an artificial neural network model, this absence of sound would also alter the connections between neurons encoding the river and those encoding the salmon's smell; the bear would conclude there is no salmon and go hungry. In the animal brain, by contrast, the missing sound does not interfere with the knowledge that the smell of the salmon is still there, so the bear retains a good chance of catching its meal.

The scientists also developed a mathematical theory showing that letting neurons settle into a prospective configuration reduces interference between pieces of information during learning. Across numerous learning tasks, they demonstrated that prospective configuration explains neural activity and behaviour better than artificial neural networks do.
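
The article does not reproduce the paper's equations; one common way to write the idea down (an assumed, predictive-coding-style formulation, not necessarily the authors' exact notation) is as an energy over the activities x_l and weights W_l of the network's layers:

```latex
% Assumed predictive-coding-style energy; the notation is illustrative,
% not copied from the paper.
E(\{x_l\}, \{W_l\}) = \sum_{l=1}^{L} \left\lVert x_l - f(W_l \, x_{l-1}) \right\rVert^2
```

During learning the output layer is clamped to the target, the activities x_l first settle to a minimum of E with the weights held fixed (the prospective configuration), and only then do the weights move, each in proportion to -dE/dW_l evaluated at the settled activities. Because every weight then moves toward an activity pattern that is already self-consistent, the update disturbs connections encoding unrelated information less than backpropagation's direct, error-chasing changes.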

Dr. Yuhang Song, the study's first author, notes that simulating prospective configuration on today's computers is slow, because they operate in fundamentally different ways from the biological brain. Exploiting the principle in practice, he argues, will require a new type of computer or dedicated hardware modelled on the brain.
