One of the biggest challenges in machine learning today is the problem of credit assignment: identifying which components of an information-processing pipeline are responsible for errors in the output an algorithm produces. The problem is commonly assumed to be best solved through a process known as backpropagation, but backpropagation presents significant issues as a model of learning in the brain. Researchers from the MRC Brain Network Dynamics Unit and Oxford University's Department of Computer Science have now proposed a new mechanism that has the potential to address these issues and is capable of reproducing neural activity patterns observed in humans.
The problem of credit assignment is fundamental to machine learning. One proposed solution, backpropagation, has been remarkably successful and has driven much of the progress in artificial intelligence, and this success has generated significant interest in whether biological learning mechanisms work the same way. Although models of biological learning may not implement backpropagation directly, it is typically treated as the standard that they are assumed to approximate. In recent years, however, it has become increasingly clear that the capacity of biological neural systems far surpasses that of backpropagation: the brain requires significantly fewer exposures to a stimulus in order to learn a response, and it stores information far more efficiently and resiliently.
Instead, the researchers propose that credit assignment within the brain follows an entirely different principle, which they call “prospective configuration.” Under prospective configuration, the order followed by backpropagation is reversed: rather than synaptic weights being modified first and neural activity changing as a consequence, neural activity across the network changes first, so that neurons can better predict the target output, and the synaptic weights are then modified to consolidate this new pattern of activity.
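To make this reversal concrete, here is a minimal sketch in Python contrasting the two update orders on a toy linear network with one hidden unit. The network, energy function, and step sizes are illustrative assumptions for exposition, not the paper's exact model; prospective configuration is shown in its predictive-coding form, in which activity settles before any weights change.

def backprop_step(w1, w2, s, y, lr=0.1):
    # Backpropagation order: gradients are computed at the feedforward
    # activities and the weights change first; hidden activity only
    # changes on the next forward pass through the new weights.
    h = w1 * s                 # feedforward hidden activity
    o = w2 * h                 # feedforward output
    err = y - o                # output error
    g2 = err * h               # descent direction for L = 0.5*(y - o)**2 w.r.t. w2
    g1 = err * w2 * s          # descent direction w.r.t. w1 (chain rule through h)
    return w1 + lr * g1, w2 + lr * g2

def prospective_step(w1, w2, s, y, lr=0.1, relax=0.2, n_relax=50):
    # Prospective configuration order: with the output clamped to the
    # target y, hidden activity first relaxes to minimise the energy
    #   E = 0.5*(h - w1*s)**2 + 0.5*(y - w2*h)**2,
    # and only afterwards are the weights updated to consolidate the
    # settled activity.
    h = w1 * s                              # start from the feedforward value
    for _ in range(n_relax):                # inference phase (weights fixed)
        h -= relax * ((h - w1 * s) - w2 * (y - w2 * h))   # gradient of E in h
    w1 += lr * (h - w1 * s) * s             # learning phase: reduce both
    w2 += lr * (y - w2 * h) * h             # prediction errors around h
    return w1, w2

# Example: the prediction is driven toward the target y = 2.0, but the
# hidden activity moves before the weights do.
w1 = w2 = 0.5
for _ in range(200):
    w1, w2 = prospective_step(w1, w2, s=1.0, y=2.0)
print(w2 * (w1 * 1.0))   # prediction, now close to 2.0

The key difference is visible in the structure: in backprop_step the hidden activity h is a fixed function of the current weights, whereas in prospective_step h is a free variable that settles first, with the weight update following it.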
Prospective configuration is a principle that has been observed in various energy-based networks, such as predictive coding networks and Hopfield networks, both of which have been successful in describing cortical information processing. In support of the theory, prospective configuration has been shown to yield efficient learning and to reproduce the results of various experiments on animal and human learning. In situations commonly faced by biological networks, such as learning from limited data or in dynamic environments, prospective configuration results in more effective and efficient learning than backpropagation. Well-known patterns of behavior in humans and animals, such as fear conditioning, reinforcement learning, and sensorimotor learning, are also explainable through prospective configuration.
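For readers who want the formal picture, a common form of the energy minimized by a predictive coding network (one illustrative instance of the energy-based networks mentioned above; the notation is assumed here rather than taken from the paper) is

E(x, w) = \sum_{l=1}^{L} \frac{1}{2} \left\| x_l - f\big(W_{l-1}\, x_{l-1}\big) \right\|^2 ,

where x_l is the vector of neural activities in layer l, W_{l-1} the synaptic weights projecting into it, and f the activation function. Under prospective configuration, E is first minimized over the activities x while the output layer is clamped to the target, and only then over the weights w; approximations of backpropagation instead keep the activities at their feedforward values while the weights change.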
Previous research, working under the assumption that biological learning networks follow backpropagation, attempted to demonstrate how energy-based networks could approximate it. To do so, however, the networks had to be set up in an unnatural way that prevented neural activity from changing before the synaptic weights were modified, forcing them to follow the sequence of backpropagation. When not placed under such constraints, the same networks follow prospective configuration instead and prove superior in both efficiency and efficacy.
For example, the brain must be able to predict future stimuli on the basis of present sensations; doing so allows it to plan optimal behavior and respond better to various scenarios. When the observed outcome does not match the prediction, weights across the network must be modified so that future predictions are more accurate. Backpropagation computes the weight modifications that minimize the error, and these modifications in turn change neural activity. In contrast, under prospective configuration, as the name implies, neural activity changes first, to better fit the observed outcome, and the weights are modified afterwards. This is supported by observations of neural activity in humans and animals, where revealing the outcome of a prediction triggers immediate changes in neural activity.
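In the notation of the energy sketch above (again an illustrative formulation rather than the paper's exact one), the two orders can be written as

\text{backpropagation:}\quad \Delta w = -\alpha \, \frac{\partial L}{\partial w}\bigg|_{x = x^{\mathrm{ff}}} , \quad \text{after which the activities } x \text{ change through the new weights;}

\text{prospective configuration:}\quad x^{*} = \arg\min_{x} E(x, w) \;\; \text{(output clamped to the observed outcome)}, \qquad \Delta w = -\alpha \, \frac{\partial E(x^{*}, w)}{\partial w} ,

where L is the output loss, \alpha the learning rate, and x^{\mathrm{ff}} the feedforward activities.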
The performance of the principle was assessed on variants of classical machine learning problems, chosen to reflect scenarios a biological neural network would face, such as online learning, continual learning, and reinforcement learning. In all of these problems, prospective configuration proved superior to backpropagation. A key part of this advantage is reduced interference: prospective configuration identifies which weights actually need to be modified, allowing it to adapt to changes in the environment while preserving existing knowledge. This represents a substantial improvement over backpropagation, where the overwriting of older memories by new information often leads to catastrophic interference.
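As a rough illustration of how interference can be quantified in a continual-learning protocol (a generic recipe, not the paper's exact benchmark; model, train, and evaluate are assumed placeholders for any learner, training routine, and loss):

def interference(model, task_a, task_b, train, evaluate):
    # Train on task A, then on task B, and measure how much task-A
    # performance degrades; a positive value indicates that learning
    # task B interfered with (overwrote) what was learned on task A.
    train(model, task_a)                     # learn the first task
    loss_a_before = evaluate(model, task_a)  # task-A loss before task B
    train(model, task_b)                     # learn the second task
    loss_a_after = evaluate(model, task_a)   # task-A loss after task B
    return loss_a_after - loss_a_before

Under a measure like this, a learner that modifies only the weights relevant to the new task, as prospective configuration tends to, scores lower interference than one whose updates spill over into weights the old task depends on.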
Conclusion
The principle provides an answer to the long-standing dilemma of how the brain balances stability and plasticity: it reduces interference by compensating across weights to keep the output consistent. Prospective configuration likely works in tandem with other mechanisms to help the brain form new associations. Applied to machine learning networks, the method could improve both performance and efficiency. These discoveries have the potential to revolutionize machine learning methods and advance our understanding of how the brain functions and behaves.
Article Source: Reference Paper | Reference Article
Sonal Keni is a consulting scientific writing intern at CBIRT. She is pursuing a BTech in Biotechnology from the Manipal Institute of Technology. Her academic journey has been driven by a profound fascination for the intricate world of biology, and she is particularly drawn to computational biology and oncology. She also enjoys reading and painting in her free time.