In the network models discussed in Parts III and IV, each synapse has so far been characterized by a single constant parameter $w_{ij}$, called the synaptic weight, synaptic strength, or synaptic efficacy. If $w_{ij}$ is constant, the amplitude of the response of a postsynaptic neuron to the arrival of action potentials from a presynaptic neuron should always be the same. Electrophysiological experiments, however, show that the response amplitude is not fixed but can change over time. In experimental neuroscience, changes of the synaptic strength are called synaptic plasticity.
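To make the role of the weight concrete, the following is a minimal sketch of a toy synapse model (the alpha-function kernel, its time constant, and the numerical values are illustrative assumptions, not taken from the text): with a constant weight $w_{ij}$, every presynaptic spike evokes a postsynaptic response of identical amplitude.

```python
import numpy as np

def psp(t, w_ij, tau=5.0):
    """Postsynaptic potential evoked at time t after a presynaptic spike.

    Toy alpha-function kernel: the peak amplitude is proportional to the
    synaptic weight w_ij, so a constant weight means a constant response.
    """
    return w_ij * (t / tau) * np.exp(1.0 - t / tau) * (t >= 0)

w_ij = 0.8                         # constant synaptic weight (arbitrary units)
spike_times = [10.0, 50.0, 90.0]   # presynaptic spike arrival times (ms)

t = np.arange(0.0, 120.0, 0.1)     # time grid (ms)
u = sum(psp(t - t_f, w_ij) for t_f in spike_times)  # summed membrane response

# The peak response after each spike is the same because w_ij never changes.
for t_f in spike_times:
    window = (t >= t_f) & (t < t_f + 30.0)
    print(f"peak after spike at {t_f:5.1f} ms: {u[window].max():.3f}")
```

Synaptic plasticity, in this picture, corresponds to $w_{ij}$ itself changing over time rather than staying fixed as above.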
Appropriate stimulation paradigms can induce changes of the postsynaptic response that last for hours or days. If the stimulation paradigm leads to a persistent increase of the synaptic efficacy, the effect is called long-term potentiation of synapses, or LTP for short. If the result is a decrease of the synaptic efficacy, it is called long-term depression (LTD). These persistent changes are thought to be the neuronal correlate of learning and memory. LTP and LTD are different from short-term synaptic plasticity, such as the synaptic facilitation or depression that we encountered in Section 3.1. Facilitated or depressed synapses decay back to their normal strength within a few seconds, whereas, after an LTP or LTD protocol, synapses keep their new values for hours. The long-term storage of the new values is thought to be the basis of long-lasting memories.
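The separation of timescales can be illustrated with a toy calculation (the decay constants and the initial weight change below are illustrative assumptions, not experimental values): a short-term change relaxes back within seconds, whereas a change induced by an LTP protocol is still essentially intact an hour later.

```python
import numpy as np

tau_facilitation = 1.0        # s, assumed decay constant of short-term facilitation
tau_ltp = 10.0 * 3600.0       # s, assumed persistence timescale of LTP (hours)

delta_w0 = 0.5                # relative weight change right after induction (assumed)

for label, tau in [("short-term facilitation", tau_facilitation),
                   ("long-term potentiation", tau_ltp)]:
    for t in [1.0, 10.0, 3600.0]:          # 1 s, 10 s, 1 h after induction
        remaining = delta_w0 * np.exp(-t / tau)
        print(f"{label:25s} t = {t:6.0f} s   remaining change = {remaining:.3f}")
```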
In the formal theory of neural networks, the weight $w_{ij}$ of a connection from neuron $j$ to neuron $i$ is considered a parameter that can be adjusted so as to optimize the performance of a network for a given task. The process of parameter adaptation is called learning and the procedure for adjusting the weights is referred to as a learning rule. Here learning is meant in its widest sense. It may refer to synaptic changes during development just as well as to the specific changes necessary to memorize a visual pattern or to learn a motor task. There are many different learning rules, and we cannot cover them all in this chapter. In particular, we leave aside the large class of 'supervised' learning rules, which are an important topic in the fields of artificial neural networks and machine learning. Here we focus on two other classes of learning rules that are of biological relevance.
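As an illustration of what a learning rule looks like in this formal sense, here is a minimal sketch of a rate-based Hebbian update for a single linear postsynaptic neuron (the learning rate, the input statistics, and the product form of the update are assumptions made for illustration; Hebbian rules are treated properly in the sections below): the weight $w_{ij}$ is nudged whenever pre- and postsynaptic activity coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre = 5                     # number of presynaptic neurons
eta = 0.01                    # learning rate (assumed)

w = rng.normal(0.0, 0.1, size=n_pre)    # weights w_ij onto one postsynaptic neuron i

for step in range(200):
    nu_pre = rng.random(n_pre)          # presynaptic rates (toy input statistics)
    nu_post = w @ nu_pre                # linear rate neuron: postsynaptic rate
    w += eta * nu_post * nu_pre         # Hebbian update: joint pre/post activity changes w

print("final weights:", np.round(w, 3))
```

Note that this plain correlation-based rule has no built-in bound, so the weights keep growing with further updates; how such growth is controlled is one of the issues addressed by the models discussed later in the chapter.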
In Section 19.1 we introduce the Hebb rule and discuss its relation to experimental protocols for long-term potentiation (LTP) and spike-timing-dependent plasticity (STDP). In Section 19.2 we formulate mathematical models of Hebbian plasticity. We will see in Section 19.3 that Hebbian plasticity causes synaptic connections to tune to the statistics of the input. Such a self-tuning of network properties is an example of unsupervised learning. While unsupervised learning is thought to be a major drive of developmental plasticity in the brain, it is not sufficient to learn specific behaviors, such as pressing a button in order to receive a reward. In Section 19.4 we discuss reward-based learning rules in the form of STDP modulated by reward. Reward-modulated synaptic plasticity is thought to be the basis of the behavioral learning observed in animal conditioning experiments.
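For orientation before Section 19.1, the sketch below evaluates one common pair-based form of an STDP window (exponential potentiation and depression lobes; the amplitudes and time constants are illustrative assumptions, not the specific model developed in this chapter): a presynaptic spike shortly before a postsynaptic spike strengthens the synapse, while the reverse timing weakens it.

```python
import numpy as np

def stdp_window(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of delta_t = t_post - t_pre (ms).

    Pair-based exponential window: pre-before-post (delta_t > 0) potentiates,
    post-before-pre (delta_t < 0) depresses. All parameter values are assumptions.
    """
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau_plus)    # potentiation (LTP side)
    else:
        return -a_minus * np.exp(delta_t / tau_minus)  # depression (LTD side)

for dt in [-40.0, -10.0, 10.0, 40.0]:
    print(f"t_post - t_pre = {dt:6.1f} ms  ->  delta_w = {stdp_window(dt):+.5f}")
```

In the reward-based rules of Section 19.4, weight updates of this kind are not applied directly but are modulated by an additional reward signal.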