The Hebb rule (19.2.1) is an example of a local unsupervised learning rule. It is local because it depends only on the pre- and postsynaptic firing rates and the present state of the synapse, i.e., information that is easily 'available' at the location of the synapse. Experiments have shown that not only the firing rates, but also the membrane voltage of the postsynaptic neuron, as well as the relative timing of pre- and postsynaptic spikes, determine the amplitude and direction of the change of synaptic efficacy. To account for spike-timing effects, classical pair-based models of STDP are formulated with a learning window that consists of two parts: if the presynaptic spike arrives before a postsynaptic output spike, the synaptic change is positive; if the timing is reversed, the synaptic change is negative. However, classical pair-based STDP models neglect the frequency and voltage dependence of synaptic plasticity, which are included in modern variants of STDP models.
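For concreteness, such a two-sided learning window can be written down directly. Below is a minimal sketch of a pair-based STDP update with exponential windows; the amplitudes and time constants are illustrative choices, not values from the text.

```python
import numpy as np

# Illustrative amplitudes and time constants (assumptions, not values
# taken from the text).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants in ms

def stdp_weight_change(t_pre, t_post):
    """Pair-based STDP: weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:                                  # pre before post: potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)    # post before pre: depression

print(stdp_weight_change(10.0, 15.0))   # positive change (potentiation)
print(stdp_weight_change(15.0, 10.0))   # negative change (depression)
```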
The synaptic weight dynamics of Hebbian learning can be studied analytically if the weights change slowly compared to the time scale of the neuronal activity. Weight changes are driven by correlations between pre- and postsynaptic activity. More specifically, simple Hebbian learning rules in combination with a linear neuron model find the first principal component of a normalized input data set. Generalized Hebb rules, such as Oja's rule, keep the norm of the weight vector approximately constant during plasticity.
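The principal-component statement can be checked numerically. The following is a small sketch, assuming a linear neuron $\nu = \mathbf{w}\cdot\mathbf{x}$ and the standard form of Oja's rule $\Delta\mathbf{w} = \eta\,\nu\,(\mathbf{x} - \nu\,\mathbf{w})$; the covariance matrix, learning rate, and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated, zero-mean two-dimensional input data (illustrative).
C = np.array([[3.0, 1.0], [1.0, 1.0]])          # input covariance matrix
x_data = rng.multivariate_normal([0, 0], C, size=20000)

w = rng.normal(size=2)                           # initial weight vector
eta = 0.005                                      # learning rate (arbitrary)
for x in x_data:
    nu = w @ x                                   # linear neuron: output rate
    w += eta * nu * (x - nu * w)                 # Oja's rule

# w should align (up to sign) with the leading eigenvector of C,
# and its norm should stay close to 1.
eigvals, eigvecs = np.linalg.eigh(C)
print("learned w :", w / np.linalg.norm(w))
print("first PC  :", eigvecs[:, -1])
print("|w|       :", np.linalg.norm(w))
```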
The interesting aspect of STDP is that it naturally accounts for temporal correlations by means of a learning window. Explicit expressions for temporal spike-spike correlations can be obtained for certain simple types of neuron model such as the linear Poisson model. Spike-based and rate-based rules of plasticity are equivalent as long as temporal spike-spike correlations are disregarded. If firing rates vary slowly, then the integral over the learning window plays the role of the Hebbian correlation term.
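The last statement can be made explicit. Writing $W(s)$ for the learning window, where $s$ is the delay between presynaptic spike arrival and postsynaptic firing, the expected drift of the weights under independent firing at slowly varying rates $\nu_j$, $\nu_i$ is (a sketch of the reduction, not a full derivation):

$$\left\langle \frac{dw_{ij}}{dt} \right\rangle \;\approx\; \nu_j\,\nu_i \int_{-\infty}^{+\infty} W(s)\,ds ,$$

so the integral $\int W(s)\,ds$ takes the place of the Hebbian correlation term of a rate-based rule; spike-spike correlations would add a further term that weights $W$ with the correlation function.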
Hebbian learning and STDP are examples of unsupervised learning rules. Hebbian learning is considered to be a major principle of neuronal organization during development and a driving force for receptive field formation. However, Hebbian synaptic plasticity by itself is not sufficient for behavioral learning, since it does not take into account the success (or failure) of an action. Three-factor learning rules combine the two Hebbian factors (i.e., pre- and postsynaptic activity) with a third factor (e.g., a neuromodulator such as dopamine) that conveys information about an action's success. Three-factor rules with an eligibility trace can be used to describe behavioral learning, in particular during conditioning experiments.
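As a concrete illustration, the following sketch implements a three-factor update with an eligibility trace; the trace time constant, learning rate, spike probabilities, and the timing of the reward are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

TAU_E = 1.0      # eligibility-trace time constant in s (assumed)
ETA = 0.1        # learning rate (assumed)
DT = 0.01        # simulation time step in s

w, trace = 0.5, 0.0
for step in range(1000):
    pre = rng.random() < 0.05       # presynaptic spike in this step?
    post = rng.random() < 0.05      # postsynaptic spike in this step?
    # The two Hebbian factors (pre/post coincidence) are stored in a
    # decaying eligibility trace ...
    trace += -DT * trace / TAU_E + float(pre and post)
    # ... and converted into a weight change only when the third factor
    # (e.g., a dopamine reward signal) arrives; here at step 800.
    reward = 1.0 if step == 800 else 0.0
    w += ETA * reward * trace

print("final weight:", w)
```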
Correlation-based learning can be traced back to Aristotle ("De memoria et reminiscentia": "There is no need to consider how we remember what is distant, but only what is neighboring, for clearly the method is the same. For the changes follow each other by habit, one after another. And thus, whenever someone wishes to recollect he will do the following: He will seek to get a starting point for a change after which will be the change in question.") and has been discussed extensively by James (242), who formulated a learning principle on the level of 'brain processes' rather than neurons:
When two elementary brain-processes have been active together or in immediate succession, one of them, on re-occurring, tends to propagate its excitement into the other.
A chapter of James's book is reprinted in volume 1 of Anderson and Rosenfeld's collection on Neurocomputing (25). The formulation of synaptic plasticity in Hebb's book (210), of which two interesting sections are reprinted in the same collection (25), has had a long-lasting impact on the neuroscience community. The historical context of Hebb's postulate is discussed in the reviews of Sejnowski (466) and Markram et al. (326).
Classical experimental studies on STDP are (328; 567; 117; 55; 56; 483), but precursors of timing-dependent plasticity can be found even earlier (295). Note that for some synapses the learning window is reversed (43). For reviews on STDP, see Abbott and Nelson (3); Bi and Poo (54); Caporale and Dan (89); Sjöström and Gerstner (482).
The theory of unsupervised learning and principal component analysis is reviewed in the textbook by Hertz et al. (215). Models of the development of receptive fields and cortical maps have a long tradition in the field of computational neuroscience; see, e.g., von der Malsburg (540); Willshaw and von der Malsburg (550); Sejnowski (467); Bienenstock et al. (58); Kohonen (271); Linsker (300); Miller et al. (346); MacKay and Miller (318); Miller (344); for reviews, see, e.g., Erwin et al. (144); Wiskott and Sejnowski (556). The essential aspects of the weight dynamics in linear networks are discussed in Oja (369); Miller and MacKay (343). The articles of Grossberg (200) and Bienenstock et al. (58), as well as the book of Kohonen (271), illustrate the early use of rate-based learning rules in computational neuroscience.
The early theory of STDP was developed in (178; 176; 256; 442; 530; 489; 448), but precursors of timing-dependent plasticity can be found in earlier rate-based formulations (216; 488). Modern theories of STDP go beyond pair-based rules (468; 394), consider voltage effects (99), variations of boundary conditions (202), or calcium-based models (302; 303); for reviews, see Morrison et al. (353); Sjöström and Gerstner (482).
Experimental support for three-factor learning rules is reviewed in (429; 391). Model studies of reward-modulated STDP include (238; 294; 153; 157). The consequences for behavior are discussed in (306; 307). The classic reference for dopamine in relation to reward-based learning is Schultz et al. (460); modern reviews of the topic are (461; 462).
Normalization of firing rate.
Consider a learning rule $\frac{dw_{ij}}{dt} = \gamma\,\nu_j\,(\nu_i - \nu_\theta)$, i.e., a change of synaptic weights can only occur if the presynaptic neuron is active ($\nu_j > 0$). The direction of the change is determined by the activity of the postsynaptic neuron relative to a threshold $\nu_\theta$. The postsynaptic firing rate is given by $\nu_i = g\bigl(\sum_k w_{ik}\,\nu_k\bigr)$ with $g' > 0$. We assume that the presynaptic firing rates are constant.
(i) Show that $\nu_i$ has a fixed point at $\nu_i = \nu_\theta$.
(ii) Discuss the stability of the fixed point. Consider the cases $\gamma > 0$ and $\gamma < 0$.
(iii) Discuss whether the learning rule is Hebbian, anti-Hebbian, or non-Hebbian. (A simulation sketch follows after this exercise.)
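A numerical sketch of this exercise, using the rule as reconstructed above; the sigmoidal gain function, the rates, and the learning rate are illustrative, and $\gamma < 0$ is used because that is the stable case of part (ii).

```python
import numpy as np

# Illustrative gain function with g' > 0 and arbitrary constants.
def g(h):
    return 40.0 / (1.0 + np.exp(-0.2 * h))

NU_THETA = 10.0        # target rate nu_theta (assumed value)
GAMMA = -0.01          # gamma < 0: the stable case of part (ii)
DT = 0.01              # integration time step

nu_pre = np.array([5.0, 8.0, 3.0])   # constant presynaptic rates
w = np.array([0.1, 0.1, 0.1])
for _ in range(20000):
    nu_post = g(w @ nu_pre)
    w += DT * GAMMA * nu_pre * (nu_post - NU_THETA)

print(g(w @ nu_pre))   # close to NU_THETA: the firing rate is normalized
```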
Fixed point of BCM rule. Assume a single postsynaptic neuron that receives constant input $\nu_j = \nu_0 > 0$ at all synapses $j = 1, \dots, N$.
(i) Show that the weights $w_{ij}$ have a fixed point under the BCM rule (19.9).
(ii) Show that this fixed point is unstable. (A sketch of the linearization follows after this exercise.)
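A sketch of the linearization for part (ii), assuming the BCM form $\frac{dw_{ij}}{dt} = \eta\,\nu_j\,\nu_i\,(\nu_i - \nu_\theta)$ with fixed $\nu_\theta$ and a linear neuron $\nu_i = \sum_j w_{ij}\,\nu_0$:

$$\frac{d\nu_i}{dt} \;=\; \nu_0 \sum_j \frac{dw_{ij}}{dt} \;=\; \eta\,N\,\nu_0^2\,\nu_i\,(\nu_i - \nu_\theta) \;\approx\; \eta\,N\,\nu_0^2\,\nu_\theta\,\delta \quad\text{for } \nu_i = \nu_\theta + \delta .$$

A positive perturbation $\delta$ therefore grows, while a negative one decays toward the trivial fixed point $\nu_i = 0$; the nontrivial fixed point is unstable.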
Receptive field development with BCM rule. Twenty presynaptic neurons with firing rates $\nu_1, \dots, \nu_{20}$ connect onto the same postsynaptic neuron, which fires at a rate $\nu_i = \sum_j w_{ij}\,\nu_j$. Synaptic weights change according to the BCM rule (19.9) with a hard lower bound $w_{ij} \ge 0$ and a fixed threshold $\nu_\theta$ (a constant value in Hz).
The 20 inputs are organized in two groups of 10 inputs each. There are two possible input patterns, $\xi^{(1)}$ and $\xi^{(2)}$, defined below.
(i) The two possible input patterns are: $\xi^{(1)}$, in which group 1 fires at 3 Hz and group 2 is quiescent; and $\xi^{(2)}$, in which group 2 fires at 1 Hz and group 1 is quiescent. The input alternates between the two patterns several times, back and forth, and each pattern presentation lasts for a time $\Delta t$. How do the weights evolve? Show that the postsynaptic neuron becomes specialized to one group of inputs.
(ii) Similar to (i), except that the second pattern now is $\xi^{(2)}$: group 2 fires at 2.5 Hz and group 1 is quiescent. How do the weights evolve?
(iii) As in (ii), but you are allowed to make $\nu_\theta$ a function of the time-averaged firing rate $\bar{\nu}_i$ of the postsynaptic neuron. Is $\nu_\theta = \bar{\nu}_i$ a good choice? Why is $\nu_\theta = \bar{\nu}_i^{\,2}/\nu_0$ (with a reference rate $\nu_0$) a better choice?
Hint: Compare the time it takes to update the time-averaged firing rate with the presentation time of the patterns. (A simulation sketch of part (i) follows after this exercise.)
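A simulation sketch of part (i). The learning rate, the value of the fixed threshold, the initial weights, and the hard upper bound are all illustrative assumptions; the upper bound is added only because the fixed-threshold rule is unstable in the upward direction, which is precisely what motivates part (iii).

```python
import numpy as np

ETA, NU_THETA, STEPS = 0.001, 2.0, 100   # assumed rate, threshold, steps

w = np.full(20, 0.1)                          # assumed initial weights
p1 = np.r_[np.full(10, 3.0), np.zeros(10)]    # group 1 at 3 Hz, group 2 silent
p2 = np.r_[np.zeros(10), np.full(10, 1.0)]    # group 2 at 1 Hz, group 1 silent

for presentation in range(40):                # alternate the two patterns
    x = p1 if presentation % 2 == 0 else p2
    for _ in range(STEPS):
        nu = w @ x                            # linear postsynaptic rate
        w += ETA * x * nu * (nu - NU_THETA)   # BCM update, Eq. (19.9)
        w = np.clip(w, 0.0, 1.0)              # hard lower bound; the upper
                                              # bound only keeps the fixed-
                                              # threshold rule finite

print("group-1 weights:", w[:10].mean())      # -> potentiated
print("group-2 weights:", w[10:].mean())      # -> depressed toward 0
```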
Weight matrix of Hopfield model. Consider synaptic weights that change according to the following Hebbian learning rule: $\frac{dw_{ij}}{dt} = \gamma\,(\nu_i - \nu_0)(\nu_j - \nu_0)$. (i) Identify the parameters $\gamma$ and $\nu_0$ with the parameters of Eq. (19.2.1).
(ii) Assume a fully connected network of $N$ neurons. Suppose that the initial weights vanish. During the presentation of a pattern $\mu$, the activities of all neurons $1 \le i \le N$ are fixed to values $\nu_i = p_i^\mu$, where $p_i^\mu \in \{-1, +1\}$, and the synapses change according to the Hebbian learning rule. The patterns $\mu = 1, \dots, P$ are applied one after the other, each for a time $\Delta t$. Choose an appropriate value for $\gamma$ so that, after application of the $P$ patterns, the final weights are $w_{ij} = \frac{1}{N}\sum_{\mu=1}^{P} p_i^\mu p_j^\mu$. Express the parameter $\gamma$ in terms of $N$ and $\Delta t$.
(iii) Compare your results with the weight matrix of the Hopfield model in Chapter 17. Is the above learning procedure realistic? Can it be classified as unsupervised learning?
Hint: Consider not only the learning phase, but also the recall phase. Consider the situation where input patterns are chosen stochastically. (A sketch of the weight construction in part (ii) follows after this exercise.)
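The following sketch illustrates the construction of part (ii). The scaling $\gamma = 1/(N\,\Delta t)$ is the kind of answer the exercise asks for, written here as an assumption rather than a quoted result.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, DT = 100, 5, 1.0                   # network size, patterns, time
patterns = rng.choice([-1, 1], size=(P, N))

GAMMA = 1.0 / (N * DT)                   # assumed scaling of the rate gamma
W = np.zeros((N, N))
for p in patterns:                       # present each pattern for time DT
    W += GAMMA * DT * np.outer(p, p)     # Hebbian update, activities fixed

# Compare with the standard Hopfield weight matrix of Chapter 17.
W_hopfield = patterns.T @ patterns / N
print(np.allclose(W, W_hopfield))        # True
```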
PCA with Oja's learning rule. In order to show that Oja's learning rule (19.7) selects the first principal component, proceed in three steps.
(i) Show that the eigenvectors of the input correlation matrix $C$ are fixed points of the dynamics.
Hint: Apply the methods of Section 19.3 to the batch version of Oja's rule and show that
$$\frac{d}{dt}\,\mathbf{w} \;=\; \gamma\,\Bigl[ C\,\mathbf{w} \;-\; \bigl(\mathbf{w}^{\mathsf T} C\,\mathbf{w}\bigr)\,\mathbf{w} \Bigr]. \qquad (19.49)$$
The claim then follows.
(ii) Show that only the eigenvector with the largest eigenvalue is stable.
Hint: Assume that the weight vector is $\mathbf{w} = \mathbf{e}_k + \varepsilon\,\mathbf{e}_l$, i.e., it has a small perturbation $\varepsilon$ in one of the other principal directions. Derive an equation for $d\varepsilon/dt$ and show that the perturbation grows if $\lambda_l > \lambda_k$ (see the sketch after this exercise).
(iii) Show that the output rate represents the projection of the input onto the first principal component.
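For part (ii), a sketch of the linearization around an eigenvector $\mathbf{e}_k$ (with $C\,\mathbf{e}_k = \lambda_k\,\mathbf{e}_k$), based on Eq. (19.49):

$$\mathbf{w} = \mathbf{e}_k + \varepsilon\,\mathbf{e}_l \quad\Longrightarrow\quad \frac{d\varepsilon}{dt} = \gamma\,(\lambda_l - \lambda_k)\,\varepsilon + \mathcal{O}(\varepsilon^2),$$

so the perturbation grows whenever $\lambda_l > \lambda_k$, and only the eigenvector with the largest eigenvalue is stable.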
Triplet STDP rule and BCM. Show that, for Poisson spike arrival and an output spike train generated by an independent Poisson process of rate $\nu_i$, the triplet STDP model gives rise to a rate-based plasticity model identical to BCM. Identify the function $\phi$ in Eqs. (19.8) and (19.9) with the parameters of the triplet model in (19.15).
Hint: Use the methods of Section 19.3.3. Independent Poisson output means that you can neglect the pre-before-post spike correlations.
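Under the stated independence assumption, the expected drift of the triplet rule can be computed directly. A sketch, written in the standard notation of triplet STDP (pair-depression amplitude $A_2^-$, triplet-potentiation amplitude $A_3^+$, and window time constants $\tau_-$, $\tau_+$, $\tau_y$; this notation is assumed here and should be checked against (19.15)):

$$\left\langle \frac{dw_{ij}}{dt} \right\rangle = -A_2^-\,\tau_-\,\nu_j\,\nu_i + A_3^+\,\tau_+\,\tau_y\,\nu_j\,\nu_i^2 = A_3^+\,\tau_+\,\tau_y\;\nu_j\,\nu_i\,(\nu_i - \nu_\theta), \qquad \nu_\theta = \frac{A_2^-\,\tau_-}{A_3^+\,\tau_+\,\tau_y},$$

which has the BCM form $\frac{dw_{ij}}{dt} = \nu_j\,\phi(\nu_i; \nu_\theta)$ with $\phi(\nu_i;\nu_\theta) \propto \nu_i\,(\nu_i - \nu_\theta)$.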