17 Memory and Attractor Dynamics

17.3 Memory networks with spiking neurons

The Hopfield model is an abstract conceptual model and rather far from biological reality. In this section we aim at pushing the abstract model in the direction of increased biological plausibility. We focus on two aspects. In Section 17.3.1 we replace the binary neurons of the Hopfield model with spiking neuron models of the class of Generalized Linear Models or Spike Response Models; cf. Chapter 9. Then, in Section 17.3.2 we ask whether it is possible to store multiple patterns in a network where excitatory and inhibitory neurons are functionally separated from each other.

17.3.1 Activity of spiking networks

Neuron models such as the Spike Response Model with escape noise, formulated in the framework of Generalized Linear Models, can predict spike times of real neurons to a high degree of accuracy; cf. Chapters 9 and 11. We therefore choose the Spike Response Model (SRM) as our candidate for a biologically plausible neuron model. Here we use these neuron models to analyze the macroscopic dynamics in attractor memory networks of spiking neurons.

As discussed in Chapter 9, the membrane potential $u_i$ of a neuron $i$ embedded in a large network can be described as

$$u_i(t)=\sum_{f}\eta(t-t_i^{(f)})+h_i(t)+u_{\rm rest} \qquad (17.33)$$

where $\eta(t-t_i^{(f)})$ summarizes the refractoriness caused by the spike afterpotential and $h_i(t)$ is the (deterministic part of the) input potential

$$h_i(t)=\sum_{j}w_{ij}\sum_{f}\epsilon(t-t_j^{(f)})=\sum_{j}w_{ij}\int_{0}^{\infty}\epsilon(s)\,S_j(t-s)\,{\rm d}s\,. \qquad (17.34)$$

Here $i$ denotes the postsynaptic neuron, $w_{ij}$ is the coupling strength from a presynaptic neuron $j$ to $i$, and $S_j(t)=\sum_{f}\delta(t-t_j^{(f)})$ is the spike train of neuron $j$.
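To make the filtering in Eq. (17.34) concrete, the following sketch evaluates the input potential on a discrete time grid; the exponential kernel $\epsilon(s)$ and all numerical values are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Discretized version of Eq. (17.34): h_i(t) = sum_j w_ij (eps * S_j)(t).
# The exponential kernel eps(s) = exp(-s/tau_m) and all numbers here are
# illustrative choices, not values from the text.
dt, tau_m = 0.1, 10.0                        # ms
t = np.arange(0.0, 100.0, dt)
eps = np.exp(-t / tau_m)                     # causal kernel eps(s), s >= 0

rng = np.random.default_rng(0)
n_pre = 5
w = rng.normal(0.0, 0.2, n_pre)              # coupling strengths w_ij
S = rng.random((n_pre, t.size)) < 0.02       # spike trains S_j, one bin per slot

# each presynaptic spike leaves behind a postsynaptic potential of shape eps
h = sum(w[j] * np.convolve(S[j].astype(float), eps)[: t.size]
        for j in range(n_pre))
```

Each entry of `h` is the momentary input potential; the convolution implements the integral $\int_0^\infty \epsilon(s)\,S_j(t-s)\,{\rm d}s$ bin by bin.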

Statistical fluctuations in the input as well as intrinsic noise sources are both incorporated into an escape rate (or stochastic intensity) $\rho_i(t)$ of neuron $i$

$$\rho_i(t)=f(u_i(t)-\vartheta)\,, \qquad (17.35)$$

which depends on the momentary distance between the (noiseless) membrane potential and the threshold $\vartheta$.
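As a minimal illustration of Eqs. (17.33)–(17.35), the sketch below simulates a single SRM neuron with escape noise under a constant input potential. The exponential escape function $f(x)=\rho_0\exp(x/\Delta u)$ and all parameter values are arbitrary choices for the example.

```python
import numpy as np

# Minimal escape-noise simulation of Eqs. (17.33)-(17.35) for one SRM
# neuron with a constant input potential h_i(t) = h0. The exponential
# escape function and every parameter value are illustrative assumptions.
rng = np.random.default_rng(1)
dt, T = 0.1, 2000.0                       # time step and duration (ms)
eta0, tau_ref = -8.0, 20.0                # afterpotential eta(s) = eta0*exp(-s/tau_ref)
u_rest, theta, h0 = -70.0, -55.0, 16.0    # mV
rho0, delta_u = 0.5, 2.0                  # escape rate f(x) = rho0*exp(x/delta_u), in 1/ms

spikes = []
for step in range(int(T / dt)):
    t_now = step * dt
    # refractory term of Eq. (17.33); only recent spikes contribute noticeably
    eta = sum(eta0 * np.exp(-(t_now - tf) / tau_ref) for tf in spikes[-5:])
    u = eta + h0 + u_rest                            # Eq. (17.33)
    rho = rho0 * np.exp((u - theta) / delta_u)       # Eq. (17.35)
    if rng.random() < 1.0 - np.exp(-rho * dt):       # spike with prob. ~ rho*dt
        spikes.append(t_now)
```

Right after each spike the afterpotential pushes $u$ far below threshold, so the escape rate collapses and recovers on the timescale `tau_ref`, producing refractoriness without a hard reset.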

In order to embed memories in the network of SRM neurons we use Eq. (17.27) and proceed as in Section 17.2.6. There are three differences compared to the previous section. First, while previously $S_j$ denoted a binary variable $\pm 1$ in discrete time, we now work with spikes $\delta(t-t_j^{(f)})$ in continuous time. Second, in the Hopfield model a neuron can be active in every time step, while here spikes must have a minimal distance because of refractoriness. Third, the input potential $h$ is only one of the contributions to the total membrane potential.

Despite these differences the formalism of Section 17.2.6 can be directly applied to the case at hand. Let us define the instantaneous overlap of the spike pattern in the network with pattern $\mu$ as

$$m^{\mu}(t)={1\over 2a(1-a)N}\sum_{j}(\xi_j^{\mu}-a)\,S_j(t) \qquad (17.36)$$

where $S_j(t)=\sum_{f}\delta(t-t_j^{(f)})$ is the spike train of neuron $j$. Note that, because of the Dirac $\delta$-function, we need to integrate over $m^{\mu}$ in order to arrive at an observable quantity. Such an integration is automatically performed by each neuron. Indeed, the input potential of Eq. (17.34) can be written as

$$h_i(t)=\sum_{j}\left({1\over 2a(1-a)N}\sum_{\mu=1}^{M}(\xi_i^{\mu}-b)\,(\xi_j^{\mu}-a)\right)\int_{0}^{\infty}\epsilon(s)\,S_j(t-s)\,{\rm d}s=\sum_{\mu=1}^{M}(\xi_i^{\mu}-b)\int_{0}^{\infty}\epsilon(s)\,m^{\mu}(t-s)\,{\rm d}s \qquad (17.37)$$

where we have used Eqs. (17.27) and (17.36).
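The identity behind Eq. (17.37) is purely algebraic and can already be checked at a single time bin, before any temporal filtering: the synaptic drive computed from the full $N\times N$ weight matrix coincides with the sum over the $M$ overlaps. Sizes and pattern statistics below are illustrative.

```python
import numpy as np

# Numeric check of the identity behind Eq. (17.37): with Hebbian weights
# w_ij = (1/(2a(1-a)N)) * sum_mu (xi_i^mu - b)(xi_j^mu - a), the summed
# synaptic drive equals a sum over the M overlaps. Sizes are illustrative.
rng = np.random.default_rng(2)
N, M, a, b = 400, 20, 0.1, 0.0
xi = (rng.random((M, N)) < a).astype(float)   # patterns xi_j^mu in {0,1}
c = 1.0 / (2 * a * (1 - a) * N)
W = c * (xi - b).T @ (xi - a)                 # weight matrix w_ij

S = rng.poisson(0.05, N).astype(float)        # spike counts in one time bin

h_direct = W @ S                              # h_i = sum_j w_ij S_j
m = c * (xi - a) @ S                          # overlaps m^mu, Eq. (17.36)
h_overlap = (xi - b).T @ m                    # right-hand side of Eq. (17.37)

print(np.allclose(h_direct, h_overlap))       # True: M overlaps suffice
```

The check makes the dimensionality reduction tangible: $N$ synaptic sums collapse onto $M$ macroscopic overlap variables.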

Fig. 17.10: A population of excitatory neurons interacts with two populations of inhibitory neurons. Memory patterns are embedded as Hebbian assemblies in the excitatory population. All neurons are integrate-and-fire neurons. Theory predicts that the first inhibitory population should be activated to levels where the gain function (left inset) is approximately linear. The second inhibitory population is activated if the total input is above some threshold value (right inset).

Thus, in a network of $N$ neurons (e.g. $N=100\,000$) which has stored $M$ patterns (e.g., $M=2000$) the input potential is completely characterized by the $M$ overlap variables, which reflects an enormous reduction in the complexity of the mathematical problem. Nevertheless, each neuron keeps its identity for two reasons:

(i) Each neuron $i$ is characterized by its ‘private’ set of past firing times $t_i^{(f)}$. Therefore each neuron is in a different state of refractoriness and adaptation, which manifests itself in the term $\sum_{f}\eta(t-t_i^{(f)})$ of the total membrane potential.

(ii) Each neuron has a different functional role during memory retrieval. This role is defined by the sequence $\xi_i^{1},\xi_i^{2},\dots,\xi_i^{M}$. For example, if neuron $i$ is part of the active assembly in patterns $\mu=3,\mu=17,\mu=222,\mu=1999$ and should be inactive in the other 1996 patterns, then its functional role is defined by the set of numbers $\xi_i^{3}=\xi_i^{17}=\xi_i^{222}=\xi_i^{1999}=1$ and $\xi_i^{\mu}=0$ otherwise. In a network that stores $M$ different patterns there are $2^{M}$ different functional roles, so that it is extremely unlikely that two neurons play the same role. Therefore each of the $N$ neurons in the network is different!

However, during retrieval we can reduce the complexity of the dynamics drastically. Suppose that during the interval $t_0<t<t_0+T$ all overlaps are negligible, except the overlap with one of the patterns, say pattern $\nu$. Then the input potential in Eq. (17.37) reduces for $t>t_0+T$ to

$$h_i(t)=(\xi_i^{\nu}-b)\int_{0}^{\infty}\epsilon(s)\,m^{\nu}(t-s)\,{\rm d}s \qquad (17.38)$$

where we have assumed that $\epsilon(s)=0$ for $s>T$. Therefore, the network with its $N$ different neurons splits up into two homogeneous populations: the first one comprises all neurons with $\xi_i^{\nu}=+1$, i.e., those that should be ‘ON’ during retrieval of pattern $\nu$; and the second comprises all neurons with $\xi_i^{\nu}=0$, i.e., those that should be ‘OFF’ during retrieval of pattern $\nu$.

In other words, we can apply the mathematical tools of population dynamics that were presented in Part III of this book to analyze memory retrieval in a network of $N$ different neurons.

Example: Spiking neurons without adaptation

In the absence of adaptation, the membrane potential depends only on the input potential and the time since the last spike. Thus, Eq. (17.33) reduces to

$$u_i(t)=\eta(t-\hat{t}_i)+h_i(t)+u_{\rm rest} \qquad (17.39)$$

where $\hat{t}_i$ denotes the last firing time of neuron $i$ and $\eta(t-\hat{t}_i)$ summarizes the effect of refractoriness. Under the assumption of an initial overlap with pattern $\nu$ and no overlap with other patterns, the input potential is given by Eq. (17.38). Thus, the network of $N$ neurons splits into an ‘ON’ population with input potential

$$h^{\rm ON}(t)=(1-b)\int_{0}^{\infty}\epsilon(s)\,m^{\nu}(t-s)\,{\rm d}s \qquad (17.40)$$

and an ‘OFF’ population with input potential

$$h^{\rm OFF}(t)=(-b)\int_{0}^{\infty}\epsilon(s)\,m^{\nu}(t-s)\,{\rm d}s\,. \qquad (17.41)$$

For each of the populations, we can write down the integral equation of the population dynamics that we have seen in Chapter 14. For example, the ‘ON’-population evolves according to

$$A^{\rm ON}(t)=\int_{-\infty}^{t}P_I(t|\hat{t})\,A^{\rm ON}(\hat{t})\,{\rm d}\hat{t} \qquad (17.42)$$

with the interval distribution

$$P_I(t|\hat{t})=\rho(t)\,\exp\left[-\int_{\hat{t}}^{t}\rho(t')\,{\rm d}t'\right]\,, \qquad (17.43)$$

where $\rho(t)=f(\eta(t-\hat{t})+h^{\rm ON}(t)+u_{\rm rest}-\vartheta)$. An analogous equation holds for the ‘OFF’-population.
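The interval distribution of Eq. (17.43) is normalized whenever the neuron is guaranteed to fire again eventually. A quick discretized check, assuming a constant hazard after the last spike at $\hat{t}=0$ (the rate value is an arbitrary illustration):

```python
import numpy as np

# Discretized interval distribution P_I(t|t_hat) of Eq. (17.43) for a
# hazard rho that is constant after the last spike at t_hat = 0.
dt = 0.01
t = np.arange(0.0, 50.0, dt)                   # ms since the last spike
rho = np.full(t.size, 0.2)                     # hazard, 1/ms (illustrative)
survivor = np.exp(-np.cumsum(rho) * dt)        # exp[-int_0^t rho(t') dt']
P_I = rho * survivor                           # Eq. (17.43)

print(P_I.sum() * dt)   # close to 1: the next spike occurs with certainty
```

With a time-dependent hazard (e.g. including the refractory term $\eta$) the same two lines apply unchanged; only the `rho` array changes.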

Finally, we use Eq. (17.36) to close the system of equations. The sum over all neurons can be split into one sum over the ‘ON’-population and another one over the ‘OFF’-population, of size $a N$ and $(1-a) N$, respectively. If the number $N$ of neurons is large, the overlap therefore is

$$m^{\nu}(t)={1\over 2}\left[A^{\rm ON}(t)-A^{\rm OFF}(t)\right]\,. \qquad (17.44)$$
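Eq. (17.44) follows from splitting the sum in Eq. (17.36) over the two populations; a quick numeric check, with illustrative sizes and spike counts:

```python
import numpy as np

# Check of Eq. (17.44): splitting the overlap sum of Eq. (17.36) into the
# 'ON' cells (xi = 1, size a*N) and 'OFF' cells (xi = 0, size (1-a)*N)
# gives m = (A_ON - A_OFF)/2. Sizes and rates are illustrative.
rng = np.random.default_rng(3)
N, a = 1000, 0.2
xi = np.zeros(N)
xi[: int(a * N)] = 1.0                          # pattern nu
S = rng.poisson(np.where(xi == 1, 0.3, 0.02))   # spike counts per neuron

m = (1.0 / (2 * a * (1 - a) * N)) * np.sum((xi - a) * S)   # Eq. (17.36)
A_on = S[xi == 1].sum() / (a * N)               # 'ON' population activity
A_off = S[xi == 0].sum() / ((1 - a) * N)        # 'OFF' population activity

print(np.isclose(m, 0.5 * (A_on - A_off)))      # True
```

The prefactors $1/(2a(1-a)N)$ and the population sizes $aN$, $(1-a)N$ cancel exactly, independent of the actual spike counts.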

Thus, the retrieval of pattern $\nu$ is controlled by a small number of macroscopic equations.

In an analogous sequence of calculations one needs to check that the overlap with the other patterns $\mu$ (with $\mu\neq\nu$) does not increase during retrieval of pattern $\nu$.

Fig. 17.11: Attractor network with spiking neurons. Memory retrieval in a network of 8000 excitatory neurons which stores 90 different patterns. Top: The spike raster shows 30 selected neurons, relabeled so that the first 5 neurons respond to pattern 1, the second group of 5 neurons to pattern 2, etc. Bottom: Overlap, defined here as $m^{\mu*}=A^{\rm ON}(t)$, with the first 6 patterns $1\leq\mu\leq 6$. After a partial cue ($t=8$, 18.5, 19.5, 40, 51 s), one of the patterns is retrieved and remains stable without further input during a delay period of 10 seconds. Occasionally a global input to the inhibitory neurons is given, leading to a reset of the network ($t=38$ s). After the reset, the network remains in the spontaneous activity state.

17.3.2 Excitatory and inhibitory neurons

Synaptic weights in the Hopfield model can take both positive and negative values. However, in the cortex, all connections originating from the same presynaptic neuron have the same sign, either excitatory or inhibitory. This experimental observation, called Dale’s law, gives rise to a primary classification of neurons as excitatory or inhibitory.

In Chapter 16 we started with models containing separate populations of excitatory and inhibitory neurons, but could show that the model dynamics are, under certain conditions, equivalent to an effective network where the excitatory populations excite themselves but inhibit each other. Thus explicit inhibition was replaced by an effective inhibition. Here we take the inverse approach and transform the effective mutual inhibition of neurons in the Hopfield network into an explicit inhibition via populations of inhibitory neurons.

In order to keep the arguments transparent, let us stick to discrete time and work with random patterns $\xi_i^{\mu}\in\{0,1\}$ with mean activity $(\sum_{i}\xi_i^{\mu})/N=a$. We take weights $w_{ij}=c'\sum_{\mu}(\xi_i^{\mu}-b)(\xi_j^{\mu}-a)$ and introduce a discrete-time spike variable $\sigma_i=0.5(S_i+1)$ so that $\sigma_i=1$ can be interpreted as a spike and $\sigma_i=0$ as the quiescent state. Under the assumption that each pattern $\mu$ has exactly $a N$ entries with $\xi_i^{\mu}=1$, we find that the input potential $h_i=\sum_{j}w_{ij}S_j$ can be rewritten with the spike variable $\sigma$

$$h_i(t)=2c'\sum_{j}\sum_{\mu}(\xi_i^{\mu}-b)\,\xi_j^{\mu}\,\sigma_j-2c'\sum_{j}\sum_{\mu}(\xi_i^{\mu}-b)\,a\,\sigma_j\,. \qquad (17.45)$$

In the following we choose $b=0$ and $c'=1/(4N)$. Then the first sum on the right-hand side of Eq. (17.45) describes excitatory and the second one inhibitory interactions.
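The rearrangement leading to Eq. (17.45) relies on each pattern having exactly $aN$ active units, which makes $\sum_j(\xi_j^{\mu}-a)$ vanish. A numeric check of the split, with illustrative sizes:

```python
import numpy as np

# Check that, when each pattern has exactly a*N active units, the drive
# h_i = sum_j w_ij S_j with S_j = 2*sigma_j - 1 equals the excitatory-
# minus-inhibitory split of Eq. (17.45) with b = 0 and c' = 1/(4N).
# All sizes here are illustrative.
rng = np.random.default_rng(4)
N, M, a, b = 200, 10, 0.1, 0.0
c = 1.0 / (4 * N)
xi = np.zeros((M, N))
for mu in range(M):                       # exactly a*N ones per pattern
    xi[mu, rng.choice(N, int(a * N), replace=False)] = 1.0

W = c * (xi - b).T @ (xi - a)             # weights w_ij
sigma = (rng.random(N) < 0.1).astype(float)   # spike variables sigma_j
S = 2 * sigma - 1                         # spin variables S_j = +/-1

h = W @ S
h_exc = 2 * c * (xi - b).T @ (xi @ sigma)                    # first term of (17.45)
h_inh = 2 * c * (xi - b).T @ (a * np.full(M, sigma.sum()))   # second term of (17.45)

print(np.allclose(h, h_exc - h_inh))      # True
```

The excitatory term depends on which assembly members fire, while the inhibitory term depends only on the total spike count $\sum_j\sigma_j$, which is what allows it to be delegated to an unspecific inhibitory population.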

In order to interpret the second term as arising from inhibitory neurons, we make the following assumptions. First, inhibitory neurons have a linear gain function and fire stochastically with probability

$${\rm Prob}\{\sigma_k=+1\,|\,h_k^{\rm inh}\}=g(h_k^{\rm inh}(t))\,\Delta t=\gamma\,h_k^{\rm inh}(t) \qquad (17.46)$$

where the constant $\gamma$ takes care of the units and $k$ is the index of the inhibitory neuron with $1\leq k\leq N^{\rm inh}$. Second, each inhibitory neuron $k$ receives input from $C$ excitatory neurons. Connections are random and of equal weight $w^{E\to I}=1/C$. Thus, the input potential of neuron $k$ is $h_k^{\rm inh}=(1/C)\sum_{j\in\Gamma_k}\sigma_j$ where $\Gamma_k$ is the set of presynaptic neurons. Third, the connection from an inhibitory neuron $k$ back to an excitatory neuron $i$ has weight

$$w^{I\to E}_{ik}={a\over\gamma\,N^{\rm inh}}\sum_{\mu}\xi_i^{\mu}\,. \qquad (17.47)$$

Thus, inhibitory weights onto a neuron $i$ which participates in many patterns are stronger than onto one which participates in only a few patterns. Fourth, the number $N^{\rm inh}$ of inhibitory neurons is large. Taken together, the four assumptions give rise to an average inhibitory feedback to each excitatory neuron proportional to $\sum_{j}\sum_{\mu}\xi_i^{\mu}\,a\,\sigma_j$. In other words, the inhibition caused by the inhibitory population is equivalent to the second term in Eq. (17.45).
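A Monte Carlo sketch of the four assumptions: averaging over the stochastic inhibitory spikes of Eq. (17.46), the feedback through the weights of Eq. (17.47) approaches $a\sum_{\mu}\xi_i^{\mu}\,\langle\sigma\rangle$, i.e. it is proportional to the second term of Eq. (17.45). All network sizes and rates are illustrative.

```python
import numpy as np

# Monte Carlo check of the four assumptions: averaged over the stochastic
# inhibitory spikes, the feedback onto excitatory neuron i approaches
# a * sum_mu xi_i^mu * mean(sigma). Sizes and rates are illustrative.
rng = np.random.default_rng(5)
N, M, a = 200, 10, 0.1
N_inh, C, gamma = 400, 50, 0.05
xi = (rng.random((M, N)) < a).astype(float)       # stored patterns
sigma = (rng.random(N) < 0.15).astype(float)      # excitatory spikes

w_IE = (a / (gamma * N_inh)) * xi.sum(axis=0)     # Eq. (17.47), per neuron i
h_inh = np.array([sigma[rng.choice(N, C, replace=False)].mean()
                  for _ in range(N_inh)])         # h_k^inh = (1/C) sum sigma_j

trials = 2000
fb = np.zeros(N)
for _ in range(trials):                           # inhibitory spikes, Eq. (17.46)
    sigma_inh = (rng.random(N_inh) < gamma * h_inh).astype(float)
    fb += w_IE * sigma_inh.sum()
fb /= trials

target = a * xi.sum(axis=0) * sigma.mean()        # predicted average feedback
print(np.allclose(fb, target, rtol=0.15))
```

Note how $\gamma$ cancels: the weaker the inhibitory gain, the stronger the feedback weights of Eq. (17.47), so the averaged feedback is independent of the inhibitory parameters, as the equivalence requires.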

Because of our choice $b=0$, patterns are only in weak competition with each other and several patterns can become active at the same time. In order to also limit the total activity of the network, it is useful to add a second pool of inhibitory neurons which turns on whenever the total number of spikes in the network surpasses $a N$. Note that biological cortical tissue contains many different types of inhibitory interneurons which are thought to play different functional roles.

Fig. 17.11 shows that the above arguments carry over to the case of integrate-and-fire neurons in continuous time. We emphasize that the network of 8000 excitatory and two groups of inhibitory neurons (2000 neurons each) has stored 90 patterns with activity $a\approx 0.1$. Therefore each neuron participates in many patterns (111).

In practice, working memory models with spiking neurons require some parameter tuning. Adding a mechanism of synaptic short-term facilitation (cf. Chapter 3) to such models improves the stability of memory retrieval during the delay period (350).