# 14.2 Recurrent Networks and Interacting Populations

The integral equation (14.5) derived in the previous section can be applied to interacting populations of connected neurons. In Section 14.2.1 we extend the mathematical framework of the integral equations to describe several populations that interact with each other.

Using the methods discussed in Chapter 12, we can find the activity $A_{0}$ of a recurrent network in the regime of stationary asynchronous firing (Section 14.2.2). Using the linear response filter, which will be derived in Section 14.3, the stability of these solutions can then be analyzed for different levels of noise and arbitrary delays (Section 14.2.3).

# 14.2.1 Several populations and networks with self-interaction

In Section 14.1 we discussed a single homogeneous population of $N$ neurons. The formalism of integral equations introduced there can readily be adapted to several populations with recurrent interactions within and across populations.

We consider a coupled network of spiking neurons of the renewal type, such as nonlinear (or leaky) integrate-and-fire neurons with escape noise or a Spike Response Model SRM${}_{0}$. Neurons within a population have the same set of parameters whereas neurons of different populations can have different parameters. The activity of population $k$ is described by Eq. (14.5)

 $A_{k}(t)=\int_{-\infty}^{t}P_{I_{k}}(t|\hat{t})A_{k}(\hat{t}){\text{d}}\hat{t}\,$ (14.17)

where $P_{I_{k}}(t|\hat{t})$ is the input-dependent interval distribution of population $k$. The input to population $k$ is

 $I_{k}(t)=\sum_{n}N_{n}\,w_{kn}\int_{0}^{\infty}\alpha_{kn}(s)\,A_{n}(t-s)\,{\text{d}}s+I_{k}^{\rm ext}\,.$ (14.18)

Here $N_{n}$ is the number of neurons in the presynaptic population $n$, $w_{kn}$ is the strength of a synaptic connection from a neuron in population $n$ to a neuron in population $k$, and $\alpha_{kn}$ is the time course of the postsynaptic current into a neuron in population $k$, caused by spike firing of a neuron in population $n$. Conductance-based synapses are treated in the current-based approximation (cf. Section 13.6.3 in Ch. 13). Connections can be excitatory or inhibitory, depending on the sign of $w_{kn}$. Because the overall strength of the connection is incorporated in $w_{kn}$, we can, without loss of generality, assume the normalization $\int_{0}^{\infty}\alpha_{kn}(s)\,{\text{d}}s=1$.

The noise level of each neuron in population $k$ is fixed to a value of $\sigma_{k}$. The choice of noise is arbitrary. If our aim is to mimic stochastic spike arrivals in randomly connected networks by an escape rate, a suitable choice of escape function has been given in Chapter 9. In practice this implies that noise in the input to a neuron is effectively described and replaced by escape noise in its output.

In the following we assume a fully connected network with interaction strength $w_{kn}=J_{kn}/N_{n}$; cf. Chapter 12. If the theory is applied to a random network where each neuron in population $k$ has a fixed number of presynaptic partners $C_{kn}$ in population $n$, we can use the same theory, except that (i) we use $w_{kn}=J_{kn}/C_{kn}$; and (ii) we increase the noise level $\sigma$ in the escape rate function so as to mimic the additional noise caused by stochastic spike arrival; see Section 9.4.
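As an illustration of Eq. (14.18), the input to a population can be computed on a discrete time grid by convolving the presynaptic activities with normalized synaptic kernels. The following sketch is purely illustrative: the two-population sizes, coupling constants, and the exponential form of $\alpha_{kn}$ are assumptions, not taken from the text.

```python
import numpy as np

# Discretized Eq. (14.18) for two coupled populations (all values assumed).
dt = 0.1                 # time step (ms)
t = np.arange(0.0, 200.0, dt)

N = np.array([800, 200])             # neurons per population (assumed)
J = np.array([[1.0, -2.0],           # J[k, n]: coupling population n -> k
              [1.5, -1.0]])
w = J / N[None, :]                   # w_kn = J_kn / N_n (fully connected)

tau_s = 5.0                          # synaptic time constant (ms), assumed
alpha = np.exp(-t / tau_s) / tau_s   # synaptic kernel alpha(s)
alpha /= alpha.sum() * dt            # enforce int alpha(s) ds = 1 exactly

# Given activity traces A_n(t) in kHz (here: constant rates 10 Hz and 30 Hz)
A = np.vstack([0.010 * np.ones_like(t),
               0.030 * np.ones_like(t)])

I_ext = np.array([0.5, 0.2])         # external drive, assumed

def population_input(k):
    """I_k(t) = sum_n N_n w_kn (alpha * A_n)(t) + I_k^ext, cf. Eq. (14.18)."""
    I = np.full_like(t, I_ext[k])
    for n in range(2):
        conv = np.convolve(A[n], alpha)[:len(t)] * dt
        I += N[n] * w[k, n] * conv
    return I

I0 = population_input(0)
```

Because the kernel is normalized and the rates are constant, the input settles to $I_{k}^{\rm ext}+\sum_{n}J_{kn}A_{n}$ after the synaptic transient, which provides a quick sanity check of the discretization.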

# 14.2.2 Stationary states and fixed points of activity

We are interested in the value $A_{k}(t)=A_{k,0}$ of the population activity in the stationary state of asynchronous firing. We recall from Ch. 12 that stationary activity means that the expected value of the population activity is constant whereas in a simulation the actual value $A(t)$ always fluctuates (Fig. 14.5). To lighten the notation, we consider a single population $k$ with self-interaction $w_{kk}=J_{0}/N_{k}$ and drop the index $k$ in the following. The input from other populations is summarized as a constant external input $I^{\rm ext}$. According to our assumptions, all neurons in the population have the same parameters and can be described by time-dependent renewal theory. The level of noise is indicated by an index $\sigma$.

A stationary state of asynchronous firing requires that the total input $I_{0}$ is constant (or at least stationary). In Chapter 12, we have seen that the population activity $A_{0}$ in the state of asynchronous firing is given by the single-neuron firing rate $\nu=g_{\sigma}(I_{0})$. We thus have

 $A_{0}=\nu=g_{\sigma}(I_{0})\,.$ (14.19)

Given constant activity $A_{0}$ of the population and constant external input $I_{0}^{\rm ext}$, the total input $I_{0}$ to each neuron is constant. From Eq. (14.18) we find the total input to a neuron of the population

 $I_{0}=I_{0}^{\rm ext}+J_{0}\int_{0}^{\infty}\alpha(s)\,A_{0}\,{\text{d}}s=I_{0}^{\rm ext}+J_{0}\,A_{0}\,.$ (14.20)

Eqs. (14.19) and (14.20) together yield the following equation for the population activity $A_{0}$:

 $A_{0}=g_{\sigma}(J_{0}A_{0}+I_{0}^{\rm ext}).$ (14.21)

This result agrees with the general result found earlier in Chapter 12 for the stationary state in a network with self-coupling. Solutions can be found graphically (Fig. 14.6) using the same method as in Chapter 12.
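Besides the graphical method, the self-consistency equation (14.21) can be solved numerically. The sketch below uses a hypothetical sigmoidal gain function as a stand-in for $g_{\sigma}$ (the model's actual gain function appears later in Eq. (14.30)) and solves Eq. (14.21) by damped fixed-point iteration; all parameter values are assumptions chosen for illustration.

```python
import math

# Hypothetical smooth gain function standing in for g_sigma(I).
def g(I, I_theta=1.0, sigma=0.2, A_max=0.2):
    """Sigmoidal rate function (kHz), saturating at A_max (assumed shape)."""
    return A_max / (1.0 + math.exp(-(I - I_theta) / sigma))

def fixed_point(J0, I_ext, A_init=0.05, n_iter=2000):
    """Solve A0 = g(J0*A0 + I_ext), Eq. (14.21), by damped iteration."""
    A = A_init
    for _ in range(n_iter):
        # Damping (factor 0.5) helps convergence when the map is steep.
        A = 0.5 * A + 0.5 * g(J0 * A + I_ext)
    return A

A0 = fixed_point(J0=4.0, I_ext=0.8)
```

At convergence, $A_{0}$ satisfies Eq. (14.21) to numerical precision; unstable fixed points (middle intersections in the graphical construction) are not found by this iteration and require, e.g., bisection on $A-g_{\sigma}(J_{0}A+I^{\rm ext})$.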

The advantage of the integral equation approach is two-fold. First, for the integral equations there exist transparent mathematical tools to analyze the stability of the stationary solution, as shown in Section 14.2.3.

Second, we can write down the gain function $g_{\sigma}(I)$ and an expression for $A_{0}$ in a compact form, as will be shown now. Because the input is constant, the state of each neuron depends only on the time since the last spike $t-\hat{t}$. We are thus in the situation of stationary renewal theory. Therefore, the survivor function and the interval distribution cannot depend explicitly upon the absolute time, but only on the time difference $s=t-\hat{t}$. Hence we set

 $S_{I}(\hat{t}+s|\hat{t})\;\rightarrow\;S_{0}(s)\,,$ (14.22)

 $P_{I}(\hat{t}+s|\hat{t})\;\rightarrow\;P_{0}(s)\,.$ (14.23)

The value of the stationary activity $A_{0}$ follows now directly from the normalization Eq. (14.8),

 $1=A_{0}\int_{0}^{\infty}S_{0}(s)\,{\text{d}}s\,.$ (14.24)

We use $\frac{d}{ds}S_{0}(s)=-P_{0}(s)$ and integrate by parts

 $1=A_{0}\int_{0}^{\infty}s\,P_{0}(s)\,{\text{d}}s=A_{0}\,\langle T\rangle\,,$ (14.25)

where the last equality follows from the definition of the mean interspike interval (Chapter 9). Hence

 $A_{0}={1\over\int_{0}^{\infty}S_{0}(s)\,{\text{d}}s}={1\over\langle T\rangle}=\nu\,.$ (14.26)

The result has an intuitively appealing interpretation: if everything is constant, then averaging over time (for a single neuron) is the same as averaging over a population of identical neurons; cf. the discussion in Chapter 12.
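The chain of equalities (14.24)-(14.26) can be checked numerically for any stationary hazard. The sketch below integrates $dS_{0}/ds=-\rho(s)\,S_{0}(s)$ for an illustrative hazard — zero during an absolute refractory period, constant afterwards; all parameter values are assumptions — and verifies that $\int_{0}^{\infty}S_{0}(s)\,{\text{d}}s$ equals the mean interval $\langle T\rangle$.

```python
import math

# Illustrative hazard: rho(s) = 0 for s < delta_abs, constant rho afterwards.
# Analytically: int S0 ds = <T> = delta_abs + 1/rho = 12 ms for these values.
dt = 0.001            # integration step (ms)
delta_abs = 2.0       # absolute refractory period (ms), assumed value
rho = 0.1             # constant hazard after refractoriness (1/ms), assumed
s_max = 200.0

S = 1.0               # survivor function S0(s), starts at 1
int_S = 0.0           # accumulates int_0^inf S0(s) ds
mean_T = 0.0          # accumulates int_0^inf s * P0(s) ds
for i in range(int(s_max / dt)):
    s = i * dt
    r = rho if s >= delta_abs else 0.0
    int_S += S * dt
    mean_T += s * (r * S) * dt     # P0(s) = rho(s) * S0(s)
    S *= math.exp(-r * dt)         # integrate dS0/ds = -rho(s) * S0(s)

A0 = 1.0 / int_S                   # Eq. (14.26)
```

Both integrals converge to $\langle T\rangle=\Delta^{\rm abs}+1/\rho=12$ ms, so the stationary activity comes out as $A_{0}\approx 0.083$ kHz, consistent with $A_{0}=1/\langle T\rangle$.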

Example: Population of SRM${}_{0}$ neurons with escape noise

Consider a population of SRM${}_{0}$ neurons with exponential escape noise and membrane potential

 $u(t)=\eta(t-\hat{t})+\int_{0}^{\infty}\kappa(s)\,I(t-s)\,{\text{d}}s=\eta(t-\hat{t})+h(t)\,.$ (14.27)

We assume that the input current is constant so that $h(t)=h_{0}=R\,I_{0}$ with $R=\int_{0}^{\infty}\kappa(s)\,{\text{d}}s$. With exponential escape noise, the hazard function (Chapter 9) is $\rho(t-\hat{t})=\rho_{0}\exp[\beta(h_{0}+\eta(t-\hat{t}))]$ and the gain function is given by Eq. (14.26)

 $A_{0}=\left[\int_{0}^{\infty}{\text{d}}x\,\exp\left(-\int_{0}^{x}\rho_{0}\,e^{\beta[h_{0}+\eta(s)]}\,{\text{d}}s\right)\right]^{-1}=g_{\sigma}(I_{0})$ (14.28)

where $\sigma=1/\beta$ indicates the level of noise.

We now set $\beta=1$ and consider a specific choice of $\eta$ that includes absolute and relative refractoriness

 $\eta(t)=\begin{cases}-\infty&{\rm if}\;\;t<\Delta^{\rm abs}\\ \ln\left[1-e^{-(t-\Delta^{\rm abs})/\tau}\right]&{\rm otherwise.}\end{cases}$ (14.29)

With this choice of $\eta$ it is possible to calculate the gain function in terms of the incomplete gamma function $\gamma(a,x)=\int_{0}^{x}t^{a-1}e^{-t}dt$.

 $A_{0}=g_{\sigma}(I_{0})=\left(\Delta^{\rm abs}+\frac{\tau\,\gamma(r,r)}{r^{r}\,e^{-r}}\right)^{-1}$ (14.30)

where $r=\tau\rho_{0}e^{h_{0}}$ (see Exercises). The result is shown in Fig. 14.6A.
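The same gain function can be evaluated without the incomplete gamma function by integrating Eq. (14.28) directly with the refractoriness kernel (14.29). The sketch below is a minimal implementation for $\beta=1$; the parameter values are assumptions chosen only for illustration.

```python
import math

# Direct numerical evaluation of Eq. (14.28) with eta from Eq. (14.29),
# for beta = 1. For s >= delta_abs the hazard is
#   rho(s) = rho0 * exp(h0 + eta(s)) = rho0 * e^h0 * (1 - e^{-(s-delta_abs)/tau}),
# and rho(s) = 0 during the absolute refractory period (eta = -infinity).
def gain(h0, rho0=1.0, tau=4.0, delta_abs=4.0, ds=0.001, s_max=200.0):
    """A0 = [ int_0^inf exp(-int_0^x rho(s) ds) dx ]^{-1}, cf. Eq. (14.28)."""
    integral = 0.0      # outer integral over x (mean interval <T>)
    hazard_int = 0.0    # running inner integral int_0^x rho(s) ds
    for i in range(int(s_max / ds)):
        s = i * ds
        integral += math.exp(-hazard_int) * ds
        if s >= delta_abs:
            hazard_int += (rho0 * math.exp(h0)
                           * (1.0 - math.exp(-(s - delta_abs) / tau)) * ds)
    return 1.0 / integral

A0 = gain(h0=1.0)
```

Since the mean interval can never be shorter than $\Delta^{\rm abs}$, the result is bounded by $A_{0}<1/\Delta^{\rm abs}$, and the gain increases monotonically with $h_{0}$ — both properties visible in Fig. 14.6A.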

If the population of neurons is coupled to itself with synapses $w_{0}=J_{0}/N$, the stationary value of asynchronous activity can be found graphically by plotting in the same graph $A_{0}=g_{\sigma}(I_{0})$ and

 $A_{0}=[I_{0}-I_{0}^{\rm ext}]/J_{0}\,,$ (14.31)

which follows from Eq. (14.20); see Fig. 14.6B.

# 14.2.3 Oscillations and stability of the stationary state (*)

In the previous subsection we have assumed that the network is in a state of asynchronous firing. In this section, we study whether asynchronous firing can indeed be a stable state of a fully connected population of spiking neurons – or whether the connectivity drives the network toward oscillations. For the sake of simplicity, we restrict the analysis to SRM${}_{0}$ neurons; the same methods can, however, be applied to integrate-and-fire neurons or spiking neurons formulated in the framework of Generalized Linear Models.

For SRM${}_{0}$ neurons (cf. Chapter 9), the membrane potential is given by

 $u_{i}(t)=\eta(t-\hat{t}_{i})+h(t)$ (14.32)

where $\eta(t-\hat{t}_{i})$ is the effect of the last firing of neuron $i$ (i.e., the spike afterpotential) and $h(t)$ is the total postsynaptic potential caused by presynaptic firing. If all presynaptic spikes are generated within the homogeneous population under consideration, we have

 $h(t)=\sum_{j}w_{ij}\sum_{f}\epsilon_{0}(t-t_{j}^{(f)})=J_{0}\int_{0}^{\infty}\epsilon_{0}(s)\,A(t-s)\,{\text{d}}s\,.$ (14.33)

Here $\epsilon_{0}(t-t_{j}^{(f)})$ is the time course of the postsynaptic potential generated by a spike of neuron $j$ at time $t_{j}^{(f)}$ and $w_{ij}=J_{0}/N$ is the strength of lateral coupling within the population. The second equality sign follows from the definition of the population activity, i.e., $A(t)=N^{-1}\sum_{j}\sum_{f}\delta(t-t_{j}^{(f)})$; cf. Chapter 12. For the sake of simplicity, we have assumed in Eq. (14.33) that there is no external input.

The state of asynchronous firing corresponds to a fixed point $A(t)=A_{0}$ of the population activity. We have already seen in the previous subsection as well as in Chapter 12 how the fixed point $A_{0}$ can be determined either numerically or graphically. To analyze its stability we assume that for $t>0$ the activity is subject to a small perturbation,

 $A(t)=A_{0}+A_{1}\,e^{i\omega t+\lambda t}$ (14.34)

with $A_{1}\ll A_{0}$. This perturbation in the activity induces a perturbation in the input potential,

 $h(t)=h_{0}+h_{1}\,e^{i\omega t+\lambda t}\,.$ (14.35)

The perturbation of the potential causes some neurons to fire earlier (when the change in $h$ is positive) and others to fire later (when the change is negative). The perturbation may therefore build up ($\lambda>0$, the asynchronous state is unstable) or decay back to zero ($\lambda<0$, the asynchronous state is stable). At the transition between the regions of stability and instability, the amplitude of the perturbation remains constant ($\lambda=0$, marginal stability of the asynchronous state). These transition points, defined by $\lambda=0$, are determined now.

We start from the population integral equation $A(t)=\int_{-\infty}^{t}P_{I}(t|\hat{t})\,A(\hat{t})\,{\text{d}}\hat{t}$ that has been introduced in Section 14.1. Here $P_{I}(t|\hat{t})$ is the input-dependent interval distribution, i.e., the probability density of emitting a spike at time $t$ given that the last spike occurred at time $\hat{t}$. The linearized response of the population activity to a small change $\Delta I$ in the input can, under general smoothness assumptions, always be written in the form

 $A(t)=A_{0}+\int_{0}^{\infty}G(s)\,\Delta I(t-s)\,{\text{d}}s\,,$ (14.36)

where $G(s)$ is the linear response filter in the time domain. The Fourier transform $\hat{G}(\omega)$ is the frequency dependent gain function. The explicit form of the filter will be derived in the framework of the integral equations in Section 14.3.

Instead of thinking of a stimulation by an input current $\Delta I(t)$, it is more convenient to work with the input potential $h(t)=\int_{0}^{\infty}\kappa(s)I(t-s)\,{\text{d}}s$, because the neuron model equations have been defined on the level of the potential; cf. Eq. (14.32). We use $\Delta A(t)=A_{1}\,e^{i\omega t+\lambda t}$ and $\Delta h(t)=h_{1}\,e^{i\omega t+\lambda t}$ in Eq. (14.36) and search for the critical value $\lambda=0$ where the stable solution turns into an unstable one. After cancellation of a common factor $A_{1}\exp(i\omega t)$ the result can be written in the form

 $1=J_{0}\,{\hat{G}(\omega)\,\hat{\epsilon}(\omega)\over\hat{\kappa}(\omega)}=S_{f}(\omega)\,\exp[i\,\Phi(\omega)]\,.$ (14.37)

Here, $\hat{\kappa}(\omega)$ and $\hat{\epsilon}(\omega)$ are the Fourier transforms of the membrane kernel $\kappa(s)$ and of the time course $\epsilon_{0}(s)$ of the postsynaptic potential caused by an input spike, respectively. Typically, $\kappa(s)=(R/\tau_{m})\,\exp(-s/\tau_{m})$ where $\tau_{m}$ is the membrane time constant. If the synaptic input is a short current pulse of unit charge, $\epsilon_{0}$ and $\kappa$ are identical, but we would also like to include synaptic input currents with arbitrary time dependence and therefore keep separate symbols for $\epsilon_{0}$ and $\kappa$. The second equality sign defines the real-valued functions $S_{f}(\omega)$ and $\Phi(\omega)$.

Equation (14.37) is thus equivalent to

 $S_{f}(\omega)=1\qquad\text{and}\qquad\Phi(\omega)\;{\rm mod}\;2\pi=0\,.$ (14.38)

Solutions of Eq. (14.38) yield bifurcation points where the asynchronous firing state loses its stability toward an oscillation with frequency $\omega$.

We have written Eq. (14.38) as a combination of two requirements, i.e., an amplitude condition $S_{f}(\omega)=1$ and a phase condition $\Phi(\omega)\;{\rm mod}\;2\pi=0$. Let us discuss the general structure of the two conditions. First, if $S_{f}(\omega)\leq 1$ for all frequencies $\omega$, an oscillatory perturbation cannot build up. All oscillations decay and the state of asynchronous firing is stable. We conclude from Eq. (14.37) that by increasing the absolute value $|J_{0}|$ of the coupling constant, it is always possible to increase $S_{f}(\omega)$. The amplitude condition can thus be met if the excitatory or inhibitory feedback from other neurons in the population is sufficiently strong. Second, for a bifurcation to occur we need in addition that the phase condition is met. Loosely speaking, the phase condition implies that the feedback from other neurons in the network must arrive just in time to keep the oscillation going. Thus the axonal signal transmission time and the rise time of the postsynaptic potential play a critical role during oscillatory activity (4; 181; 519; 523; 182; 78; 79; 183).

Example: Slow noise and phase diagram of instabilities

Let us apply the above results to leaky integrate-and-fire neurons with slow noise in the parameters. After each spike the neuron is reset to a value $u_{r}$ which is drawn from a Gaussian distribution with mean $\bar{u}_{r}$ and width $\sigma_{r}\ll[\vartheta-\bar{u}_{r}]$. After the reset, the membrane potential evolves deterministically according to

 $\tau_{m}{{\text{d}}u\over{\text{d}}t}=-u+R\,I(t)\,.$ (14.39)

The next firing occurs if the membrane potential hits the threshold $\vartheta$.

We assume that neurons are part of a large population $N\to\infty$ which is in a state of asynchronous firing with activity $A_{0}$. In this case, each neuron receives a constant input $I_{0}$. For constant input, a neuron which was reset to a value $\bar{u}_{r}$ will fire again after a period $T_{0}$. Because of the noisy reset with $\sigma_{r}\ll[\vartheta-\bar{u}_{r}]$, the interval distribution is approximately a Gaussian centered at $T_{0}$. We denote the standard deviation of the interval distribution by $\sigma$. The stationary population activity is simply $A_{0}=1/T_{0}$. The width of the Gaussian interval distribution $\sigma$ is linearly related to $\sigma_{r}$ with a proportionality factor that represents the (inverse of the) slope of the membrane potential at the firing threshold (183).
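This argument can be made concrete in a small numerical sketch. Solving Eq. (14.39) for constant input shows that the interval is a deterministic function of the reset value, $T(u_{r})=\tau_{m}\,\ln[(RI_{0}-u_{r})/(RI_{0}-\vartheta)]$; sampling $u_{r}$ from a Gaussian then yields an approximately Gaussian interval distribution whose width follows from linearizing $T(u_{r})$. All parameter values below are assumptions.

```python
import math
import random

# LIF with noisy reset, Eq. (14.39): u(t) = R*I0 + (u_r - R*I0) exp(-t/tau_m).
# Threshold crossing u(T) = theta gives T(u_r) below. Values are illustrative.
random.seed(0)
tau_m, R, I0 = 10.0, 1.0, 2.0       # membrane time constant (ms), drive R*I0
theta, u_r_mean, sigma_r = 1.0, 0.0, 0.1

def interval(u_r):
    """Deterministic interspike interval for reset value u_r."""
    return tau_m * math.log((R * I0 - u_r) / (R * I0 - theta))

T0 = interval(u_r_mean)             # unperturbed period; A0 = 1/T0
samples = [interval(random.gauss(u_r_mean, sigma_r)) for _ in range(100000)]
mean_T = sum(samples) / len(samples)
var_T = sum((T - mean_T) ** 2 for T in samples) / len(samples)
sigma_T = math.sqrt(var_T)

# Linearization: sigma_T ~ sigma_r * |dT/du_r| = sigma_r * tau_m / (R*I0 - u_r_mean)
sigma_pred = sigma_r * tau_m / (R * I0 - u_r_mean)
```

The proportionality factor $|dT/du_{r}|$ plays the role of the inverse slope mentioned in the text: a small shift of the reset value translates into a time shift of the threshold crossing via the slope of the trajectory.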

In order to analyze the stability of the stationary state, we have to specify the time course of the excitatory or inhibitory postsynaptic potential $\epsilon_{0}$. For the sake of simplicity we choose a delayed alpha function,

 $\epsilon_{0}(s)={s-\Delta^{\rm ax}\over\tau^{2}}\,\exp\left(-{s-\Delta^{\rm ax}\over\tau}\right)\,{\mathcal{H}}(s-\Delta^{\rm ax})\,.$ (14.40)

The Fourier transform of $\epsilon_{0}$ has amplitude $|\hat{\epsilon}(\omega)|=(1+\omega^{2}\,\tau^{2})^{-1}$ and phase $\psi(\omega)=\omega\,\Delta^{\rm ax}+2\,\arctan(\omega\,\tau)$. Note that a change in the delay $\Delta^{\rm ax}$ affects only the phase of the Fourier transform and not the amplitude.
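The stated amplitude and phase can be checked by evaluating $\hat{\epsilon}(\omega)=\int_{0}^{\infty}\epsilon_{0}(s)\,e^{-i\omega s}\,{\text{d}}s$ numerically for the kernel (14.40); the parameter values and the probe frequency below are assumptions.

```python
import cmath
import math

# Numerical Fourier transform of the delayed alpha function, Eq. (14.40),
# to check amplitude 1/(1 + omega^2 tau^2) and phase lag
# psi(omega) = omega*Delta_ax + 2*arctan(omega*tau). Values are illustrative.
tau, delta_ax = 2.0, 1.0             # ms

def eps0(s):
    """Delayed alpha function, zero before the axonal delay."""
    if s < delta_ax:
        return 0.0
    x = s - delta_ax
    return (x / tau**2) * math.exp(-x / tau)

def eps_hat(omega, ds=0.001, s_max=100.0):
    """hat(eps)(omega) = int_0^inf eps0(s) exp(-i omega s) ds (midpoint rule)."""
    total = 0.0 + 0.0j
    for i in range(int(s_max / ds)):
        s = (i + 0.5) * ds
        total += eps0(s) * cmath.exp(-1j * omega * s) * ds
    return total

omega = 2.0 * math.pi / 8.0          # probe frequency (rad/ms), assumed
z = eps_hat(omega)
amp = abs(z)
phase_lag = -cmath.phase(z)          # psi(omega), valid while psi < pi
```

Analytically $\hat{\epsilon}(\omega)=e^{-i\omega\Delta^{\rm ax}}/(1+i\omega\tau)^{2}$, which reproduces both stated expressions; the delay factor $e^{-i\omega\Delta^{\rm ax}}$ has unit modulus, confirming that $\Delta^{\rm ax}$ shifts only the phase.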

Figure 14.7 shows $S_{f}$ defined in the second equality sign of Eq. (14.37) as a function of $\omega\,T_{0}$. Since $S_{f}=1$ is a necessary condition for a bifurcation, it is apparent that bifurcations can occur only for frequencies $\omega\approx\omega_{n}=n\,{2\pi/T_{0}}$ with integer $n$ where $T_{0}=1/A_{0}$ is the typical inter-spike interval. We also see that higher harmonics are only relevant for low levels of noise. At a high noise level, however, the asynchronous state is stable even with respect to perturbations at $\omega\approx\omega_{1}$.

A bifurcation at $\omega\approx\omega_{1}$ implies that the period of the perturbation is identical to the firing period of individual neurons. Higher harmonics correspond to instabilities of the asynchronous state toward cluster states (195; 181; 143; 196; 79): each neuron fires with a mean period of $T_{0}$, but the population of neurons splits up into several groups that fire alternately, so that the overall activity oscillates several times faster. In terms of the terminology introduced in Chapter 13, the network is in the synchronous regular (SR) state of fast oscillations.

Figure 14.7 illustrates the amplitude condition for the solution of Eq. (14.38). The numerical solutions of the full equation (14.38) for different values of the delay $\Delta^{\rm ax}$ and different levels of the noise $\sigma$ are shown in the bifurcation diagram of Fig. 14.8. The insets show simulations that illustrate the behavior of the network at certain combinations of transmission delay and noise level.

Let us consider for example a network with transmission delay $\Delta^{\rm ax}=2$ ms, corresponding to an $x$-value of $\Delta^{\rm ax}/T_{0}=0.25$ in Fig. 14.8. The phase diagram predicts that, at a noise level of $\sigma=0.5$ ms, the network is in a state of asynchronous firing. The simulation shown in the inset in the upper right-hand corner confirms that the activity fluctuates around a constant value of $A_{0}=1/T_{0}=0.125$ kHz.

If the noise level of the network is significantly reduced, the system crosses the short-dashed line. This line is the boundary at which the constant activity state becomes unstable with respect to an oscillation with $\omega\approx 3\,(2\pi/T_{0})$. Accordingly, a network simulation with a noise level of $\sigma=0.1$ ms exhibits an oscillation of the population activity with period $T^{\rm osc}\approx T_{0}/3\approx 2.6$ ms.

Keeping the noise level constant but reducing the transmission delay corresponds to a horizontal move across the phase diagram in Fig. 14.8. At some point, the system crosses the solid line that marks the transition to an instability with frequency $\omega_{1}=2\pi/T_{0}$. Again, this is confirmed by a simulation shown in the inset in the upper left corner. If we now decrease the noise level, the oscillation becomes even more pronounced (bottom left).

In the limit of low noise, the asynchronous network state is unstable for virtually all values of the delay. The region of the phase diagram in Fig. 14.8 around $\Delta^{\rm ax}/T_{0}\approx 0.1$ which looks stable hides instabilities with respect to the higher harmonics $\omega_{6}$ and $\omega_{5}$ which are not shown. We emphasize that the specific location of the stability borders depends on the form of the postsynaptic response function $\epsilon$. The qualitative features of the phase diagram in Fig. 14.8 are generic and hold for all kinds of response kernels.

What happens if the excitatory interaction is replaced by inhibitory coupling? A change in the sign of the interaction corresponds to a phase shift of $\pi$. For each harmonic, the region along the delay axis where the asynchronous state is unstable for excitatory coupling (cf. Fig. 14.8) becomes stable for inhibition, and vice versa. In other words, we simply have to shift the instability tongues for each harmonic frequency $\omega_{n}=2n\pi/T_{0}$ horizontally by half the period of the harmonic, i.e., $\Delta^{\rm ax}/T_{0}=1/(2n)$. Apart from this shift, the pattern remains the same.