14 The Integral-equation Approach

14.6 Heterogeneity and Finite Size

Neuronal populations in biology are neither completely homogeneous nor infinitely large. To treat heterogeneity in local neuronal parameters, the variability of a parameter from one neuron to the next is often replaced by slow noise in that parameter. For example, a population of integrate-and-fire neurons in which the reset value $u_{r}$ differs between neurons is replaced by a population where the reset value is chosen randomly after each firing (and not only once at the beginning). Such a model of slow noise in the parameters has been discussed in the example of Section 14.3. The replacement of heterogeneity by slow noise neglects, however, correlations that would be present in a truly heterogeneous model. To replace a heterogeneous model by a noisy version of a homogeneous model is somewhat ad hoc, but common practice in the literature.
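As a concrete illustration, the following minimal sketch simulates a homogeneous population of leaky integrate-and-fire neurons in which the reset value is redrawn from a fixed distribution after every spike. All parameter values (membrane time constant, threshold, drive, reset range) are hypothetical choices for illustration, not taken from the text.

```python
import numpy as np

# Minimal sketch: a population of leaky integrate-and-fire neurons in which
# the reset value is redrawn after every spike ('slow noise in the
# parameters'), replacing a fixed, heterogeneous reset per neuron.
# All parameter values are hypothetical, chosen for illustration only.
rng = np.random.default_rng(0)
N, T, dt = 1000, 1.0, 1e-4                 # neurons, duration [s], step [s]
tau_m, u_thresh, I_ext = 20e-3, 1.0, 1.2   # time constant [s], threshold, drive
u = rng.uniform(0.0, 0.5, N)               # initial membrane potentials
A = []                                     # population activity A(t) [Hz]
for _ in range(int(T / dt)):
    u += dt / tau_m * (-u + I_ext)         # leaky integration
    spiking = u >= u_thresh
    # slow parameter noise: draw a fresh reset value u_r at every firing
    u[spiking] = rng.uniform(0.0, 0.5, spiking.sum())
    A.append(spiking.sum() / (N * dt))
print("mean population rate:", np.mean(A), "Hz")
```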

The second question is what happens if we relax the condition of a large network. For $N\to\infty$ the population activity shows no fluctuations, and this fact has been used for the derivation of the population equation. For systems of finite size, fluctuations are important since they limit the amount of information that can be transmitted by the population activity. For a population without internal coupling ($J_{0}=0$), fluctuations can be calculated directly from the interval distribution $P_{I}(t\,|\,\hat{t})$ if the population consists of neurons that can be described by renewal theory; cf. Chapter 9. For networks with recurrent connections, several attempts toward a description of the fluctuations have been made (491; 341; 299). Here we present a different approach.
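For the uncoupled case, the claim is easy to check numerically. The sketch below estimates the fluctuations by direct simulation rather than by evaluating the interval distribution analytically: it draws $N$ independent stationary renewal spike trains (gamma-distributed interspike intervals, an arbitrary illustrative choice) and compares the variance of the binned population activity with the finite-size prediction $\langle A\rangle/(N\,\Delta t)$ that is derived later in this section.

```python
import numpy as np

# Minimal sketch: fluctuations of the population activity of N independent
# (uncoupled, J_0 = 0) neurons described by renewal theory.  Interspike
# intervals are gamma-distributed; the gamma parameters and all other
# numbers are illustrative choices.
rng = np.random.default_rng(1)
N, T, dt = 400, 200.0, 1e-3                # neurons, duration [s], bin [s]
shape, mean_isi = 4.0, 0.05                # gamma ISIs, mean 50 ms (~20 Hz)
counts = np.zeros(int(T / dt))
for _ in range(N):                         # one independent renewal train each
    t = rng.gamma(shape, mean_isi / shape) * rng.uniform()  # rough random phase
    while t < T:
        counts[int(t / dt)] += 1
        t += rng.gamma(shape, mean_isi / shape)
A = counts / (N * dt)                      # binned population activity
print("Var[A] measured    :", A.var())
print("Var[A] ~ <A>/(N dt):", A.mean() / (N * dt))
```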

If we consider a network with a finite number $N$ of neurons, the integral equation (14.5), which describes the evolution of the population activity $A(t)$ in terms of the input-dependent interval distribution $P_{I}(t|\hat{t})$, should be written more carefully with expectation signs,

\langle A(t)\rangle=\int_{-\infty}^{t}P_{I}(t|\hat{t})\,A(\hat{t})\,{\text{d}}\hat{t}\,, \qquad (14.97)

so as to emphasize that the left-hand side is the expected population activity at time $t$, given the observed population activity at earlier times $\hat{t}$. In other words, $N\,\langle A(t)\rangle\,\Delta t=N\,\langle m_{0}(t)\rangle$ is the expected number of spikes to occur in a short interval $\Delta t$. Here we have defined $m_{0}(t)$ as the fraction of neurons that fire in a time step $\Delta t$, just as in the previous section. Given the past input for $t^{\prime}<t$ (which is the same for all the $N$ neurons in the group), the firing of the neurons is independent in the next time step (‘conditional independence’). Therefore, in the limit $N\to\infty$ the observed variable $m_{0}(t)$ approaches $\langle m_{0}(t)\rangle$ and we can drop the expectation signs.
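Conditional independence can be illustrated in a few lines: given a common firing probability for the next time step, the observed fraction $m_{0}$ is a scaled binomial random variable whose fluctuations around $\langle m_{0}\rangle$ shrink as $1/\sqrt{N}$. The probability value below is an arbitrary illustrative choice.

```python
import numpy as np

# Given the common past input, each neuron fires independently in the next
# time step with the same probability p_F ('conditional independence').
# The observed fraction m_0 is then binomial, and its standard deviation
# shrinks as 1/sqrt(N).  The value of p_F is an illustrative choice.
rng = np.random.default_rng(2)
p_F = 0.02
for N in (100, 10_000, 1_000_000):
    m0 = rng.binomial(N, p_F, size=10_000) / N     # observed fractions m_0
    print(N, "std of m_0:", m0.std(),
          " vs sqrt(p_F(1-p_F)/N):", np.sqrt(p_F * (1 - p_F) / N))
```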

For finite $N$, the variable $m_{0}(t)$ fluctuates around $\langle m_{0}(t)\rangle$. In order to determine these fluctuations, we assume that $N$ is large, but finite. For finite $N$ the population activity $A(t)$ can be written in the form of a ‘noisy’ integral equation

A(t)=\langle A(t)\rangle+\sigma(t)\,\xi(t)=\int_{-\infty}^{t}\rho^{\rm noise}(t|\hat{t})\,S_{I}^{\rm noise}(t|\hat{t})\,A(\hat{t})\,{\text{d}}\hat{t} \qquad (14.98)

where $\xi(t)$ is a Gaussian white noise, $A(\hat{t})$ is the observed activity in the past, $S_{I}^{\rm noise}(t|\hat{t})$ is the fraction of neurons that have survived up to time $t$ after a last spike at time $\hat{t}$, and $\rho^{\rm noise}(t|\hat{t})$ is the stochastic intensity of that group of neurons. Starting from discrete time steps, and then taking the continuum limit, it is possible to determine the amplitude of the fluctuations as $\sigma(t)=\sqrt{\langle A(t)\rangle/N}$. Eq. (14.98) can be used to evaluate the correlations $\langle A(t)\,A(t^{\prime})\rangle$ in coupled networks of finite size (122).
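In discrete bins of width $\Delta t$, white noise of amplitude $\sigma(t)=\sqrt{\langle A(t)\rangle/N}$ corresponds to adding an independent Gaussian of variance $\langle A(t)\rangle/(N\,\Delta t)$ to each bin. The following sketch illustrates this for a stationary, uncoupled population with an assumed constant $\langle A\rangle$; all numbers are illustrative.

```python
import numpy as np

# Sketch of the finite-size noise term in Eq. (14.98) for a stationary,
# uncoupled population with an assumed constant <A(t)>.  In bins of width dt,
# the Gaussian white noise xi(t) becomes an independent standard Gaussian
# scaled by 1/sqrt(dt).  All numbers are illustrative.
rng = np.random.default_rng(3)
N, dt, steps = 500, 1e-3, 100_000
A_mean = 20.0                                  # assumed stationary <A(t)> [Hz]
sigma = np.sqrt(A_mean / N)                    # sigma(t) = sqrt(<A(t)>/N)
A = A_mean + sigma * rng.standard_normal(steps) / np.sqrt(dt)
print("Var[A] sampled     :", A.var())
print("Var[A] = <A>/(N dt):", A_mean / (N * dt))
```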

Finite number of neurons (*)

For the development of the arguments, it is convenient to work in discrete time. We use the formalism developed in Section 14.1.5. We introduce the variable $m_{k}^{N}(t)=N\,m_{k}(t)$ to denote the number of neurons that have fired in the interval $[t-k\,\Delta t,\,t-(k-1)\,\Delta t]$ and have ‘survived’ up to time $t$ without firing again. With this definition, $m_{0}^{N}(t)=N\,A(t)\,\Delta t$ denotes the number of neurons that fire in the time step from $t$ to $t+\Delta t$.

We start with the normalization condition in the quasi-renewal equivalent of Eq. (14.8) and multiply both sides by the number of neurons,

N=\int_{-\infty}^{t}S_{I,A}(t|\hat{t})\,N\,A(\hat{t})\,{\text{d}}\hat{t}\,. \qquad (14.99)

This normalization must hold at any moment in time, therefore

m_{0}^{N}(t)=N-\sum_{k=1}^{K}m_{k}^{N}(t)\,, \qquad (14.100)

where $K$ is chosen large enough so that all neurons have fired at least once in the last $K$ time bins.

In order to determine the value of $m_{k}^{N}(t)$ for $k\geq 2$, we focus on the group of neurons that has fired at time $\hat{t}\approx t-(k-1)\,\Delta t$. The number of neurons that have ‘survived’ up to time $t-\Delta t$ without emitting a further spike is $m^{N}_{k-1}(t-\Delta t)$. In the time step starting at time $t$, all of these neurons have the same stochastic intensity $\rho(t|\hat{t})$ and fire independently with probability $p_{F}(t|\hat{t})=1-\exp[-\rho(t|\hat{t})\,\Delta t]$. In a finite-$N$ discrete-time update scheme, the actual number $n_{k}(t)$ of neurons that fire in time step $t$ is therefore drawn from the binomial distribution

P(n_{k})={[m_{k-1}^{N}(t-\Delta t)]!\over n_{k}!\,[m_{k-1}^{N}(t-\Delta t)-n_{k}]!}\,[p_{F}(t|\hat{t})]^{n_{k}}\,[1-p_{F}(t|\hat{t})]^{m^{N}_{k-1}(t-\Delta t)-n_{k}}\,. \qquad (14.101)

In the time step starting at time $t$, the number of neurons that have last fired at $\hat{t}$ is therefore (for $k\geq 2$)

m_{k}^{N}(t)=m_{k-1}^{N}(t-\Delta t)-n_{k}(t)\,. \qquad (14.102)

Because of the shifting time frame used for the index $k$, neurons that are at time $t-\Delta t$ in group $(k-1)$ will be at time $t$ in group $k$, except those that fired in the previous time step; this is what Eq. (14.102) expresses. Note that $m_{k}^{N}(t)$ is the actual number of neurons remaining in the group of neurons that fired their last spike at $\hat{t}$. Its expected value is

\langle m_{k}^{N}(t)\rangle=m_{k-1}^{N}(t-\Delta t)\,\exp\left[-\rho(t|t-k\,\Delta t)\,\Delta t\right]\qquad{\rm for}~k>1\,, \qquad (14.103)

as already discussed in Eq. (14.16). In the $N\to\infty$ limit, the actual value will approach the expectation value, but for finite $N$ the actual value fluctuates. The finite-$N$ update scheme in discrete time is given by the iteration of Eqs. (14.102) and (14.100).
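As a sketch, this iteration can be implemented in a few lines for an uncoupled population. The hazard below depends only on the age of the last spike (a hypothetical choice; in a coupled network $\rho$ would additionally depend on the past activity through the input), all parameter values are illustrative, and neurons whose last spike lies more than $K$ bins in the past are lumped into the last bin.

```python
import numpy as np

# Minimal sketch of the finite-N update scheme: iterate Eqs. (14.102) and
# (14.100) in discrete time.  m[k] is the number of neurons whose last spike
# lies k bins in the past.  The hazard rho(age) below is a hypothetical,
# purely age-dependent choice (uncoupled network); in a coupled network it
# would also depend on the past activity.  All numbers are illustrative.
rng = np.random.default_rng(4)
N, dt, K = 1000, 1e-3, 100                 # neurons, time step [s], history bins

def rho(age):
    """Hypothetical hazard [Hz]: 5 ms absolute refractoriness, then 50 Hz."""
    return 0.0 if age < 5e-3 else 50.0

m = np.zeros(K + 1, dtype=int)
m[K] = N                                   # start: all last spikes far in the past
A = []
for _ in range(2000):
    m_new = np.zeros_like(m)
    for k in range(1, K + 1):
        # group k-1 at t-dt becomes group k at t, minus the n_k neurons that
        # fire now; n_k is drawn from the binomial of Eq. (14.101)
        n_k = rng.binomial(m[k - 1], 1.0 - np.exp(-rho(k * dt) * dt))
        m_new[k] = m[k - 1] - n_k
    # neurons older than K bins are lumped into the last (absorbing) bin
    n_K = rng.binomial(m[K], 1.0 - np.exp(-rho((K + 1) * dt) * dt))
    m_new[K] += m[K] - n_K
    m_new[0] = N - m_new[1:].sum()         # normalization, Eq. (14.100)
    m = m_new
    A.append(m[0] / (N * dt))              # population activity A(t) [Hz]
print("mean rate:", np.mean(A), "Hz   Var[A]:", np.var(A))
```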

In order to arrive at an equation in continuous time, two further steps are needed. First, the binomial distribution in Eq. (14.101) is approximated by a Gaussian distribution with the same mean and variance. Second, we take the limit $\Delta t\to 0$ and keep track of terms of order $1/N$, but not $1/N^{2},1/N^{3},\dots$. The result is Eq. (14.98). Note that for an uncoupled network of $N$ neurons in the stationary case, fluctuations can also be calculated directly from the interval distribution, as discussed in Chapter 7. The advantage of the approach presented here is that it also works for coupled networks.