# 14.6 Heterogeneity and Finite Size

Neuronal populations in biology are neither completely homogeneous nor infinitely large. To treat heterogeneity in local neuronal parameters, the variability of a parameter from one neuron to the next is often replaced by slow noise in the parameters. For example, a population of integrate-and-fire neurons in which the reset value $u_{r}$ differs from neuron to neuron is replaced by a population in which the reset value is drawn randomly after each firing (and not only once at the beginning). Such a model of slow noise in the parameters has been discussed in the example of Section 14.3. The replacement of heterogeneity by slow noise neglects, however, correlations that would be present in a truly heterogeneous model. Replacing a heterogeneous model by a noisy version of a homogeneous model is therefore somewhat ad hoc, but common practice in the literature.
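As a concrete illustration, slow parameter noise can be simulated by redrawing the reset value after every spike. The following sketch (all parameter values and variable names are illustrative assumptions, not taken from the text) does this for a population of leaky integrate-and-fire neurons with constant input:

```python
import numpy as np

# Sketch of "slow noise in the parameters": instead of assigning each
# neuron a fixed heterogeneous reset value u_r, a fresh reset value is
# drawn after every spike. All parameter values are illustrative.
rng = np.random.default_rng(0)

N, steps, dt = 400, 2000, 1e-3          # neurons, time steps, step size [s]
tau_m, u_th, I_ext = 20e-3, 1.0, 1.2    # membrane time const., threshold, drive

u = rng.uniform(0.0, 0.5, size=N)       # initial membrane potentials
spike_count = np.zeros(N, dtype=int)

for _ in range(steps):
    u += dt / tau_m * (-u + I_ext)      # leaky integration toward I_ext
    fired = u >= u_th
    # slow parameter noise: redraw the reset value at every firing,
    # rather than fixing u_r once per neuron at initialization
    u[fired] = rng.uniform(0.0, 0.5, size=fired.sum())
    spike_count += fired

rate = spike_count.mean() / (steps * dt)   # mean firing rate [Hz]
```

With a fixed heterogeneous $u_{r}$ per neuron, each neuron would fire periodically at its own rate; redrawing the reset after every spike destroys exactly those neuron-specific correlations, which is the approximation discussed above.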

The second question is what happens if we relax the condition of a large network. For $N\to\infty$ the population activity shows no fluctuations, and this fact has been used in the derivation of the population equation. For systems of finite size, fluctuations are important because they limit the amount of information that can be transmitted by the population activity. For a population without internal coupling ($J_{0}=0$) that consists of neurons described by renewal theory, the fluctuations can be calculated directly from the interval distribution $P_{I}(t\,|\hat{t})$; cf. Chapter 9. For networks with recurrent connections, several attempts toward a description of the fluctuations have been made (491; 341; 299). Here we present a different approach.

If we consider a network with a finite number $N$ of neurons, the integral equation (14.5), which describes the evolution of the population activity $A(t)$ in terms of the input-dependent interval distribution $P_{I}(t|\hat{t})$, should be written more carefully with expectation signs,

 $\langle A(t)\rangle=\int_{-\infty}^{t}P_{I}(t|\hat{t})\,A(\hat{t})\,{\text{d}}\hat{t}\,,$ (14.97)

so as to emphasize that the left-hand side is the expected population activity at time $t$, given the observed population activity at earlier times $\hat{t}$. In other words, $N\,\langle A(t)\rangle\,\Delta t=N\,\langle m_{0}(t)\rangle$ is the expected number of spikes to occur in a short interval $\Delta t$. Here we have defined $m_{0}(t)$ as the fraction of neurons that fire in a time step $\Delta t$, just as in the previous section. Given the past input for $t^{\prime}<t$ (which is the same for all the $N$ neurons in the group), the neurons fire independently in the next time step (’conditional independence’). Therefore, in the limit $N\to\infty$ the observed variable $m_{0}(t)$ approaches $\langle m_{0}(t)\rangle$ and we can drop the expectation signs.

For finite $N$, the variable $m_{0}(t)$ fluctuates around $\langle m_{0}(t)\rangle$. In order to determine these fluctuations, we assume that $N$ is large, but finite. For finite $N$ the population activity $A(t)$ can be written in the form of a ‘noisy’ integral equation

 $A(t)=\langle A(t)\rangle+\sigma(t)\,\xi(t)=\int_{-\infty}^{t}\rho^{\rm noise}(t|\hat{t})\,S_{I}^{\rm noise}(t|\hat{t})\,A(\hat{t})\,{\text{d}}\hat{t}$ (14.98)

where $\xi(t)$ is Gaussian white noise, $A(\hat{t})$ is the observed activity in the past, $S_{I}^{\rm noise}(t|\hat{t})$ is the fraction of neurons that have survived up to time $t$ after a last spike at time $\hat{t}$, and $\rho^{\rm noise}(t|\hat{t})$ is the stochastic intensity of that group of neurons. Starting from discrete time steps, and then taking the continuum limit, it is possible to determine the amplitude of the fluctuations as $\sigma(t)=\sqrt{\langle A(t)\rangle/N}$. Eq. (14.98) can be used to evaluate the correlations $\langle A(t)\,A(t^{\prime})\rangle$ in coupled networks of finite size (122).
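For an uncoupled population firing at a constant rate, the scaling $\sigma(t)=\sqrt{\langle A(t)\rangle/N}$ can be checked numerically. Because $\xi(t)$ is white noise, the activity measured in bins of size $\Delta t$ should have standard deviation $\sigma/\sqrt{\Delta t}=\sqrt{\langle A\rangle/(N\,\Delta t)}$. The following sketch (parameter values are illustrative assumptions) draws binomial spike counts for $N$ independent neurons and compares the empirical standard deviation of the binned activity with this prediction:

```python
import numpy as np

# Numerical sanity check of the fluctuation amplitude sigma(t) = sqrt(<A>/N):
# N uncoupled neurons fire independently at constant rate nu, so the spike
# count per bin is binomial and the binned activity A(t) should fluctuate
# with standard deviation sqrt(<A>/(N*dt)). Values are illustrative.
rng = np.random.default_rng(1)

N, dt, nu, T = 1000, 1e-3, 20.0, 20000   # neurons, bin [s], rate [Hz], bins
p = nu * dt                              # firing probability per neuron per bin
counts = rng.binomial(N, p, size=T)      # population spike count per bin
A = counts / (N * dt)                    # empirical population activity [Hz]

sigma_emp = A.std()
sigma_theory = np.sqrt(nu / (N * dt))    # sqrt(<A>/N) / sqrt(dt): binned white noise
```

The small residual discrepancy (of relative order $p=\nu\,\Delta t$) comes from the $(1-p)$ factor in the binomial variance, which vanishes in the continuum limit $\Delta t\to 0$.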

# Finite number of neurons (*)

For the development of the arguments, it is convenient to work in discrete time. We use the formalism developed in Section 14.1.5. We introduce the variable $m_{k}^{N}(t)=N\,m_{k}(t)$ to denote the number of neurons that have fired in the interval $[t-k\,\Delta t,t-(k-1)\,\Delta t]$ and have ‘survived’ up to time $t$ without firing again. With this definition, $m_{0}^{N}(t)=N\,A(t)\,\Delta t$ denotes the number of neurons that fire in the time step from $t$ to $t+\Delta t$.

We start with the normalization condition in the quasi-renewal equivalent of Eq. (14.8) and multiply both sides by the number of neurons $N$,

 $N=\int_{-\infty}^{t}S_{I,A}(t|\hat{t})\,N\,A(\hat{t}){\text{d}}\hat{t}.$ (14.99)

This normalization must hold at any moment in time, therefore

 $m_{0}^{N}(t)=N-\sum_{k=1}^{K}m_{k}^{N}(t)\,,$ (14.100)

where $K$ is chosen large enough that all neurons have fired at least once in the last $K$ time bins.

In order to determine the value of $m_{k}^{N}(t)$ for $k\geq 2$, we focus on the group of neurons that has fired at time $\hat{t}\approx t-(k-1)\,\Delta t$. The number of neurons that have ‘survived’ up to time $t-\Delta t$ without emitting a further spike is $m^{N}_{k-1}(t-\Delta t)$. In the time step starting at time $t$, all of these neurons have the same stochastic intensity $\rho(t|\hat{t})$ and fire independently with probability $p_{F}(t|\hat{t})=1-\exp[-\rho(t|\hat{t})\,\Delta t]$. In a finite-$N$ discrete-time update scheme, the actual number $n_{k}(t)$ of neurons that fire in time step $t$ is therefore drawn from the binomial distribution

 $P(n_{k})={m_{k-1}^{N}(t-\Delta t)!\over n_{k}!\,\left[m_{k-1}^{N}(t-\Delta t)-n_{k}\right]!}\,\left[p_{F}(t|\hat{t})\right]^{n_{k}}\,\left[1-p_{F}(t|\hat{t})\right]^{m_{k-1}^{N}(t-\Delta t)-n_{k}}\,.$ (14.101)

After this time step, the number of neurons whose last spike occurred at $\hat{t}$ is therefore (for $k\geq 2$)

 $m_{k}^{N}(t)=m_{k-1}^{N}(t-\Delta t)-n_{k}(t)\,.$ (14.102)

Because of the shifting time frame used for the index $k$, neurons that at time $t-\Delta t$ belong to group $(k-1)$ belong at time $t$ to group $k$, except those that fired in the previous time step - and this is what Eq. (14.102) expresses. Note that $m_{k}^{N}(t)$ is the actual number of neurons remaining in the group of neurons that fired their last spike at $\hat{t}$. Its expected value is

 $\langle m_{k}^{N}(t)\rangle=m_{k-1}^{N}(t-\Delta t)\,\exp\left[-\rho(t|t-k\,\Delta t)\,\Delta t\right]\qquad{\rm for}~{}k>1,$ (14.103)

as already discussed in Eq. (14.16). In the $N\to\infty$ limit, the actual value will approach the expectation value, but for finite $N$ the actual value fluctuates. The finite-$N$ update scheme in discrete time is given by the iteration of Eqs. (14.102) and (14.100).
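The iteration of Eqs. (14.100)-(14.102) can be sketched in a few lines of code. The hazard function below (an absolute refractory period followed by a constant rate) and all parameter values are illustrative assumptions; for a coupled network, $\rho$ would additionally depend on the input and would have to be recomputed at every step:

```python
import numpy as np

# Finite-N update scheme in discrete time: m[k] holds the number of neurons
# whose last spike lies k bins in the past. Each step, every group is thinned
# by a binomial draw with firing probability p_F = 1 - exp(-rho*dt)
# (Eq. 14.101 / 14.102), and the freshly fired neurons form the new group 0
# via the normalization m_0 = N - sum_k m_k (Eq. 14.100).
# The hazard and all parameters are illustrative assumptions.
rng = np.random.default_rng(2)

N, dt, K = 500, 1e-3, 200            # neurons, time step [s], history bins

def rho(age):
    """Hazard at a given age (time since last spike) [1/s]."""
    return 0.0 if age < 5e-3 else 50.0   # 5 ms absolute refractory period

ages = np.arange(K + 1) * dt
p_F = 1.0 - np.exp(-np.array([rho(a) for a in ages]) * dt)

m = np.zeros(K + 1, dtype=int)       # m[k]: neurons whose last spike is k bins old
m[K] = N                             # start with all neurons 'old'
activity = []

for _ in range(1000):
    n_fired = rng.binomial(m, p_F)   # independent binomial thinning per group
    survivors = m - n_fired          # Eq. (14.102)
    # shift survivors one bin older; lump everything older than K bins together
    m[1:] = survivors[:-1]
    m[K] += survivors[K]
    m[0] = N - m[1:].sum()           # Eq. (14.100): equals n_fired.sum()
    activity.append(m[0] / (N * dt)) # population activity A(t) [Hz]
```

With these illustrative values the stationary activity settles near $1/(\Delta^{\rm abs}+1/\rho_{0})\approx 40$ Hz, and the normalization $\sum_{k}m_{k}^{N}=N$ holds exactly at every step, as it must.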

In order to arrive at an equation in continuous time, two further steps are needed. First, the binomial distribution in Eq. (14.101) is approximated by a Gaussian distribution with the same mean and variance. Second, we take the limit $\Delta t\to 0$ and keep track of terms of order $1/N$ but neglect terms of order $1/N^{2},1/N^{3},\dots$. The result is Eq. (14.98). Note that for an uncoupled network of $N$ neurons in the stationary case, fluctuations can also be calculated directly from the interval distribution, as discussed in Chapter 7. The advantage of the approach presented here is that it also works for coupled networks.
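The first of these two steps can be illustrated numerically: for group sizes and firing probabilities typical of large populations, samples from the binomial distribution of Eq. (14.101) and from a Gaussian with matched mean and variance are statistically close. The parameter values below are illustrative assumptions:

```python
import numpy as np

# Illustrates the first continuum-limit step: the binomial draw of
# Eq. (14.101) is approximated by a Gaussian with the same mean n*p
# and variance n*p*(1-p). Values are illustrative.
rng = np.random.default_rng(3)

n, p, S = 400, 0.05, 200000          # group size, firing prob., # of samples
binom = rng.binomial(n, p, size=S)
gauss = rng.normal(n * p, np.sqrt(n * p * (1 - p)), size=S)

mean_err = abs(binom.mean() - gauss.mean())   # should be close to zero
std_err = abs(binom.std() - gauss.std())      # should be close to zero
```

The approximation is accurate as long as the expected number of spikes per group, $n\,p_{F}$, is not too small; the remaining skewness of the binomial contributes only at higher order in $1/N$.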