14 The Integral-equation Approach

14.1 Population activity equations

The interval distribution $P_I(t|\hat{t})$, which has already been introduced in Chapter 7, plays a central role in the formulation of the population equation. Before we formulate the evolution of the population activity $A(t)$ in terms of this interval distribution $P_I(t|\hat{t})$, we specify some necessary assumptions. Two key assumptions are that the population is homogeneous and contains non-adaptive neurons, so that we can work in the framework of time-dependent renewal theory. These assumptions will eventually be relaxed in Section 14.1.4, where we extend the theory to the case of a population of adaptive neurons, and in Section 14.6, where we treat populations of finite size.

14.1.1 Assumptions of time-dependent renewal theory

In order to formulate a first version of an integral equation for a population of neurons, we start in the framework of time-dependent renewal theory (Chapter 7). To do so, we have to assume that the state of a neuron $i$ at time $t$ is completely described by (i) its last firing time $\hat{t}_i$; (ii) the input $I(t')$ it received for times $t' < t$; and (iii) the characteristics of potential noise sources, be it noise in the input, noise in the neuronal parameters, or noise in the output.

Are these assumptions overly restrictive or can we still place an interesting and rich set of neuron models in the class of models consistent with assumptions (i) - (iii) of time-dependent renewal theory?

The state of a single-variable integrate-and-fire neuron $i$, for example one of the nonlinear integrate-and-fire models of Chapter 5, is completely characterized by its momentary membrane potential $u_i(t)$. After firing at a time $\hat{t}_i$, the membrane potential is reset, so that all information that arrived before the firing is ‘forgotten’. Integration continues with $u = u_r$. Therefore, knowledge of the last firing time $\hat{t}_i$ and the input current $I(t')$ for $\hat{t}_i < t' \leq t$ is sufficient to predict the momentary state of neuron $i$ at time $t$ and therefore (for a deterministic model) its next firing time. If the integrate-and-fire model is furthermore subject to noise in the form of stochastic spike arrivals of known rate (but unknown spike arrival times), we will not be able to predict the neuron’s exact firing time. Instead we are interested in the probability density

$$P_I(t|\hat{t}_i) \tag{14.1}$$

that the next spike occurs around time $t$, given that the last spike was at time $\hat{t}_i$ and the neuron was subject to an input $I(t')$ for $t' < t$. In practice, it might be difficult to write down an exact analytical formula for $P_I(t|\hat{t}_i)$ for noise models corresponding to stochastic spike arrival or diffusive noise (cf. Chapter 8), but the quantity $P_I(t|\hat{t}_i)$ is still well defined. We call $P_I(t|\hat{t}_i)$ the generalized interval distribution (cf. Chapter 9).

For several other noise models, there exist explicit mathematical expressions for $P_I(t|\hat{t}_i)$. Consider, for example, integrate-and-fire models with slow noise in the parameters, defined as follows. After each firing time, the value $u_r$ of the reset or the value $\tau_m$ of the membrane time constant is drawn from a predefined distribution. Between two resets, the membrane potential evolves deterministically. As a consequence, the distribution $P_I(t|\hat{t}_i)$ of the next firing time can be predicted from the distribution of parameters and the deterministic solution of the threshold-crossing equations (183). Such a model of slow noise in the parameters can also be considered as an approximation to a heterogeneous population of neurons where different neurons in the population have slightly different, but fixed, parameters.
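As a minimal sketch of this slow-noise model, consider a leaky integrate-and-fire neuron with constant suprathreshold drive, for which the deterministic time-to-threshold has a closed form; drawing a fresh reset value after every spike then yields the interval distribution directly. The drive $RI = 1.2$, threshold $\vartheta = 1$, and the Gaussian spread of $u_r$ below are illustrative assumptions, not values from the text.

```python
import numpy as np

# "Slow noise in the parameters": after each spike, the reset value u_r is
# drawn anew; between resets the dynamics are deterministic, so the next
# firing time follows in closed form (illustrative parameter values).
def firing_time(u_r, tau_m=10.0, ri=1.2, theta=1.0):
    """Deterministic time-to-threshold from reset u_r under constant
    suprathreshold drive: u(t) = ri + (u_r - ri) * exp(-t / tau_m)."""
    return tau_m * np.log((ri - u_r) / (ri - theta))

rng = np.random.default_rng(0)
u_r = rng.normal(loc=0.0, scale=0.05, size=100_000)   # sampled reset values
isi = firing_time(u_r)

# the histogram of deterministic firing times approximates P_I(t | t_hat)
density, edges = np.histogram(isi, bins=200, density=True)
print(f"mean interval: {isi.mean():.2f} ms; "
      f"density integrates to {(density * np.diff(edges)).sum():.3f}")
```

The same recipe applies when $\tau_m$ rather than $u_r$ is drawn after each spike; only the closed-form expression for the firing time changes.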

Another tractable noise model is the ‘escape noise’ model, which is also the basis for the Generalized Linear Models of spiking neurons already discussed in Chapter 9. The short-term memory approximation SRM$_0$ of the Spike Response Model with escape noise (cf. Eq. (9.19) in Ch. 9) is a prime example of a model that fits into the framework of renewal theory specified by assumptions (i)-(iii).

The escape noise model can be used in combination with any linear or nonlinear integrate-and-fire model. For a suitable choice of the escape function $f$, it also provides an excellent approximation to diffusive noise in the input; cf. Section 9.4 in Ch. 9. An example of how to formulate a nonlinear integrate-and-fire model with escape noise is given below.

The major limitation of time-dependent renewal theory is that, for the moment, we need to exclude adaptation effects, because the momentary state of adaptive neurons depends not only on the last firing time (and the input in the past) but also on the firing times of earlier spikes. Section 14.1.4 shows that adaptation can be included by extending the renewal theory to a quasi-renewal theory. The treatment of adaptive neurons is important because most cortical neurons exhibit adaptation (Ch. 6).

Example: Escape noise in integrate-and-fire models

Consider an arbitrary (linear or nonlinear) integrate-and-fire model with refractoriness. The neuron has fired its last spike at $\hat{t}$ and enters thereafter an absolute refractory period of duration $\Delta^{\rm abs}$. Integration of the differential equation of the membrane potential $u$,

$$\tau \frac{\mathrm{d}u}{\mathrm{d}t} = f(u) + R(u)\, I(t), \tag{14.2}$$

restarts at time $\hat{t} + \Delta^{\rm abs}$ with initial condition $u_r$. The input current $I(t)$ can have an arbitrary temporal structure.

In the absence of noise, the neuron would emit its next spike at the moment when the membrane potential reaches the numerical threshold $\theta^{\rm reset}$. In the presence of escape noise, the neuron fires in each short time interval $\Delta t \to 0$ a spike with probability $P_F(t; t+\Delta t) = \rho(t)\,\Delta t$, where

$$\rho(t) = f\big(u(t) - \theta^{\rm reset}, \dot{u}(t)\big) \tag{14.3}$$

is the escape rate, which typically depends on the distance between the membrane potential and the threshold, and potentially also on the derivative $\dot{u} = \mathrm{d}u/\mathrm{d}t$ of the membrane potential; cf. Section 9.4 in Ch. 9.

Knowledge of the input $I(t')$ for $t' > \hat{t} + \Delta^{\rm abs}$ is sufficient to calculate the membrane potential $u(t)$ by integration of Eq. (14.2). The time course of $u(t)$ is inserted into Eq. (14.3) to get the instantaneous rate or firing ‘hazard’ $\rho(t)$ for all $t > \hat{t} + \Delta^{\rm abs}$. By definition, the instantaneous rate vanishes during the absolute refractory time: $\rho(t) = 0$ for $\hat{t} < t < \hat{t} + \Delta^{\rm abs}$. With these notations, we can use the results of Chapter 7 to arrive at the generalized interval distribution

$$P_I(t|\hat{t}) = \rho(t)\, \exp\left[-\int_{\hat{t}}^{t} \rho(t')\, \mathrm{d}t'\right]. \tag{14.4}$$

Note that, in order to keep the notation light, we simply write $\rho(t)$ instead of the more explicit $\rho(t|\hat{t}, I(t'))$, which would emphasize that the hazard depends on the last firing time $\hat{t}$ and the input $I(t')$ for $\hat{t} < t' < t$; for the derivation of Eq. (14.4), see also Eq. (7.28) in Ch. 7.
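To make Eqs. (14.3) and (14.4) concrete, the hazard and the resulting interval distribution can be evaluated numerically. The following sketch assumes a leaky integrate-and-fire neuron with exponential escape rate and constant subthreshold drive; all parameter values are illustrative choices, not values from the text.

```python
import numpy as np

# Numerical evaluation of the hazard (cf. Eq. 14.3) and the generalized
# interval distribution (Eq. 14.4); illustrative parameter values.
dt = 0.01
t = np.arange(0.0, 200.0, dt)          # time since the last spike, t_hat = 0
tau_m, h0, beta, theta = 10.0, 0.8, 5.0, 1.0
delta_abs = 2.0                        # absolute refractory period (ms)

u = h0 * (1.0 - np.exp(-t / tau_m))    # potential after reset u_r = 0
rho = np.exp(beta * (u - theta))       # exponential escape rate (1/ms)
rho[t < delta_abs] = 0.0               # hazard vanishes during refractoriness

# Eq. (14.4): P_I(t|t_hat) = rho(t) * exp(-integral of rho from t_hat to t)
P = rho * np.exp(-np.cumsum(rho) * dt)

print(f"normalization of P_I: {P.sum() * dt:.3f}")
print(f"mean interspike interval: {(t * P).sum() * dt:.1f} ms")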

14.1.2 Integral equations for non-adaptive neurons

The integral equation (182; 183; 552) for activity dynamics within time-dependent renewal theory states that the activity at time $t$ depends on the fraction of active neurons at earlier times $\hat{t}$ times the probability of observing a spike at $t$ given a spike at $\hat{t}$:

$$A(t) = \int_{-\infty}^{t} P_I(t|\hat{t})\, A(\hat{t})\, \mathrm{d}\hat{t}. \tag{14.5}$$

Equation (14.5) is easy to understand. The kernel $P_I(t|\hat{t})$ is the probability density that the next spike of a neuron which is under the influence of an input $I$ occurs at time $t$, given that its last spike was at $\hat{t}$. The number of neurons which have fired at $\hat{t}$ is proportional to $A(\hat{t})$, and the integral runs over all times in the past. The interval distribution $P_I(t|\hat{t})$ depends on the total input (both the external input and the synaptic input from other neurons in the population). For an unconnected population, $I(t)$ corresponds to the external drive. For connected neurons, $I(t)$ is the sum of the external input and the recurrent input from other neurons in the population; the case of connected neurons will be further analyzed in Section 14.2.

We emphasize that $A(t)$ on the left-hand side of Eq. (14.5) is the expected activity at time $t$, while $A(\hat{t})$ on the right-hand side is the observed activity in the past. In the limit of the number $N$ of neurons in the population going to infinity, the fraction of neurons that actually fire in a short time $\Delta t$ is the same as its expected value $A(t)\,\Delta t$. Therefore, Eq. (14.5) becomes exact in the limit $N \to \infty$, so that we can use the same symbol for $A(t)$ on both sides of the equation. Finite-size effects are discussed in Section 14.6.

We conclude this section with some final remarks on the form of Eq. (14.5). First, we observe that we can multiply the activity variable $A$ on both sides of the equation by a constant $c$ and introduce a new variable $A' = c\,A$. If $A(t)$ solves the equation, then $A'(t)$ will solve it as well. Thus, Eq. (14.5) cannot predict the correct normalization of the activity $A$. This reflects the fact that, instead of defining the activity as a spike count divided by the number $N$ of neurons, we could have chosen to work directly with the spike count per unit of time or any other normalization. The proper normalization consistent with our definition of the population activity $A(t)$ is derived below in Section 14.1.3.

Second, even though Eq. (14.5) is linear in the variable $A$, it is in fact a highly nonlinear equation in the drive, because the kernel $P_I(t|\hat{t})$ depends nonlinearly on the input $I(t)$. Nevertheless, numerical implementations of the integral equation lead to rapid schemes that predict the population activity in response to changing input, even in the highly nonlinear regime where strong input transiently synchronizes neurons in the population (Fig. 14.1).

Example: Leaky integrate-and-fire neurons with escape noise

Using appropriate numerical schemes, the integral equation can be used to predict the activity in a homogeneous population of integrate-and-fire neurons subject to a time-dependent input. In each time step, the fraction of neurons that fire is calculated as $A(t)\,\Delta t$ using Eq. (14.5). The value of $A(t)$ then becomes part of the observed history, and we can evaluate the fraction of neurons firing in the next time step $t + \Delta t$. For the implementation, the integral over the past can be truncated at some suitable lower bound (see Section 14.1.5). The numerical integration of Eq. (14.5) predicts well the activity pattern observed in a simulation of 4000 independent leaky integrate-and-fire neurons with exponential escape noise (Fig. 14.1).
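The simulation side of such a comparison can be sketched directly: a population of independent leaky integrate-and-fire neurons with exponential escape noise, driven by a step input. This is a hypothetical minimal setup (the parameter values are illustrative, not those of Fig. 14.1); the empirical activity is the fraction of neurons firing per time step.

```python
import numpy as np

# Monte Carlo counterpart of the integral-equation prediction: independent
# leaky integrate-and-fire neurons with exponential escape noise and a step
# input (illustrative parameters).
rng = np.random.default_rng(1)
N, dt, T = 4000, 0.1, 300.0            # population size; step (ms); duration
tau_m, beta, theta = 10.0, 5.0, 1.0
steps = int(T / dt)

u = np.zeros(N)
A = np.zeros(steps)                    # empirical population activity (1/ms)
for s in range(steps):
    t = s * dt
    I = 0.5 if t < 150.0 else 1.5      # step input at t = 150 ms
    u += dt / tau_m * (-u + I)
    rho = np.exp(beta * (u - theta))   # escape rate, cf. Eq. (14.3)
    fired = rng.random(N) < 1.0 - np.exp(-rho * dt)
    u[fired] = 0.0                     # reset after each spike
    A[s] = fired.sum() / (N * dt)

print(f"mean activity before / after the step: "
      f"{A[:1500].mean():.4f} / {A[2000:].mean():.4f} kHz")
```

Plotting `A` against time should show the transient synchronization and ringing after the step described in the text, on top of finite-size fluctuations of order $1/\sqrt{N}$.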

Fig. 14.1: Population activity $A(t)$ (top) in a population of leaky integrate-and-fire neurons with escape noise in response to a time-dependent input (bottom). After a strong step, neurons in the population synchronize, which leads to a transient oscillation. The numerical solution of the integral equation is compared with a simulation of 4000 neurons.

14.1.3 Normalization and derivation of the integral equation

To derive Eq. (14.5), we recall that $P_I(t|\hat{t})$ is the probability density that a neuron fires at time $t$, given its last spike at $\hat{t}$ and an input $I(t')$ for $t' < t$. Integration of the probability density over time, $\int_{\hat{t}}^{t} P_I(s|\hat{t})\, \mathrm{d}s$, gives the probability that a neuron which has fired at $\hat{t}$ fires its next spike at some arbitrary time between $\hat{t}$ and $t$. Just as in Chapter 7, we can define a survival probability,

$$S_I(t|\hat{t}) = 1 - \int_{\hat{t}}^{t} P_I(s|\hat{t})\, \mathrm{d}s, \tag{14.6}$$

i.e., the probability that a neuron which has fired its last spike at $\hat{t}$ “survives” without firing up to time $t$.

We now return to the homogeneous population of neurons in the limit $N \to \infty$ and assume that the firing of different neurons at time $t$ is independent, given that we know the history (the input and last spike) of each neuron. The technical term for such a situation is ‘conditional independence’. We consider the network state at time $t$ and label all neurons by their last firing time $\hat{t}$. The proportion of neurons at time $t$ which have fired their last spike between $t_0$ and $t_1 < t$ (and have not fired since) is expected to be

$$\left\langle \frac{\text{number of neurons at } t \text{ with last spike in } [t_0, t_1]}{\text{total number of neurons}} \right\rangle = \int_{t_0}^{t_1} S_I(t|\hat{t})\, A(\hat{t})\, \mathrm{d}\hat{t}. \tag{14.7}$$

For an interpretation of the integral on the right-hand side of Eq. (14.7), we recall that $A(\hat{t})\,\Delta\hat{t}$ is the fraction of neurons that have fired in the interval $[\hat{t}, \hat{t} + \Delta\hat{t}]$. Of these, a fraction $S_I(t|\hat{t})$ is expected to survive from $\hat{t}$ to $t$ without firing. Thus, among the neurons that we observe at time $t$, the proportion that have fired their last spike between $t_0$ and $t_1$ is expected to be $\int_{t_0}^{t_1} S_I(t|\hat{t})\, A(\hat{t})\, \mathrm{d}\hat{t}$; cf. Fig. 14.2.

Fig. 14.2: Derivation of the population equation in discretized time. Of the $N A(\hat{t})\,\Delta\hat{t}$ neurons that have fired between $\hat{t}$ and $\hat{t} + \Delta\hat{t}$, a fraction $S_I(t|\hat{t})$ is expected to “survive” up to time $t$ without firing another spike. Thus (with $t_1 = t_0 + k^{\max}\Delta\hat{t}$) the Riemann sum $\sum_{k=0}^{k^{\max}} S_I(t|t_0 + k\Delta\hat{t})\, A(t_0 + k\Delta\hat{t})\,\Delta\hat{t} \approx \int_{t_0}^{t_1} S_I(t|\hat{t})\, A(\hat{t})\, \mathrm{d}\hat{t}$ gives the expected fraction of neurons at time $t$ that have fired their last spike between $t_0$ and $t_1$.

Finally, we use the fact that the total number of neurons remains constant: all neurons have fired at some point in the past. (Neurons which have never fired before are assigned a formal firing time $\hat{t} = -\infty$.) Thus, if we extend the lower bound $t_0$ of the integral on the right-hand side of Eq. (14.7) to $-\infty$ and the upper bound to $t$, the left-hand side becomes equal to one,

$$1 = \int_{-\infty}^{t} S_I(t|\hat{t})\, A(\hat{t})\, \mathrm{d}\hat{t}, \tag{14.8}$$

because all $N$ neurons have fired their last spike in the interval $(-\infty, t]$. Since the number of neurons remains constant, the normalization of Eq. (14.8) must hold at arbitrary times $t$. Eq. (14.8) is an implicit equation for the population activity $A$ and the starting point for the discussions in this and the following chapters.

Since Eq. (14.8) is rather abstract, we will put it into a form that is easier to grasp intuitively. To do so, we take the derivative of Eq. (14.8) with respect to $t$. We find

$$0 = S_I(t|t)\, A(t) + \int_{-\infty}^{t} \frac{\partial S_I(t|\hat{t})}{\partial t}\, A(\hat{t})\, \mathrm{d}\hat{t}. \tag{14.9}$$

We now use $P_I(t|\hat{t}) = -\frac{\partial}{\partial t} S_I(t|\hat{t})$ and $S_I(t|t) = 1$, both of which are direct consequences of Eq. (14.6). This yields the activity dynamics of Eq. (14.5).

We repeat an important remark concerning the normalization of the activity. Since Eq. (14.5) is obtained as the derivative of Eq. (14.8), the integration constant on the left-hand side of Eq. (14.8) is lost. This is most easily seen for constant activity $A(t) = A_0$. In this case the variable $A_0$ can be eliminated on both sides of Eq. (14.5), so that it yields the trivial statement that the interval distribution is normalized to unity. Equation (14.5) is therefore invariant under a rescaling of the activity $A_0 \to c A_0$ with any constant $c$, as mentioned earlier. To get the correct normalization of the activity we have to return to Eq. (14.8).
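The normalization (14.8) can be checked numerically in the stationary state, where the correct activity is the inverse mean interspike interval. The sketch below assumes a Poisson neuron with absolute refractoriness (constant hazard $\rho_0$ after a dead time); the parameter values are illustrative. A rescaled activity $c A_0$ still solves Eq. (14.5), but visibly violates Eq. (14.8).

```python
import numpy as np

# Stationary check of the normalization (14.8) for a Poisson neuron with
# absolute refractoriness: constant hazard rho0 after dead time delta_abs.
dt = 0.01
s = np.arange(0.0, 100.0, dt)          # age s = t - t_hat (ms)
rho0, delta_abs = 0.2, 4.0             # hazard (1/ms) and dead time (ms)

rho = np.where(s >= delta_abs, rho0, 0.0)
S = np.exp(-np.cumsum(rho) * dt)       # survivor function S_I(t | t - s)

A0 = 1.0 / (delta_abs + 1.0 / rho0)    # stationary activity = 1 / mean ISI
lhs = (S * A0).sum() * dt              # discretization of Eq. (14.8)
print(f"Eq. (14.8) with the correct A0: {lhs:.3f}")
print(f"Eq. (14.8) with rescaled 2*A0:  {2.0 * lhs:.3f}")
```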

Example: Absolute refractoriness and the Wilson-Cowan integral equation

Let us consider a population of Poisson neurons with an absolute refractory period $\Delta^{\rm abs}$. A neuron that is not refractory fires stochastically with a rate $f[h(t)]$, where $h(t) = \int_0^\infty \kappa(s)\, I(t-s)\, \mathrm{d}s$ is the total input potential caused by an external driving current or synaptic input from other neurons. After firing, a neuron is inactive during a time $\Delta^{\rm abs}$. The population activity of a homogeneous group of Poisson neurons with absolute refractoriness is (552)

$$A(t) = f[h(t)] \left\{ 1 - \int_{t-\Delta^{\rm abs}}^{t} A(t')\, \mathrm{d}t' \right\}. \tag{14.10}$$

Eq. (14.10) represents a special case of Eqs. (14.5) and (14.8) (see Exercises).

The Wilson-Cowan integral equation (14.10) has a simple interpretation. Neurons stimulated by a total postsynaptic potential $h(t)$ fire with an instantaneous rate $f[h(t)]$. If there were no refractoriness, we would expect a population activity $A(t) = f[h(t)]$. However, not all neurons may fire, since some of the neurons are in the absolute refractory period. The fraction of neurons that participate in firing is $1 - \int_{t-\Delta^{\rm abs}}^{t} A(t')\, \mathrm{d}t'$, which explains the factor in curly brackets.

The function $f$ in Eq. (14.10) was introduced here as the stochastic intensity of an inhomogeneous Poisson process describing neurons in a homogeneous population. In this interpretation, Eq. (14.10) is the exact equation for the population activity of neurons with absolute refractoriness in the limit $N \to \infty$. In their original paper, Wilson and Cowan motivated the function $f$ by a distribution of threshold values in an inhomogeneous population. In this case, the population equation (14.10) is only an approximation since correlations are neglected (552).

For a constant input potential, $h(t) = h_0 = I_0 \int_0^\infty \kappa(s)\, \mathrm{d}s$, the population activity has a stationary solution (see Fig. 14.3)

$$A_0 = \frac{f(h_0)}{1 + \Delta^{\rm abs}\, f(h_0)} = g(h_0). \tag{14.11}$$

For the last equality sign we have used the definition of the gain function of Poisson neurons with absolute refractoriness in Eq. (7.50). Equation (14.11) tells us that in a homogeneous population of neurons the population activity in a stationary state is equal to the firing rate of individual neurons, as expected from the discussion in Ch. 12.
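As a quick numerical illustration of Eq. (14.11), the gain function $g(h)$ can be tabulated for the exponential escape rate given in the caption of Fig. 14.3; the specific values printed are only illustrative. Note how the stationary activity saturates at $1/\Delta^{\rm abs}$ for strong drive, because at most one spike fits into each refractory period.

```python
import numpy as np

def g(h, delta_abs=4.0, tau0=1.0, beta=2.0, theta=1.0):
    """Stationary gain function g(h) of Eq. (14.11) for Poisson neurons with
    absolute refractoriness; f is the exponential escape rate of Fig. 14.3."""
    f = np.exp(beta * (h - theta)) / tau0    # escape rate f(h), in 1/ms
    return f / (1.0 + delta_abs * f)         # A0 = g(h)

for h in (0.5, 1.0, 2.0, 4.0):
    print(f"h = {h:3}: A0 = {1000.0 * g(h):6.1f} Hz")
# for strong drive, A0 saturates at 1/delta_abs = 250 Hz
```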

Sometimes the stationary solution from Eq. (14.11) is also used in the case of time-dependent input. However, the expression $A_0(t) = g(h(t))$ [with $g$ from Eq. (14.11)] does not correctly reflect the transients that are seen in the solution of the integral equation (14.10); cf. Fig. 14.3B and Chapter 15.

Fig. 14.3: Wilson-Cowan model. A. Stationary activity calculated from the gain function $A_0 = g(h)$ of Poisson neurons with absolute refractoriness of $\Delta^{\rm abs} = 4$ ms; cf. Eq. (14.11). B. The response of the population activity to an abrupt change of the input current shows an oscillatory behavior (solid line) in the approach to the new stationary state. The oscillations are absent in a quasi-stationary approximation $A_0(t) = g(h(t))$, where the stationary solution from Eq. (14.11) is applied to explain time-dependent behavior (dotted line). The input potential vanishes for $t < t_0 = 100$ ms and is $h(t) = 1 - \exp[-(t - t_0)/\tau_m]$ for $t > t_0$ with $\tau_m = 4$ ms. Exponential escape rate $f(h) = \tau_0^{-1} \exp[\beta(h - \vartheta)]$ with $\vartheta = 1$, $\tau_0 = 1$ ms, and $\beta = 2$. No lateral coupling ($w_0 = 0$).

14.1.4 Integral equation for adaptive neurons

The integral equations presented so far are restricted to non-adapting neurons, because we assumed that the knowledge of the input and of the last firing time was sufficient to characterize the state of a neuron. For adapting neurons, however, the whole spiking history can shape the interspike interval distribution. In this section we extend the integral equation (14.5) to the case of adapting neurons.

For an isolated adaptive neuron in the presence of noise, the probability density of firing around time $t$ will, in general, depend on its past firing times $\hat{t}_n < \hat{t}_{n-1} < \dots < \hat{t}_2 < \hat{t}_1 = \hat{t} < t$, where $\hat{t} = \hat{t}_1$ denotes the most recent spike time. The central insight is that, in a population of neurons, we can approximate the past firing times by the population activity $A(\hat{t})$ in the past. Indeed, the probability that one of the neurons has a past spike time $\hat{t}_k$ in an interval $[t', t' + \Delta t]$ is $A(t')\,\Delta t$.

Just as in time-dependent renewal theory, we will treat the most recent spike $\hat{t}_1 = \hat{t}$ of each neuron explicitly. For all spikes $\hat{t}_k$ with $k \geq 2$ we approximate the actual spike-train history of an individual neuron by the average history as summarized in the population activity $A(t)$. Let $P_{I,A}(t|\hat{t})$ be the probability of observing a spike at time $t$, given the last spike at time $\hat{t}$, the input current, and the activity history $A(t')$ until time $t$. Then we can rewrite Eq. (14.5) in a form suitable for adaptive neurons:

$$A(t) = \int_{-\infty}^{t} P_{I,A}(t|\hat{t})\, A(\hat{t})\, \mathrm{d}\hat{t}. \tag{14.12}$$

In Section 14.5 we explain the methods to describe populations of adaptive neurons in more detail. Appropriate numerical schemes (Section 14.1.5) make it possible to describe the response of a population of adaptive neurons to arbitrary time-dependent input; see Fig. 14.4.

Fig. 14.4: Population of adaptive neurons. The solution (black solid line, middle) of the integral equation for adaptive neurons (14.12) is compared with the population activity during a simulation of 25000 model neurons (thick gray line). Model neurons are generalized linear models of spiking neurons with parameters from cortical cells; cf. Chapter 11. Top: error (theory minus simulations) in the prediction of the population activity. Bottom: driving stimulus $h(t) = \int_0^\infty \exp(-s/\tau_m)\, I(t-s)\, \mathrm{d}s$ in arbitrary units. The threshold in the limit of deterministic neurons is located at $h = 0$, so that $h < 0$ indicates the subthreshold regime. Modified from Naud and Gerstner (358).

14.1.5 Numerical methods for integral equations (*)

Integral equations can be solved numerically. However, the type of equation we face in Eq. (14.5) or Eq. (14.12) cannot be cast in the typical Volterra or Fredholm forms for which efficient numerical methods have been extensively studied (301; 31). In the following, we describe a method that can be used to solve Eq. (14.5), Eq. (14.12) or Eq. (14.8), but we take as an example the quasi-renewal equivalent of Eq. (14.8):

$$1 = \int_{-\infty}^{t} S_{I,A}(t|\hat{t})\, A(\hat{t})\, \mathrm{d}\hat{t}. \tag{14.13}$$

To derive a numerical algorithm, we proceed in three steps: first, we truncate the integral in Eq. (14.13) at some lower bound; second, we discretize this integral; and third, we discretize the integral defining the survivor function. Here we use the rectangle rule for the discretization of every integral. Note that adaptive quadratures or Monte Carlo methods could offer more efficient alternatives.

The first step is the truncation. The probability of surviving a very long time without firing is essentially zero. Let $\tau_c$ be a period of time such that the survivor function $S_{I,A}(t|t - \tau_c)$ is very small. Then we can truncate the infinite integral in Eq. (14.13):

$$1 = \int_{t-\tau_c}^{t} S_{I,A}(t|\hat{t})\, A(\hat{t})\, \mathrm{d}\hat{t}. \tag{14.14}$$

Next, we discretize the integral into small bins of size $\Delta t$. Let $\mathbf{m}^{(t)}$ be the vector made of the fractions of neurons at $t$ whose last spike lies between $\hat{t}$ and $\hat{t} + \Delta t$. This definition means that the $k$th element is $m_k^{(t)} = S(t|t - k\,\Delta t)\, A(t - k\,\Delta t)\, \Delta t$. The element with $k = 0$ is then the momentary population activity in the time step starting at time $t$, $m_0^{(t)} = A_t\, \Delta t$, since $S(t|t) = 1$. Therefore we arrive at

$$A_t\, \Delta t = 1 - \sum_{k=1}^{K} m_k^{(t)}. \tag{14.15}$$

Finally, we discretize the integral defining the survivor function to obtain $m_k^{(t)}$ as a function of $m_{k-1}^{(t-\Delta t)}$. Because $S(t|\hat{t}) = \exp\left[-\int_{\hat{t}}^{t} \rho(t'|\hat{t})\, \mathrm{d}t'\right] = \exp\left[-\int_{t-\Delta t}^{t} \rho(t'|\hat{t})\, \mathrm{d}t'\right] S(t - \Delta t|\hat{t})$, we find for sufficiently small time steps

$$m_k^{(t)} = m_{k-1}^{(t-\Delta t)} \exp\left[-\rho(t|t - k\,\Delta t)\,\Delta t\right] \quad \text{for } k \geq 1. \tag{14.16}$$

Note that we work in a moving coordinate frame, because the index $k$ always counts time steps backward starting from the present time $t$. Therefore $m_k^{(t)}$ and $m_{k-1}^{(t-\Delta t)}$ refer to the same group of neurons, i.e., those that have fired their last spike around time $\hat{t} = t - k\,\Delta t = (t - \Delta t) - (k-1)\,\Delta t$. Eq. (14.16) indicates that the number of neurons that survive from $\hat{t}$ up to $t$ decreases in each time step by an exponential factor. In Eq. (14.16), $\rho(t|t')$ can be either the renewal conditional intensity of a non-adaptive neuron model (e.g., Eq. (14.3)) or the effective ‘quasi-renewal’ intensity of adaptive neurons (see Section 14.5). Together, Eqs. (14.15) and (14.16) can be used to compute $A_t$ iteratively.
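Putting Eqs. (14.15) and (14.16) together gives a compact iterative scheme. The sketch below applies it to the renewal case of a leaky integrate-and-fire population with exponential escape noise; the membrane potential of each "age group" is integrated alongside the survival fractions. The model choice and all parameter values are illustrative, and the population is initialized with a synchronous spike, so the first steps show a strong transient.

```python
import numpy as np

# Iterative solution of Eqs. (14.15)-(14.16), renewal case: leaky
# integrate-and-fire population with exponential escape noise (illustrative).
dt = 0.1                               # time step (ms)
K = 3000                               # truncation: tau_c = K * dt = 300 ms
tau_m, beta, theta, u_r = 10.0, 5.0, 1.0, 0.0
steps = int(300.0 / dt)

m = np.zeros(K)                        # m[k-1] = m_k: survivors, k steps back
u = np.zeros(K)                        # membrane potential of each age group
A = np.zeros(steps)                    # population activity A_t (in 1/ms)
A_prev = 0.0

for i in range(steps):
    t = i * dt
    I = 0.5 if t < 150.0 else 1.5      # step input at t = 150 ms
    # age all groups by one step; last step's spikers form the youngest group
    m = np.roll(m, 1); m[0] = A_prev * dt
    u = np.roll(u, 1); u[0] = u_r
    u += dt / tau_m * (-u + I)         # integrate the membrane potentials
    rho = np.exp(beta * (u - theta))   # hazard of each group, cf. Eq. (14.3)
    m *= np.exp(-rho * dt)             # survival update, Eq. (14.16)
    A[i] = (1.0 - m.sum()) / dt        # Eq. (14.15)
    A_prev = A[i]

print(f"activity before / after the input step: "
      f"{A[1000:1500].mean():.4f} / {A[-500:].mean():.4f} kHz")
```

The `np.roll` bookkeeping implements the moving coordinate frame of the text: the oldest bin (age $\tau_c$), whose survival fraction is negligible by construction, is dropped and its slot reused for the newly fired group. For an adaptive population, `rho` would instead be the quasi-renewal intensity, which additionally depends on the stored activity history.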