Poisson processes do not account for neuronal refractoriness and therefore cannot describe realistic interspike interval distributions. In order to account for neuronal refractoriness in the stochastic description of spike trains, we need to switch from a Poisson process to a renewal process. Renewal processes keep a memory of the last event (the last firing time $\hat{t}$), but not of any earlier events. More precisely, in a renewal process spikes are generated with a stochastic intensity (or ‘hazard’)

$\rho(t|\hat{t})=\rho_{0}(t-\hat{t})$ | (7.22) |

which depends on the time since the last spike. One of the simplest examples of a renewal system is a Poisson process with absolute refractoriness, which we have encountered already in the previous section; see Eq. (7.16).

Renewal processes are a class of stochastic point processes that describe a sequence of events in time (106; 386). Renewal systems in the narrow sense (stationary renewal processes) presuppose stationary input and are defined by the fact that the state of the system, and hence the probability of generating the next event, depends only on the ‘age’ $t-\hat{t}$ of the system, i.e., the time that has passed since the last event (last spike). The central assumption of renewal theory is that the state does not depend on earlier events (i.e., earlier spikes of the same neuron). The aim of renewal theory is to predict the probability of the next event given the age of the system. In other words, renewal theory allows us to calculate the interval distribution

$P_{0}(s)=P(t^{(f)}+s|t^{(f)})$ | (7.23) |

i.e., the probability density that the next event occurs at time $t^{(f)}+s$ given that the last event was observed at time $t^{(f)}$.

While for a Poisson process all events occur independently, in a renewal process generation of events (spikes) depends on the previous event, so that events are not independent. However, since the dependence is restricted to the most recent event, intervals between subsequent events are independent. Therefore, an efficient way of generating a spike train of a renewal system is to draw interspike intervals from the distribution $P_{0}(s)$.
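This sampling strategy can be sketched in a few lines; the gamma interval distribution used below is a purely illustrative choice for $P_0(s)$, as are the parameter values:

```python
import numpy as np

def renewal_spike_train(sample_isi, t_max, rng):
    """Generate a renewal spike train on [0, t_max] by drawing
    independent interspike intervals s ~ P_0(s)."""
    spikes, t = [], 0.0
    while True:
        t += sample_isi(rng)      # next interval, independent of all history
        if t > t_max:
            break
        spikes.append(t)
    return np.array(spikes)

rng = np.random.default_rng(0)
# Illustrative P_0: gamma-distributed intervals with mean 20 ms
spikes = renewal_spike_train(lambda r: r.gamma(2.0, 10.0), t_max=1e5, rng=rng)
rate = len(spikes) / 1e5          # empirical rate, close to 1/(mean interval)
```

Because intervals are drawn independently, the construction automatically satisfies the defining property of a renewal process: no memory beyond the last spike.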

Example: Light bulb failure as a renewal system

A generic example of a renewal system is a light bulb. The event is the failure of the bulb and its subsequent exchange. Obviously, the state of the system only depends on the age of the current bulb, and not on that of any previous bulb that has already been exchanged. If the usage pattern of the bulbs is stationary (e.g., the bulb is switched on during 10 hours each night) then we have a stationary renewal process. The aim of renewal theory is to calculate the probability of the next failure given the age of the bulb.

The interval distribution $P(t|\hat{t})$ as defined above is a probability density. Thus, integration of $P(t|\hat{t})$ over time yields a probability. For example, $\int_{\hat{t}}^{t}P(t^{\prime}|\hat{t})\,{\text{d}}t^{\prime}$ is the probability that a neuron which has emitted a spike at $\hat{t}$ fires the next action potential between $\hat{t}$ and $t$. Thus

$S(t|\hat{t})=1-\int_{\hat{t}}^{t}P(t^{\prime}|\hat{t})\,{\text{d}}t^{\prime}$ | (7.24) |

is the probability that the neuron stays quiescent between $\hat{t}$ and $t$. $S(t|\hat{t})$ is called the survivor function: it gives the probability that the neuron ‘survives’ from $\hat{t}$ to $t$ without firing.

The survivor function $S(t|\hat{t})$ has an initial value $S(\hat{t}|\hat{t})=1$ and decreases to zero for $t\to\infty$. The rate of decay of $S(t|\hat{t})$ will be denoted by $\rho(t|\hat{t})$ and is defined by

$\rho(t|\hat{t})=-\frac{\frac{{\text{d}}}{{\text{d}}t}S(t|\hat{t})}{S(t|\hat{t})}=\frac{P(t|\hat{t})}{1-\int_{\hat{t}}^{t}P(t^{\prime}|\hat{t})\,{\text{d}}t^{\prime}}\,.$ | (7.25) |

In the language of renewal theory, $\rho(t|\hat{t})$ is called the ‘age-dependent death rate’ or ‘hazard’ (106; 105).

Integration of the differential equation $dS/dt=-\rho\,S$ [cf. the first identity in Eq. (7.25)] yields the survivor function

$S(t|\hat{t})=\exp\left[-\int_{\hat{t}}^{t}\rho(t^{\prime}|\hat{t})\,{\text{d}}t^{\prime}\right]\,.$ | (7.26) |

According to the definition of the survivor function in Eq. (7.24), the interval distribution is given by

$P(t|\hat{t})=-\frac{{\text{d}}}{{\text{d}}t}S(t|\hat{t})=\rho(t|\hat{t})\,S(t|\hat{t})\,,$ | (7.27) |

which has a nice intuitive interpretation: In order to emit its next spike at $t$, the neuron has to survive the interval $(\hat{t},t)$ without firing and then fire at $t$. The survival probability is $S(t|\hat{t})$ and the hazard of firing a spike at time $t$ is $\rho(t|\hat{t})$. Multiplication of the survivor function $S$ with the momentary hazard $\rho$ gives the two factors on the right-hand side of Eq. (7.27). Inserting Eq. (7.26) in (7.27), we obtain an explicit expression for the interval distribution in terms of the hazard:

$P(t|\hat{t})=\rho(t|\hat{t})\,\exp\left[-\int_{\hat{t}}^{t}\rho(t^{\prime}|\hat{t})\,{\text{d}}t^{\prime}\right]\,.$ | (7.28) |

On the other hand, given the interval distribution we can derive the hazard from Eq. (7.25). Thus, each of the three quantities $\rho(t|\hat{t})$, $P(t|\hat{t})$, and $S(t|\hat{t})$ is sufficient to describe the statistical properties of a renewal system. Since we focus on stationary renewal systems, the notation can be simplified and Eqs. (7.24)–(7.28) hold with the replacement

$P(t|\hat{t})=P_{0}(t-\hat{t})$ | (7.29) |

$S(t|\hat{t})=S_{0}(t-\hat{t})$ | (7.30) |

$\rho(t|\hat{t})=\rho_{0}(t-\hat{t})\,.$ | (7.31) |

Eqs. (7.24)–(7.28) are standard results of renewal theory. The notation we have chosen here will turn out to be useful in later chapters and highlights the fact that these quantities are conditional probabilities, probability densities, or rates.
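The interchangeability of the three descriptions is easy to check numerically: starting from an (illustrative) hazard, Eqs. (7.26) and (7.27) yield the survivor function and the interval distribution, and Eq. (7.25) recovers the hazard from the interval distribution alone. A minimal sketch:

```python
import numpy as np

dt = 1e-3
s = np.arange(0.0, 10.0, dt)           # age t - t_hat since the last spike
rho = 0.5 * (1.0 - np.exp(-s))         # an illustrative hazard rho_0(s)

# Eq. (7.26): survivor function from the hazard
S = np.exp(-np.cumsum(rho * dt))
# Eq. (7.27): interval distribution P_0(s) = rho_0(s) * S_0(s)
P = rho * S

# Eq. (7.25): recover the hazard from P_0 alone
S_from_P = 1.0 - np.cumsum(P * dt)     # 1 - int_0^s P_0(s') ds'
rho_back = P / S_from_P

mask = s < 5.0                         # avoid the poorly-sampled tail
```

Within discretization error, `rho_back` reproduces `rho` wherever the survivor function is not vanishingly small.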


Example: From interval distribution to hazard function

Let us suppose that we have found under stationary experimental conditions an interval distribution that can be approximated as

$P_{0}(s)=\left\{\begin{array}{ll}0 & {\rm for}\quad s\leq\Delta^{\rm abs}\\ a_{0}\,(s-\Delta^{\rm abs})\,e^{-\frac{a_{0}}{2}(s-\Delta^{\rm abs})^{2}} & {\rm for}\quad s>\Delta^{\rm abs}\end{array}\right.$ | (7.32) |

with a constant $a_{0}>0$; cf. Fig. 7.10B. From Eq. (7.25), the hazard is found to be

$\rho_{0}(s)=\left\{\begin{array}{ll}0 & {\rm for}\quad s\leq\Delta^{\rm abs}\\ a_{0}\,(s-\Delta^{\rm abs}) & {\rm for}\quad s>\Delta^{\rm abs}\,.\end{array}\right.$ | (7.33) |

Thus, during an interval $\Delta^{\rm abs}$ after each spike the hazard vanishes. We may interpret $\Delta^{\rm abs}$ as the absolute refractory time of the neuron. For $s>\Delta^{\rm abs}$ the hazard increases linearly, i.e., the longer the neuron waits, the higher its probability of firing. In Chapter 9 we will see that the hazard of Eq. (7.33) can be interpreted as the instantaneous rate of a non-leaky integrate-and-fire neuron subject to noise.
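This calculation can be reproduced numerically: inserting the interval distribution of Eq. (7.32) into Eq. (7.25) recovers the linear hazard of Eq. (7.33). A sketch with arbitrary illustrative values for $a_0$ and $\Delta^{\rm abs}$:

```python
import numpy as np

a0, d_abs, dt = 1.0, 2.0, 1e-3
s = np.arange(0.0, 12.0, dt)
x = np.clip(s - d_abs, 0.0, None)

# Interval distribution of Eq. (7.32)
P0 = a0 * x * np.exp(-0.5 * a0 * x ** 2)

# Hazard via Eq. (7.25): rho_0(s) = P_0(s) / (1 - int_0^s P_0(s') ds')
surv = 1.0 - np.cumsum(P0 * dt)
rho0 = P0 / surv

mask = (s > d_abs) & (s < d_abs + 2.0)   # region with good numerical accuracy
```

The recovered hazard vanishes during the refractory period and grows linearly with slope $a_0$ afterwards, as in Eq. (7.33).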

Renewal theory is usually associated with stationary input conditions. The interval distribution $P_{0}$ can then be estimated experimentally from a single long spike train. The applicability of renewal theory relies on the hypothesis that a memory back to the last spike suffices to describe the spike statistics. In particular, there should be no correlation between one interval and the next. In experiments, the renewal hypothesis can be tested by measuring the correlation between subsequent intervals. Under some experimental conditions, correlations are small, indicating that a description of spiking as a stationary renewal process is a good approximation (192); however, under experimental conditions where neuronal adaptation is strong, intervals are not independent (Fig. 7.11). Given a time series of events with variable intervals $s_{j}$, a common measure of memory effects is the serial correlation coefficient

$c_{k}=\frac{\langle s_{j+k}\,s_{j}\rangle_{j}-\langle s_{j}\rangle_{j}^{2}}{\langle s_{j}^{2}\rangle_{j}-\langle s_{j}\rangle_{j}^{2}}\,.$ | (7.34) |

Spike-frequency adaptation causes a negative correlation between subsequent intervals ($c_{1}<0$). Long intervals are most likely followed by short ones, and vice versa, so that the assumption of renewal theory does not hold (463; 423; 93).
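Eq. (7.34) is straightforward to estimate from a measured interval sequence. The sketch below compares a renewal train (independent intervals, $c_1\approx 0$) with a surrogate interval sequence constructed to mimic adaptation ($c_1<0$); both interval models are illustrative constructions, not fits to data:

```python
import numpy as np

def serial_correlation(isi, k):
    """Serial correlation coefficient c_k of Eq. (7.34)."""
    s = np.asarray(isi, dtype=float)
    num = np.mean(s[k:] * s[:len(s) - k]) - np.mean(s) ** 2
    den = np.mean(s ** 2) - np.mean(s) ** 2
    return num / den

rng = np.random.default_rng(1)

# Renewal train: independent gamma intervals -> c_k ~ 0 for k >= 1
renewal = rng.gamma(2.0, 10.0, size=200_000)

# Adaptation-like surrogate: each interval is negatively coupled to the
# previous one (expected c_1 = -0.4 for this construction)
x = rng.random(200_001)
adapted = 1.0 + x[1:] - 0.5 * x[:-1]
```

By construction, $c_0=1$ for any interval sequence, while $c_1$ distinguishes the renewal train from the adapting surrogate.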

The notion of stationary input conditions is a mathematical concept that cannot easily be translated into experiments. With intracellular recordings under in vitro conditions, a constant input current can be imposed, and thus the renewal hypothesis can be tested directly. Under in vivo conditions, the assumption that the input current to a neuron embedded in a large neural system is constant (or has stationary statistics) is questionable; see (392; 393) for a discussion. While the externally controlled stimulus can be made stationary (e.g., a grating drifting at constant speed), the input to an individual neuron is not under experimental control.

Let us suppose that, for a given experiment, we have checked that the renewal hypothesis holds to a reasonable degree of accuracy. From the experimental interval distribution $P_{0}$ we can then calculate the survivor function $S_{0}$ and the hazard $\rho_{0}$ via Eqs. (7.24) and (7.25). If some additional assumptions regarding the nature of the noise are made, the form of the hazard $\rho_{0}(t-\hat{t})$ can be interpreted in terms of neuronal dynamics. In particular, a reduced hazard immediately after a spike is a signature of neuronal refractoriness (192; 52).

Example: Plausible hazard function and interval distributions

Interval distributions and hazard functions have been measured in many experiments. For example, in auditory neurons of the cat driven by stationary stimuli, the hazard function $\rho_{0}(t-\hat{t})$ increases, after an absolute refractory time, to a constant level (192). We approximate the time course of the hazard function as

$\rho_{0}(s)=\left\{\begin{array}{ll}0 & {\rm for}\quad s\leq\Delta^{\rm abs}\\ \nu\,[1-e^{-\lambda\,(s-\Delta^{\rm abs})}] & {\rm for}\quad s>\Delta^{\rm abs}\end{array}\right.$ | (7.35) |

with parameters $\Delta^{\rm abs},\lambda$, and $\nu$; Fig. 7.10C. In Chapter 9 we will see how the hazard (7.35) can be related to neuronal dynamics. Given the hazard function, we can calculate the survivor function and interval distributions. Application of Eq. (7.26) yields

$S_{0}(s)=\left\{\begin{array}{ll}1 & {\rm for}\quad s\leq\Delta^{\rm abs}\\ e^{-\nu\,(s-\Delta^{\rm abs})}\,e^{\rho_{0}(s)/\lambda} & {\rm for}\quad s>\Delta^{\rm abs}\end{array}\right.$ | (7.36) |

The interval distribution is given by $P_{0}(s)=\rho_{0}(s)\,S_{0}(s)$. Interval distribution, survivor function, and hazard are shown in Fig. 7.10C.
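The closed form (7.36) can be verified by integrating the hazard (7.35) numerically according to Eq. (7.26); the parameter values below are arbitrary illustrations:

```python
import numpy as np

nu, lam, d_abs, dt = 0.1, 0.5, 2.0, 1e-3
s = np.arange(0.0, 60.0, dt)
x = np.clip(s - d_abs, 0.0, None)

rho0 = nu * (1.0 - np.exp(-lam * x))          # hazard of Eq. (7.35)

# Eq. (7.26): survivor function by numerical integration of the hazard
S_num = np.exp(-np.cumsum(rho0 * dt))
# Closed form of Eq. (7.36)
S_closed = np.exp(-nu * x) * np.exp(rho0 / lam)

P0 = rho0 * S_closed                          # interval distribution, Eq. (7.27)
```

The numerical and closed-form survivor functions agree, and the resulting interval distribution integrates to (nearly) one over the simulated range.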

We may compare the above hazard function and interval distribution with that of the Poisson neuron with absolute refractoriness. The main difference is that the hazard in Eq. (7.16) jumps from the state of absolute refractoriness to a constant firing rate, whereas in Eq. (7.35) the transition is smooth.

In the case of a stationary renewal process, the interval distribution $P_{0}$ contains all the statistical information, so that the autocorrelation function and noise spectrum can be derived. In this section we calculate the noise spectrum of a stationary renewal process. As we have seen above, the noise spectrum of a neuron is directly related to the autocorrelation function of its spike train. Both the noise spectrum and the autocorrelation function are experimentally accessible.

Let $\nu_{i}=\langle S_{i}\rangle$ denote the mean firing rate (expected number of spikes per unit time) of the spike train. Thus the probability of finding a spike in a short segment $[t,t+\Delta t]$ of the spike train is $\nu_{i}\,\Delta t$. For large intervals $s$, firing at time $t+s$ is independent of whether or not there was a spike at time $t$. Therefore, the joint expectation of finding a spike at $t$ and another spike at $t+s$ approaches, for $s\to\infty$, the limiting value $\lim_{s\to\infty}\langle S_{i}(t)\,S_{i}(t+s)\rangle=\lim_{s\to\infty}C_{ii}(s)=\nu_{i}^{2}$. It is convenient to subtract this baseline value and introduce a ‘normalized’ autocorrelation,

$C^{0}_{ii}(s)=C_{ii}(s)-\nu_{i}^{2}\,,$ | (7.37) |

with $\lim_{s\to\infty}C_{ii}^{0}(s)=0$. The Fourier transform of Eq. (7.37) yields

$\hat{C}_{ii}(\omega)=\hat{C}_{ii}^{0}(\omega)+2\pi\nu_{i}^{2}\,\delta(\omega)\,.$ | (7.38) |

Thus $\hat{C}_{ii}(\omega)$ diverges at $\omega=0$; the divergence is removed by switching to the normalized autocorrelation. In the following we will calculate the noise spectrum $\hat{C}_{ii}(\omega)$ for $\omega\neq 0$.


In the case of a stationary renewal process, the autocorrelation function is closely related to the interval distribution $P_{0}(s)$. This relation will now be derived. Let us suppose that we have found a first spike at $t$. To calculate the autocorrelation we need the probability density for a spike at $t+s$. Let us construct an expression for $C_{ii}(s)$ for $s>0$. The correlation function for positive $s$ will be denoted by $\nu_{i}\,C_{+}(s)$ or

$C_{+}(s)={1\over\nu_{i}}\,C_{ii}(s)\,\Theta(s)\,.$ | (7.39) |

The factor $\nu_{i}$ in Eq. (7.39) takes care of the fact that we expect a first spike at $t$ with rate $\nu_{i}$. $C_{+}(s)$ gives the conditional probability density that, given a spike at $t$, we will find another spike at $t+s>t$. The spike at $t+s$ can be the first spike after $t$, or the second one, or the $n$th one; see Fig. 7.12. Thus for $s>0$

$C_{+}(s)=P_{0}(s)+\int_{0}^{\infty}P_{0}(s^{\prime})\,P_{0}(s-s^{\prime})\,{\text{d}}s^{\prime}+\int_{0}^{\infty}\int_{0}^{\infty}P_{0}(s^{\prime})\,P_{0}(s^{\prime\prime})\,P_{0}(s-s^{\prime}-s^{\prime\prime})\,{\text{d}}s^{\prime}\,{\text{d}}s^{\prime\prime}+\dots$ | (7.40) |

or

$C_{+}(s)=P_{0}(s)+\int_{0}^{\infty}P_{0}(s^{\prime})\,C_{+}(s-s^{\prime})\,{\text{d}}s^{\prime}\,,$ | (7.41) |

as can be seen by inserting Eq. (7.40) on the right-hand side of (7.41); see Fig. 7.13.

Due to the symmetry of $C_{ii}$, we have $C_{ii}(s)=\nu_{i}\,C_{+}(-s)$ for $s<0$. Finally, for $s=0$, the autocorrelation has a $\delta$ peak reflecting the trivial autocorrelation of each spike with itself. Hence,

$C_{ii}(s)=\nu_{i}\left[\delta(s)+C_{+}(s)+C_{+}(-s)\right]\,.$ | (7.42) |

In order to solve Eq. (7.41) for $C_{+}$ we take the Fourier transform of Eq. (7.41) and find

$\hat{C}_{+}(\omega)={\hat{P}_{0}(\omega)\over 1-\hat{P}_{0}(\omega)}\,.$ | (7.43) |

Together with the Fourier transform of Eq. (7.42), $\hat{C}_{ii}=\nu_{i}\,[1+2\,\text{Re}\{\hat{C}_{+}(\omega)\}]$, we obtain

$\hat{C}_{ii}(\omega)=\nu_{i}\,\text{Re}\left\{\frac{1+\hat{P}_{0}(\omega)}{1-\hat{P}_{0}(\omega)}\right\}\quad{\rm for}\quad\omega\neq 0\,.$ | (7.44) |

For $\omega=0$, the Fourier integral over the right-hand side of Eq. (7.40) diverges, since $\int_{0}^{\infty}P_{0}(s){\text{d}}s=1$. If we add the diverging term from Eq. (7.38), we arrive at

$\hat{C}_{ii}(\omega)=\nu_{i}\,\text{Re}\left\{\frac{1+\hat{P}_{0}(\omega)}{1-\hat{P}_{0}(\omega)}\right\}+2\pi\,\nu_{i}^{2}\,\delta(\omega)\,.$ | (7.45) |

This is a standard result of stationary renewal theory (105) which has been repeatedly applied to neuronal spike trains (137; 36).
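Eq. (7.44) can be evaluated numerically for any measured interval distribution. As a sketch, we take (purely for illustration) gamma-distributed intervals of order 2, for which the Fourier transform $\hat{P}_{0}(\omega)=[\beta/(\beta+i\omega)]^{2}$ is known in closed form:

```python
import numpy as np

beta, dt = 1.0, 1e-3
s = np.arange(0.5 * dt, 80.0, dt)               # midpoint grid for integrals
P0 = beta ** 2 * s * np.exp(-beta * s)          # gamma(2) interval distribution
nu = 1.0 / (np.sum(s * P0) * dt)                # mean rate = 1 / mean interval

omega = np.array([0.3, 1.0, 3.0, 10.0])
P0_hat = np.array([np.sum(P0 * np.exp(-1j * w * s)) * dt for w in omega])
C_hat = nu * np.real((1 + P0_hat) / (1 - P0_hat))        # Eq. (7.44)

# Closed form: for gamma(2) intervals, P0_hat = [beta/(beta + i*omega)]^2
z = (beta / (beta + 1j * omega)) ** 2
C_exact = nu * np.real((1 + z) / (1 - z))
```

Since $\hat{P}_{0}(\omega)\to 0$ for $\omega\to\infty$, the spectrum approaches the flat value $\nu$ at high frequencies, while the more regular gamma intervals suppress the spectrum at low frequencies.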

Example: Stationary Poisson process

In Section 7.2.1 and Section 7.5 we have already discussed the Poisson neuron from the perspective of mean firing rate and renewal theory, respectively. The autocorrelation of a Poisson process is

$C_{ii}(s)=\nu\,\delta(s)+\nu^{2}.$ | (7.46) |

We want to show that Eq. (7.46) follows from Eq. (7.40).

Since the interval distribution of a Poisson process is exponential [cf. Eq. (7.16) with $\Delta^{\rm abs}=0$], we can evaluate the integrals on the right-hand side of Eq. (7.40) in a straightforward manner. The result is

$C_{+}(s)=\nu\,e^{-\nu\,s}\big[1+\nu\,s+{1\over 2}(\nu\,s)^{2}+\dots\big]=\nu\,.$ | (7.47) |

Hence, with Eq. (7.42), we obtain the autocorrelation function (7.46) of a homogeneous Poisson process. The Fourier transform of Eq. (7.46) yields a flat spectrum with a $\delta$ peak at zero:

$\hat{C}_{ii}(\omega)=\nu+2\pi\,\nu^{2}\,\delta(\omega)\,.$ | (7.48) |

The result could have also been obtained by evaluating Eq. (7.45).

Example: Poisson process with absolute refractoriness

We return to the Poisson process with absolute refractoriness defined in Eq. (7.16). Apart from an absolute refractory time $\Delta^{\rm abs}$, the neuron fires with rate $r$. For $\omega\neq 0$, Eq. (7.45) yields the noise spectrum

$\hat{C}_{ii}(\omega)=\nu\,\left\{1+2\,\frac{r^{2}}{\omega^{2}}\,[1-\cos(\omega\,\Delta^{\rm abs})]+2\,\frac{r}{\omega}\,\sin(\omega\,\Delta^{\rm abs})\right\}^{-1}\,,$ | (7.49) |

cf. Fig. 7.12B. In contrast to the stationary Poisson process, Eq. (7.46), the noise spectrum of a neuron with absolute refractoriness $\Delta^{\rm abs}>0$ is no longer flat. In particular, for $\omega\to 0$, the noise spectrum is decreased by a factor $[1+2\,r\,\Delta^{\rm abs}+(r\,\Delta^{\rm abs})^{2}]^{-1}=(1+r\,\Delta^{\rm abs})^{-2}$. Eq. (7.49) and generalizations thereof have been used to fit the power spectrum of, e.g., auditory neurons (137) and MT neurons (36).

Can we understand the decrease in the noise spectrum for $\omega\to 0$? The mean interval of a Poisson neuron with absolute refractoriness is $\langle s\rangle=\Delta^{\rm abs}+r^{-1}$. Hence the mean firing rate is

$\nu={r\over 1+\Delta^{\rm abs}\,r}\,.$ | (7.50) |

For $\Delta^{\rm abs}=0$ we retrieve the stationary Poisson process, Eq. (7.3), with $\nu=r$. For finite $\Delta^{\rm abs}$ the firing is more regular than that of a Poisson process with the same mean rate $\nu$. We note that for finite $\Delta^{\rm abs}>0$, the mean firing rate remains bounded even if $r\to\infty$; the neuron then fires regularly with period $\Delta^{\rm abs}$. Because the spike train of a neuron with refractoriness is more regular than that of a Poisson neuron with the same mean rate, the spike count over a long interval, and hence the spectrum for $\omega\to 0$, is less noisy. This means that Poisson neurons with absolute refractoriness can transmit slow signals more reliably than a simple Poisson process.
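The closed-form spectrum can be checked by evaluating Eq. (7.44) numerically with the refractory-exponential interval distribution; the parameter values are arbitrary. Note that this evaluation places the exponential rate $r$, not the mean rate $\nu$, inside the braces of Eq. (7.49):

```python
import numpy as np

r, d_abs, dt = 0.1, 5.0, 1e-3
s = np.arange(0.5 * dt, 400.0, dt)
# Interval distribution: absolute refractory period, then exponential decay
P0 = np.where(s > d_abs, r * np.exp(-r * np.clip(s - d_abs, 0.0, None)), 0.0)

nu = r / (1.0 + d_abs * r)                    # mean rate, Eq. (7.50)
mean_isi = np.sum(s * P0) * dt                # should equal 1/nu

omega = np.array([0.05, 0.2, 1.0])
P0_hat = np.array([np.sum(P0 * np.exp(-1j * w * s)) * dt for w in omega])
C_hat = nu * np.real((1 + P0_hat) / (1 - P0_hat))        # Eq. (7.44)

# Closed form of Eq. (7.49), with the rate r inside the braces
C_closed = nu / (1.0 + 2.0 * (r / omega) ** 2 * (1.0 - np.cos(omega * d_abs))
                 + 2.0 * (r / omega) * np.sin(omega * d_abs))
```

At the lowest frequency shown, the spectrum is suppressed well below the flat Poisson level $\nu$, consistent with the factor $(1+r\,\Delta^{\rm abs})^{-2}$.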

It is possible to use the renewal concept in a broader sense and define a renewal process as a system where the state at time $t$ (and hence the probability of generating an event at $t$) depends both on the time that has passed since the last event (i.e., the firing time $\hat{t}$) and on the input $I(t^{\prime})$, $\hat{t}<t^{\prime}<t$, that the system received since the last event. Input-dependent renewal systems are also called modulated renewal processes (425), non-stationary renewal systems (182; 183), or inhomogeneous Markov interval processes (251). The aim of a theory of input-dependent renewal systems is to predict the probability density

$P_{I}(t|\hat{t})$ | (7.51) |

of the next event to occur at time $t$, given the timing $\hat{t}$ of the last event and the input $I(t^{\prime})$ for $\hat{t}<t^{\prime}<t$; see Fig. 7.14. The relation between hazard, survivor function, and interval distribution for the input-dependent case is the same as the one given in Equations (7.25) – (7.28). The generalization to a time-dependent renewal theory will be useful later on in Chapter 9.

The lower index $I$ of $P_{I}(t|\hat{t})$ is intended to remind the reader that the probability density $P_{I}(t|\hat{t})$ depends on the time course of the input $I(t^{\prime})$ for $t^{\prime}<t$. Since $P_{I}(t|\hat{t})$ is conditioned on the spike at $\hat{t}$, it can be called a spike-triggered spike density. We interpret $P_{I}(t\,|\,\hat{t})$ as the distribution of interspike intervals in the presence of an input current $I$ or as the input-dependent interval distribution. For stationary input, $P_{I}(t|\hat{t})$ reduces to $P_{0}(t-\hat{t})$.
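An input-dependent renewal process can be simulated by applying the hazard in small time steps: in each step of size $\mathrm{d}t$, a spike occurs with probability $\approx\rho(t|\hat{t})\,\mathrm{d}t$. In the sketch below, both the refractory recovery function and the sinusoidal ‘input’ term are hypothetical choices made for illustration:

```python
import math
import random

def sample_next_spike(hazard, t_hat, dt, t_max, rng):
    """Draw the next spike time from P_I(t | t_hat): in each small step
    the firing probability is approximately rho(t | t_hat) * dt."""
    t = t_hat
    while t < t_max:
        t += dt
        if rng.random() < hazard(t, t_hat) * dt:
            return t
    return None                                # no spike before t_max

def hazard(t, t_hat):
    # Hypothetical hazard: refractory recovery times an input-driven rate
    refractory = 1.0 - math.exp(-2.0 * (t - t_hat))
    drive = 5.0 + 4.0 * math.sin(0.5 * t)      # stand-in for the input I(t)
    return refractory * drive

rng = random.Random(2)
spikes, t_hat = [], 0.0
while t_hat is not None and t_hat < 200.0:
    t_hat = sample_next_spike(hazard, t_hat, 1e-3, 200.0, rng)
    if t_hat is not None:
        spikes.append(t_hat)
```

Because each interval is drawn afresh from the hazard conditioned on the last spike and the current input, the construction has exactly the memory structure of $P_{I}(t|\hat{t})$: one spike back, plus the input since then.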


**© Cambridge University Press**. This book is in copyright. No reproduction of any part of it may take place without the written permission of Cambridge University Press.