7 Variability of Spike Trains and Neural Codes

7.5 Renewal statistics

Poisson processes do not account for neuronal refractoriness and cannot be used to describe realistic interspike interval distributions. In order to account for neuronal refractoriness in the stochastic description of spike trains, we need to switch from a Poisson process to a renewal process. Renewal processes keep a memory of the last event (the last firing time $\hat{t}$), but not of any earlier events. More precisely, in a renewal process spikes are generated with a stochastic intensity (or 'hazard')

\rho(t|\hat{t}) = \rho_{0}(t-\hat{t})   (7.22)

which depends on the time since the last spike. One of the simplest examples of a renewal system is the Poisson process with absolute refractoriness, which we have already encountered in the previous section; see Eq. (7.16).

Renewal processes are a class of stochastic point processes that describe a sequence of events in time (106; 386). Renewal systems in the narrow sense (stationary renewal processes) presuppose stationary input and are defined by the fact that the state of the system, and hence the probability of generating the next event, depends only on the 'age' $t-\hat{t}$ of the system, i.e., the time that has passed since the last event (last spike). The central assumption of renewal theory is that the state does not depend on earlier events (i.e., earlier spikes of the same neuron). The aim of renewal theory is to predict the probability of the next event given the age of the system. In other words, renewal theory allows us to calculate the interval distribution

P_{0}(s) = P(t^{(f)}+s \,|\, t^{(f)})   (7.23)

i.e., the probability density that the next event occurs at time $t^{(f)}+s$ given that the last event was observed at time $t^{(f)}$.

While for a Poisson process all events occur independently, in a renewal process the generation of events (spikes) depends on the previous event, so that events are not independent. However, since the dependence is restricted to the most recent event, intervals between subsequent events are independent. Therefore, an efficient way of generating a spike train of a renewal system is to draw interspike intervals from the distribution $P_{0}(s)$.
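The following minimal Python sketch illustrates this sampling scheme; the helper `sample_renewal_spike_train` and the shifted-exponential interval distribution used to exercise it (the interval distribution of a Poisson process with absolute refractoriness, cf. Eq. (7.16)) are our own illustrative choices, not part of the text.

```python
import numpy as np

def sample_renewal_spike_train(isi_sampler, t_max, rng):
    """Generate spike times on [0, t_max] by drawing independent
    interspike intervals from the interval distribution P0(s)."""
    spikes, t = [], 0.0
    while True:
        t += isi_sampler(rng)              # draw the next interval s ~ P0(s)
        if t > t_max:
            break
        spikes.append(t)
    return np.array(spikes)

# Illustration: Poisson process with absolute refractoriness (cf. Eq. 7.16);
# its interval distribution is a shifted exponential.
delta_abs, r = 5e-3, 100.0                 # 5 ms refractory period, rate 100 Hz
rng = np.random.default_rng(0)
shifted_exp = lambda rng: delta_abs + rng.exponential(1.0 / r)
spikes = sample_renewal_spike_train(shifted_exp, t_max=10.0, rng=rng)
print(len(spikes), np.diff(spikes).mean())  # mean ISI ~ delta_abs + 1/r = 15 ms
```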

Example: Light bulb failure as a renewal system

A generic example of a renewal system is a light bulb. The event is the failure of the bulb and its subsequent exchange. Obviously, the state of the system only depends on the age of the current bulb, and not on that of any previous bulb that has already been exchanged. If the usage pattern of the bulbs is stationary (e.g., the bulb is switched on for 10 hours each night), then we have a stationary renewal process. The aim of renewal theory is to calculate the probability of the next failure given the age of the bulb.

7.5.1 Survivor function and hazard

The interval distribution $P(t|\hat{t})$ as defined above is a probability density. Thus, integration of $P(t|\hat{t})$ over time yields a probability. For example, $\int_{\hat{t}}^{t} P(t'|\hat{t})\,\text{d}t'$ is the probability that a neuron which has emitted a spike at $\hat{t}$ fires the next action potential between $\hat{t}$ and $t$. Thus

S(t|\hat{t}) = 1 - \int_{\hat{t}}^{t} P(t'|\hat{t})\,\text{d}t'   (7.24)

is the probability that the neuron stays quiescent between $\hat{t}$ and $t$. $S(t|\hat{t})$ is called the survivor function: it gives the probability that the neuron 'survives' from $\hat{t}$ to $t$ without firing.

The survivor function $S(t|\hat{t})$ has an initial value $S(\hat{t}|\hat{t})=1$ and decreases to zero for $t\to\infty$. The rate of decay of $S(t|\hat{t})$ will be denoted by $\rho(t|\hat{t})$ and is defined by

\rho(t|\hat{t}) = -\frac{\frac{\text{d}}{\text{d}t}\,S(t|\hat{t})}{S(t|\hat{t})} = \frac{P(t|\hat{t})}{1 - \int_{\hat{t}}^{t} P(t'|\hat{t})\,\text{d}t'}\,.   (7.25)

In the language of renewal theory, $\rho(t|\hat{t})$ is called the 'age-dependent death rate' or 'hazard' (106; 105).

Integration of the differential equation $\text{d}S/\text{d}t = -\rho\,S$ [cf. the first identity in Eq. (7.25)] yields the survivor function

S(t|\hat{t}) = \exp\left[-\int_{\hat{t}}^{t} \rho(t'|\hat{t})\,\text{d}t'\right]\,.   (7.26)

According to the definition of the survivor function in Eq. (7.24), the interval distribution is given by

P(t|\hat{t}) = -\frac{\text{d}}{\text{d}t}\,S(t|\hat{t}) = \rho(t|\hat{t})\,S(t|\hat{t})\,,   (7.27)

which has a nice intuitive interpretation: in order to emit its next spike at $t$, the neuron has to survive the interval $(\hat{t}, t)$ without firing and then fire at $t$. The survival probability is $S(t|\hat{t})$ and the hazard of firing a spike at time $t$ is $\rho(t|\hat{t})$. Multiplication of the survivor function $S$ with the momentary hazard $\rho$ gives the two factors on the right-hand side of Eq. (7.27). Inserting Eq. (7.26) into (7.27), we obtain an explicit expression for the interval distribution in terms of the hazard:

P(t|\hat{t}) = \rho(t|\hat{t})\,\exp\left[-\int_{\hat{t}}^{t} \rho(t'|\hat{t})\,\text{d}t'\right]\,.   (7.28)

On the other hand, given the interval distribution we can derive the hazard from Eq. (7.25). Thus, each of the three quantities $\rho(t|\hat{t})$, $P(t|\hat{t})$, and $S(t|\hat{t})$ is sufficient to describe the statistical properties of a renewal system. Since we focus on stationary renewal systems, the notation can be simplified and Eqs. (7.24)–(7.28) hold with the replacement

P(t|\hat{t}) = P_{0}(t-\hat{t})   (7.29)
S(t|\hat{t}) = S_{0}(t-\hat{t})   (7.30)
\rho(t|\hat{t}) = \rho_{0}(t-\hat{t})\,.   (7.31)

Eqs. (7.24)–(7.28) are standard results of renewal theory. The notation that we have chosen in Eqs. (7.24)–(7.28) will turn out to be useful in later chapters and highlights the fact that these quantities are conditional probabilities, probability densities, or rates.
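Numerically, Eqs. (7.26) and (7.27) translate into a few lines of code. The sketch below is our own illustration: it tabulates $S_0$ and $P_0$ on a time grid from a given hazard, using a simple Riemann sum for the integral and, as an example, the linear hazard of Fig. 7.10B ($a_0 = 0.01\,\mathrm{ms}^{-2}$, $\Delta^{\rm abs} = 2$ ms, here in SI units).

```python
import numpy as np

ds = 1e-5                                   # time step (s)
s = np.arange(0.0, 0.05, ds)                # age since the last spike

# Hazard of Fig. 7.10B in SI units: a0 = 0.01 ms^-2 = 1e4 s^-2, Delta_abs = 2 ms
a0, delta_abs = 1e4, 2e-3
rho0 = np.where(s > delta_abs, a0 * (s - delta_abs), 0.0)

S0 = np.exp(-np.cumsum(rho0) * ds)          # survivor function, Eq. (7.26)
P0 = rho0 * S0                              # interval distribution, Eq. (7.27)
print(np.sum(P0) * ds)                      # ~1: P0 is a normalized density
```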

Fig. 7.10: Interval distribution $P_{0}(s)$ (top) and survivor function $S_{0}(s)$ (middle) for three different hazard functions (bottom). A. Hazard function corresponding to a Poisson neuron with absolute refractoriness of 5 ms. B. Hazard function defined by $\rho_{0}(s) = a_{0}\,(s-\Delta^{\rm abs})\,\Theta(s-\Delta^{\rm abs})$ with $a_{0}=0.01\,\mathrm{ms}^{-2}$ and $\Delta^{\rm abs}=2$ ms. C. Hazard function defined by $\rho_{0}(s) = \nu\,\{1-\exp[-\lambda\,(s-\Delta^{\rm abs})]\}\,\Theta(s-\Delta^{\rm abs})$ with $\nu=0.1$ kHz, $\lambda=0.2$ kHz, and $\Delta^{\rm abs}=2$ ms.

Example: From interval distribution to hazard function

Let us suppose that, under stationary experimental conditions, we have found an interval distribution that can be approximated as

P_{0}(s) = \begin{cases} 0 & \text{for } s \leq \Delta^{\rm abs} \\ a_{0}\,(s-\Delta^{\rm abs})\,e^{-\frac{a_{0}}{2}(s-\Delta^{\rm abs})^{2}} & \text{for } s > \Delta^{\rm abs} \end{cases}   (7.32)

with a constant $a_{0}>0$; cf. Fig. 7.10B. From Eq. (7.25), the hazard is found to be

\rho_{0}(s) = \begin{cases} 0 & \text{for } s \leq \Delta^{\rm abs} \\ a_{0}\,(s-\Delta^{\rm abs}) & \text{for } s > \Delta^{\rm abs}\,. \end{cases}   (7.33)

Thus, during an interval $\Delta^{\rm abs}$ after each spike the hazard vanishes. We may interpret $\Delta^{\rm abs}$ as the absolute refractory time of the neuron. For $s > \Delta^{\rm abs}$ the hazard increases linearly, i.e., the longer the neuron waits, the higher its instantaneous probability of firing. In Chapter 9, the hazard of Eq. (7.33) will be interpreted as the instantaneous rate of a non-leaky integrate-and-fire neuron subject to noise.
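Conversely, the hazard can be recovered numerically from a sampled interval distribution via Eq. (7.25). The following sketch (our own illustration, with the parameters of Fig. 7.10B in SI units) checks the closed form of Eq. (7.33):

```python
import numpy as np

a0, delta_abs = 1e4, 2e-3                   # a0 = 0.01 ms^-2, Delta_abs = 2 ms
ds = 1e-5
s = np.arange(ds, 0.02, ds)

# Interval distribution of Eq. (7.32)
P0 = np.where(s > delta_abs,
              a0 * (s - delta_abs) * np.exp(-0.5 * a0 * (s - delta_abs) ** 2),
              0.0)

# Hazard via Eq. (7.25): rho0(s) = P0(s) / (1 - int_0^s P0)
S0 = 1.0 - np.cumsum(P0) * ds               # survivor function, Eq. (7.24)
rho0_num = P0 / S0

i = np.searchsorted(s, 0.010)               # compare at s = 10 ms
print(rho0_num[i], a0 * (s[i] - delta_abs)) # numerical vs. Eq. (7.33): both ~80 Hz
```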

7.5.2 Renewal theory and experiments

Renewal theory is usually associated with stationary input conditions. The interval distribution $P_{0}$ can then be estimated experimentally from a single long spike train. The applicability of renewal theory relies on the hypothesis that a memory back to the last spike suffices to describe the spike statistics. In particular, there should be no correlation between one interval and the next. In experiments, the renewal hypothesis can be tested by measuring the correlation between subsequent intervals. Under some experimental conditions, correlations are small, indicating that a description of spiking as a stationary renewal process is a good approximation (192); however, under experimental conditions where neuronal adaptation is strong, intervals are not independent (Fig. 7.11). Given a time series of events with variable intervals $s_{j}$, a common measure of memory effects is the serial correlation coefficient

c_{k} = \frac{\langle s_{j+k}\,s_{j}\rangle_{j} - \langle s_{j}\rangle_{j}^{2}}{\langle s_{j}^{2}\rangle_{j} - \langle s_{j}\rangle_{j}^{2}}\,.   (7.34)

Spike-frequency adaptation causes a negative correlation between subsequent intervals ($c_{1}<0$). Long intervals are most likely followed by short ones, and vice versa, so that the assumption of renewal theory does not hold (463; 423; 93).
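A sketch of how $c_k$ can be estimated from recorded intervals (the function name is ours; empirical averages replace the $\langle\cdot\rangle_j$ of Eq. (7.34)); for a genuine renewal process the estimate should be close to zero for all $k \geq 1$:

```python
import numpy as np

def serial_correlation(intervals, k):
    """Serial correlation coefficient c_k of Eq. (7.34) for a sequence
    of interspike intervals s_j."""
    s = np.asarray(intervals, dtype=float)
    pairs = s[:-k] * s[k:] if k > 0 else s * s   # products s_{j+k} s_j
    num = pairs.mean() - s.mean() ** 2
    den = (s ** 2).mean() - s.mean() ** 2
    return num / den

# Independent intervals (renewal assumption) give c_1 ~ 0
rng = np.random.default_rng(1)
isi = rng.exponential(10e-3, size=100_000)       # i.i.d. intervals
print(serial_correlation(isi, 1))                # close to zero
```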

Fig. 7.11: Limitations of the renewal assumption. A. The interval distribution $P_{0}(s)$ of an afferent sensory fiber in the weakly electric fish exhibits periodic peaks, which are associated with the background oscillation (time normalized to the period $T=1$). The most likely interval is exactly one period, but longer intervals are possible. B. Testing for renewal in a plot of the joint density of the interval $i$ (horizontal axis) and the next interval $i+1$ (vertical axis). Symbol size indicates probability of occurrence. The most likely sequence is that an interval of length 4 is followed by an interval of length 1; vice versa, a short interval of length 1 is most likely followed by a long interval of length 4. Modified from (423).

The notion of stationary input conditions is a mathematical concept that cannot be easily translated into experiments. With intracellular recordings under in vitro conditions, a constant input current can be imposed, and thus the renewal hypothesis can be tested directly. Under in vivo conditions, the assumption that the input current to a neuron embedded in a large neural system is constant (or has stationary statistics) is questionable; see (392; 393) for a discussion. While the externally controlled stimulus can be made stationary (e.g., a grating drifting at constant speed), the input to an individual neuron remains beyond experimental control.

Let us suppose that, for a given experiment, we have checked that the renewal hypothesis holds to a reasonable degree of accuracy. From the experimental interval distribution $P_{0}$ we can then calculate the survivor function $S_{0}$ and the hazard $\rho_{0}$ via Eqs. (7.24) and (7.25). If some additional assumptions regarding the nature of the noise are made, the form of the hazard $\rho_{0}(t-\hat{t})$ can be interpreted in terms of neuronal dynamics. In particular, a reduced hazard immediately after a spike is a signature of neuronal refractoriness (192; 52).

Example: Plausible hazard function and interval distributions

Interval distributions and hazard functions have been measured in many experiments. For example, in auditory neurons of the cat driven by stationary stimuli, the hazard function $\rho_{0}(t-\hat{t})$ increases, after an absolute refractory time, to a constant level (192). We approximate the time course of the hazard function as

\rho_{0}(s) = \begin{cases} 0 & \text{for } s \leq \Delta^{\rm abs} \\ \nu\,[1-e^{-\lambda\,(s-\Delta^{\rm abs})}] & \text{for } s > \Delta^{\rm abs} \end{cases}   (7.35)

with parameters $\Delta^{\rm abs}$, $\lambda$, and $\nu$; Fig. 7.10C. In Chapter 9 we will see how the hazard (7.35) can be related to neuronal dynamics. Given the hazard function, we can calculate the survivor function and interval distribution. Application of Eq. (7.26) yields

S_{0}(s) = \begin{cases} 1 & \text{for } s < \Delta^{\rm abs} \\ e^{-\nu\,(s-\Delta^{\rm abs})}\,e^{\rho_{0}(s)/\lambda} & \text{for } s > \Delta^{\rm abs}\,. \end{cases}   (7.36)

The interval distribution is given by $P_{0}(s) = \rho_{0}(s)\,S_{0}(s)$. Interval distribution, survivor function, and hazard are shown in Fig. 7.10C.
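As a consistency check, the closed form of Eq. (7.36) can be compared against a direct numerical integration of the hazard via Eq. (7.26). The sketch below is our own illustration, using the parameter values of Fig. 7.10C in SI units:

```python
import numpy as np

nu, lam, delta_abs = 100.0, 200.0, 2e-3     # nu = 0.1 kHz, lambda = 0.2 kHz, 2 ms
ds = 1e-5
s = np.arange(0.0, 0.1, ds)

rho0 = np.where(s > delta_abs,
                nu * (1.0 - np.exp(-lam * (s - delta_abs))), 0.0)   # Eq. (7.35)

S0_numeric = np.exp(-np.cumsum(rho0) * ds)                          # Eq. (7.26)
S0_closed = np.where(s < delta_abs, 1.0,
                     np.exp(-nu * (s - delta_abs)) * np.exp(rho0 / lam))  # Eq. (7.36)
P0 = rho0 * S0_closed                       # interval distribution
print(np.max(np.abs(S0_numeric - S0_closed)))   # small discretization error
```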

We may compare the above hazard function and interval distribution with those of the Poisson neuron with absolute refractoriness. The main difference is that the hazard in Eq. (7.16) jumps abruptly from the state of absolute refractoriness to a constant firing rate, whereas in Eq. (7.35) the transition is smooth.

7.5.3 Autocorrelation and noise spectrum of a renewal process (*)

In the case of a stationary renewal process, the interval distribution $P_{0}$ contains all the statistical information, so that the autocorrelation function and noise spectrum can be derived from it. In this section we calculate the noise spectrum of a stationary renewal process. As we have seen above, the noise spectrum of a neuron is directly related to the autocorrelation function of its spike train. Both the noise spectrum and the autocorrelation function are experimentally accessible.

Let $\nu_{i} = \langle S_{i} \rangle$ denote the mean firing rate (expected number of spikes per unit time) of the spike train. Thus the probability of finding a spike in a short segment $[t, t+\Delta t]$ of the spike train is $\nu_{i}\,\Delta t$. For large intervals $s$, firing at time $t+s$ is independent of whether or not there was a spike at time $t$. Therefore, the expectation of finding a spike at $t$ and another spike at $t+s$ approaches, for $s\to\infty$, the limiting value $\lim_{s\to\infty} \langle S_{i}(t)\,S_{i}(t+s)\rangle = \lim_{s\to\infty} C_{ii}(s) = \nu_{i}^{2}$. It is convenient to subtract this baseline value and introduce a 'normalized' autocorrelation,

C^{0}_{ii}(s) = C_{ii}(s) - \nu_{i}^{2}\,,   (7.37)

with $\lim_{s\to\infty} C_{ii}^{0}(s) = 0$. The Fourier transform of Eq. (7.37) yields

\hat{C}_{ii}(\omega) = \hat{C}_{ii}^{0}(\omega) + 2\pi\,\nu_{i}^{2}\,\delta(\omega)\,.   (7.38)

Thus $\hat{C}_{ii}(\omega)$ diverges at $\omega=0$; the divergence is removed by switching to the normalized autocorrelation. In the following we calculate the noise spectrum $\hat{C}_{ii}(\omega)$ for $\omega \neq 0$.

Fig. 7.12: A. The autocorrelation of a spike train describes the chance to find two spikes at a distance $s$, independent of the number of spikes that occur in between. B. Fourier transform of the autocorrelation function $C_{ii}$ of a Poisson neuron with absolute refractoriness ($\Delta^{\rm abs}=5$ ms) and constant stimulation ($\nu=100$ Hz).

In the case of a stationary renewal process, the autocorrelation function is closely related to the interval distribution $P_{0}(s)$. This relation will now be derived. Let us suppose that we have found a first spike at $t$. To calculate the autocorrelation we need the probability density of finding a spike at $t+s$. Let us construct an expression for $C_{ii}(s)$ for $s>0$. The correlation function for positive $s$ will be denoted by $\nu_{i}\,C_{+}(s)$, i.e.,

C_{+}(s) = \frac{1}{\nu_{i}}\,C_{ii}(s)\,\Theta(s)\,.   (7.39)

The factor $\nu_{i}$ in Eq. (7.39) takes care of the fact that we expect a first spike at $t$ with rate $\nu_{i}$. $C_{+}(s)$ gives the conditional probability density that, given a spike at $t$, we find another spike at $t+s>t$. The spike at $t+s$ can be the first spike after $t$, or the second one, or the $n$th one; see Fig. 7.12. Thus, for $s>0$,

C_{+}(s) = P_{0}(s) + \int_{0}^{\infty} P_{0}(s')\,P_{0}(s-s')\,\text{d}s' + \int_{0}^{\infty}\int_{0}^{\infty} P_{0}(s')\,P_{0}(s'')\,P_{0}(s-s'-s'')\,\text{d}s'\,\text{d}s'' + \dots   (7.40)

or

C_{+}(s) = P_{0}(s) + \int_{0}^{\infty} P_{0}(s')\,C_{+}(s-s')\,\text{d}s'\,,   (7.41)

as can be seen by inserting Eq. (7.40) on the right-hand side of (7.41); see Fig. 7.13.
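The partial sums of Eq. (7.40) are easy to evaluate numerically by repeated discrete convolution. The sketch below (our own illustration, with a shifted-exponential $P_0$ as an example) shows $C_+(s)$ converging to the mean rate for large $s$, as in Fig. 7.13:

```python
import numpy as np

ds = 2e-4
s = np.arange(0.0, 0.3, ds)
delta_abs, r = 5e-3, 100.0
P0 = np.where(s > delta_abs, r * np.exp(-r * (s - delta_abs)), 0.0)

# Partial sums of Eq. (7.40): P0 plus its repeated self-convolutions
C_plus = np.zeros_like(P0)
term = P0.copy()                            # n-fold self-convolution of P0
for _ in range(40):
    C_plus += term
    term = np.convolve(term, P0)[: len(s)] * ds
print(C_plus[-1])                           # ~ nu = r/(1 + delta_abs*r) = 66.7 Hz
```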

Due to the symmetry of $C_{ii}$, we have $C_{ii}(s) = \nu_{i}\,C_{+}(-s)$ for $s<0$. Finally, for $s=0$, the autocorrelation has a $\delta$ peak reflecting the trivial autocorrelation of each spike with itself. Hence,

C_{ii}(s) = \nu_{i}\left[\delta(s) + C_{+}(s) + C_{+}(-s)\right]\,.   (7.42)

In order to solve Eq. (7.41) for $C_{+}$, we take the Fourier transform of Eq. (7.41) and find

\hat{C}_{+}(\omega) = \frac{\hat{P}_{0}(\omega)}{1 - \hat{P}_{0}(\omega)}\,.   (7.43)

Together with the Fourier transform of Eq. (7.42), $\hat{C}_{ii}(\omega) = \nu_{i}\,[1 + 2\,\text{Re}\{\hat{C}_{+}(\omega)\}]$, we obtain

\hat{C}_{ii}(\omega) = \nu_{i}\,\text{Re}\left\{\frac{1 + \hat{P}_{0}(\omega)}{1 - \hat{P}_{0}(\omega)}\right\} \quad \text{for } \omega \neq 0\,.   (7.44)

For $\omega=0$, the Fourier integral over the right-hand side of Eq. (7.40) diverges, since $\int_{0}^{\infty} P_{0}(s)\,\text{d}s = 1$. If we add the diverging term from Eq. (7.38), we arrive at

\hat{C}_{ii}(\omega) = \nu_{i}\,\text{Re}\left\{\frac{1 + \hat{P}_{0}(\omega)}{1 - \hat{P}_{0}(\omega)}\right\} + 2\pi\,\nu_{i}^{2}\,\delta(\omega)\,.   (7.45)

This is a standard result of stationary renewal theory (105) which has been repeatedly applied to neuronal spike trains (137; 36).
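In practice, Eq. (7.44) allows the noise spectrum to be computed directly from a measured or model interval distribution. The following sketch is our own illustration (the quadrature scheme and parameter values are assumptions for the example); it evaluates $\hat{P}_0(\omega)$ by direct numerical integration:

```python
import numpy as np

def renewal_spectrum(P0, s, omega):
    """Noise spectrum of a stationary renewal process via Eq. (7.44),
    with P0_hat obtained by direct numerical quadrature."""
    ds = s[1] - s[0]
    nu = 1.0 / (np.sum(P0 * s) * ds)        # rate = 1 / mean interval
    # P0_hat(omega) = int P0(s) exp(-i omega s) ds
    P0_hat = np.array([np.sum(P0 * np.exp(-1j * w * s)) * ds for w in omega])
    return nu * np.real((1.0 + P0_hat) / (1.0 - P0_hat))

ds = 1e-5
s = np.arange(ds, 0.3, ds)
delta_abs, r = 5e-3, 100.0                  # absolute refractoriness and rate
P0 = np.where(s > delta_abs, r * np.exp(-r * (s - delta_abs)), 0.0)
omega = 2.0 * np.pi * np.array([1.0, 10.0, 100.0, 1000.0])
print(renewal_spectrum(P0, s, omega))       # suppressed at low frequencies
```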

Fig. 7.13: The autocorrelation is a sum of the interspike interval distribution convolved with itself (graphical representation of Eq. (7.40)). The interspike interval distribution ($P_{0}(s)$, thick line) is added sequentially to self-convolutions such as $\int_{0}^{\infty} P_{0}(s')\,P_{0}(s-s')\,\text{d}s'$ (dashed lines). The partial sums (full lines) gradually converge to the autocorrelation function ($C_{+}(s)$, dotted line).

Example: Stationary Poisson process

In Section 7.2.1 and Section 7.5 we have already discussed the Poisson neuron from the perspective of mean firing rate and renewal theory, respectively. The autocorrelation of a Poisson process is

C_{ii}(s) = \nu\,\delta(s) + \nu^{2}\,.   (7.46)

We want to show that Eq. (7.46) follows from Eq. (7.40).

Since the interval distribution of a Poisson process is exponential [cf. Eq. (7.16) with $\Delta^{\rm abs}=0$], we can evaluate the integrals on the right-hand side of Eq. (7.40) in a straightforward manner. The result is

C_{+}(s) = \nu\,e^{-\nu s}\left[1 + \nu s + \tfrac{1}{2}(\nu s)^{2} + \dots\right] = \nu\,.   (7.47)

Hence, with Eq. (7.42), we obtain the autocorrelation function (7.46) of a homogeneous Poisson process. The Fourier transform of Eq. (7.46) yields a flat spectrum with a $\delta$ peak at zero:

\hat{C}_{ii}(\omega) = \nu + 2\pi\,\nu^{2}\,\delta(\omega)\,.   (7.48)

The result could have also been obtained by evaluating Eq. (7.45).

Example: Poisson process with absolute refractoriness

We return to the Poisson process with absolute refractoriness defined in Eq. (7.16). Apart from an absolute refractory time $\Delta^{\rm abs}$, the neuron fires with rate $r$. For $\omega \neq 0$, Eq. (7.45) yields the noise spectrum

\hat{C}_{ii}(\omega) = \nu\left\{1 + 2\,\frac{\nu^{2}}{\omega^{2}}\,[1-\cos(\omega\,\Delta^{\rm abs})] + 2\,\frac{\nu}{\omega}\,\sin(\omega\,\Delta^{\rm abs})\right\}^{-1}\,,   (7.49)

cf. Fig. 7.12B. In contrast to the stationary Poisson process [cf. Eqs. (7.46) and (7.48)], the noise spectrum of a neuron with absolute refractoriness $\Delta^{\rm abs}>0$ is no longer flat. In particular, for $\omega\to 0$, the noise spectrum is decreased by a factor $[1 + 2(\nu\,\Delta^{\rm abs}) + (\nu\,\Delta^{\rm abs})^{2}]^{-1}$. Eq. (7.49) and generalizations thereof have been used to fit the power spectrum of, e.g., auditory neurons (137) and MT neurons (36).

Can we understand the decrease in the noise spectrum for $\omega\to 0$? The mean interval of a Poisson neuron with absolute refractoriness is $\langle s \rangle = \Delta^{\rm abs} + r^{-1}$. Hence the mean firing rate is

\nu = \frac{r}{1 + \Delta^{\rm abs}\,r}\,.   (7.50)

For $\Delta^{\rm abs}=0$ we retrieve the stationary Poisson process of Eq. (7.3) with $\nu=r$. For finite $\Delta^{\rm abs}$ the firing is more regular than that of a Poisson process with the same mean rate $\nu$. We note that for finite $\Delta^{\rm abs}>0$, the mean firing rate remains bounded even if $r\to\infty$: the neuron then fires regularly with period $\Delta^{\rm abs}$. Because the spike train of a neuron with refractoriness is more regular than that of a Poisson neuron with the same mean rate, the spike count over a long interval, and hence the spectrum for $\omega\to 0$, is less noisy. This means that Poisson neurons with absolute refractoriness can transmit slow signals more reliably than a simple Poisson process.
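A quick numerical check of Eq. (7.50) and of the low-frequency suppression factor (a sketch with illustrative parameter values of our choosing):

```python
import numpy as np

delta_abs, r = 5e-3, 100.0
rng = np.random.default_rng(2)

# Simulated intervals of a Poisson process with absolute refractoriness
isi = delta_abs + rng.exponential(1.0 / r, size=200_000)
print(1.0 / isi.mean(), r / (1.0 + delta_abs * r))  # simulated vs. Eq. (7.50)

# Low-frequency suppression factor of the noise spectrum, cf. Eq. (7.49)
nu = r / (1.0 + delta_abs * r)
print(1.0 / (1.0 + 2.0 * nu * delta_abs + (nu * delta_abs) ** 2))  # ~0.56 < 1
```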

7.5.4 Input dependent renewal theory (*)

It is possible to use the renewal concept in a broader sense and define a renewal process as a system where the state at time $t$ (and hence the probability of generating an event at $t$) depends both on the time that has passed since the last event (i.e., the firing time $\hat{t}$) and on the input $I(t')$, $\hat{t} < t' < t$, that the system received since the last event. Input-dependent renewal systems are also called modulated renewal processes (425), non-stationary renewal systems (182; 183), or inhomogeneous Markov interval processes (251). The aim of a theory of input-dependent renewal systems is to predict the probability density

P_{I}(t|\hat{t})   (7.51)

of the next event to occur at time $t$, given the timing $\hat{t}$ of the last event and the input $I(t')$ for $\hat{t} < t' < t$; see Fig. 7.14. The relation between hazard, survivor function, and interval distribution in the input-dependent case is the same as that given in Eqs. (7.25)–(7.28). The generalization to a time-dependent renewal theory will be useful later on, in Chapter 9.

The lower index $I$ of $P_{I}(t|\hat{t})$ is intended to remind the reader that the probability density $P_{I}(t|\hat{t})$ depends on the time course of the input $I(t')$ for $t' < t$. Since $P_{I}(t|\hat{t})$ is conditioned on the spike at $\hat{t}$, it can be called a spike-triggered spike density. We interpret $P_{I}(t|\hat{t})$ as the distribution of interspike intervals in the presence of an input current $I$, or as the input-dependent interval distribution. For stationary input, $P_{I}(t|\hat{t})$ reduces to $P_{0}(t-\hat{t})$.
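For simulation purposes, an input-dependent renewal process can be generated by thinning: candidate events are drawn at a rate $\rho_{\max}$ that bounds the hazard from above and are accepted with probability $\rho(t|\hat{t})/\rho_{\max}$. The sketch below is our own illustration of this standard technique (not a method from the text); the sinusoidally modulated hazard is a hypothetical example of an input-driven intensity:

```python
import numpy as np

def next_spike_by_thinning(hazard, t_hat, rho_max, rng, t_limit=10.0):
    """Draw the next spike time after t_hat for a time-dependent hazard
    rho(t | t_hat) <= rho_max, by thinning candidate events."""
    t = t_hat
    while t < t_limit:
        t += rng.exponential(1.0 / rho_max)          # candidate event time
        if rng.uniform() < hazard(t, t_hat) / rho_max:
            return t                                  # candidate accepted
    return None                                       # no spike before t_limit

# Hypothetical example: absolute refractoriness times an input-driven rate
drive = lambda t: 50.0 * (1.0 + 0.5 * np.sin(2.0 * np.pi * 5.0 * t))
hazard = lambda t, t_hat: 0.0 if t - t_hat < 2e-3 else drive(t)
rng = np.random.default_rng(3)
print(next_spike_by_thinning(hazard, t_hat=0.0, rho_max=75.0, rng=rng))
```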

Fig. 7.14: Input-dependent interval distribution. A. A neuron, stimulated by the current $I(t)$, has emitted a first spike at $\hat{t}$. B. The interval distribution $P_{I}(t|\hat{t})$ gives the probability density that the next spike occurs after an interval $t-\hat{t}$.