Neuronal Dynamics
online book
From single neurons to networks and models of cognition
Wulfram Gerstner, Werner M. Kistler, Richard Naud and Liam Paninski
Copyright notice
© Cambridge University Press. This online book is in copyright. No reproduction of any part of it may take place without the written permission of Cambridge University Press.
First-edition Errata
Errors found in the first edition are collected in the First-edition Errata section and corrected in the online version. Should you find any errors, please write to Wulfram Gerstner.
Table of Contents
I
Foundations of Neuronal Dynamics
1
Introduction
1.1
Elements of Neuronal Systems
1.1.1
The Ideal Spiking Neuron
1.1.2
Spike Trains
1.1.3
Synapses
1.1.4
Neurons are part of a big system
1.2
Elements of Neuronal Dynamics
1.2.1
Postsynaptic Potentials
1.2.2
Firing Threshold and Action Potential
1.3
Integrate-And-Fire Models
1.3.1
Integration of Inputs
1.3.2
Pulse Input
1.3.3
The Threshold for Spike Firing
1.3.4
Time-dependent Input (*)
(*) Sections marked by an asterisk are mathematically more advanced and can be omitted during a first reading of the book.
1.3.5
Linear Differential Equation vs. Linear Filter: Two Equivalent Pictures (*)
1.3.6
Periodic drive and Fourier transform (*)
1.4
Limitations of the Leaky Integrate-and-Fire Model
1.4.1
Adaptation, Bursting, and Inhibitory Rebound
1.4.2
Shunting Inhibition and Reversal Potential
1.4.3
Conductance Changes after a Spike
1.4.4
Spatial Structure
1.5
What Can We Expect from Integrate-And-Fire Models?
1.6
Summary
Literature
Exercises
2
The Hodgkin-Huxley Model
2.1
Equilibrium potential
2.1.1
Nernst potential
2.1.2
Reversal Potential
2.2
Hodgkin-Huxley Model
2.2.1
Definition of the model
2.2.2
Stochastic Channel Opening
2.2.3
Dynamics
2.3
The Zoo of Ion Channels
2.3.1
Framework for biophysical neuron models
2.3.2
Sodium Ion Channels and the Type-I Regime
2.3.3
Adaptation and Refractoriness
2.3.4
Subthreshold Effects
2.3.5
Calcium spikes and postinhibitory rebound
2.4
Summary
Literature
Exercises
3
Dendrites and Synapses
3.1
Synapses
3.1.1
Inhibitory Synapses
3.1.2
Excitatory Synapses
3.1.3
Rapid synaptic dynamics
3.2
Spatial Structure: The Dendritic Tree
3.2.1
Derivation of the Cable Equation
3.2.2
Green’s Function of the passive Cable
3.2.3
Non-linear Extensions to the Cable Equation
3.3
Spatial Structure: Axons
3.3.1
Unmyelinated Axons
3.3.2
Myelinated Axons
3.4
Compartmental Models
3.5
Summary
Literature
Exercises
4
Dimensionality Reduction and Phase Plane Analysis
4.1
Threshold effects
4.1.1
Pulse Input
4.1.2
Step Current Input
4.2
Reduction to two dimensions
4.2.1
General approach
4.2.2
Mathematical steps (*)
4.3
Phase plane analysis
4.3.1
Nullclines
4.3.2
Stability of Fixed Points
4.4
Type I and Type II neuron models
4.4.1
Type I Models and Saddle-Node-onto-Limit-Cycle Bifurcation
4.4.2
Type II Models and Saddle-Node-off-Limit-Cycle Bifurcation
4.4.3
Type II Models and Hopf Bifurcation
4.5
Threshold and excitability
4.5.1
Type I models
4.5.2
Hopf Bifurcations
4.6
Separation of time scales and reduction to one dimension
4.7
Summary
Literature
Exercises
II
Generalized Integrate-and-Fire Neurons
5
Nonlinear Integrate-and-Fire Models
5.1
Thresholds in a nonlinear integrate-and-fire model
5.1.1
Where is the firing threshold?
5.1.2
Detour: Analysis of One-Dimensional Differential Equations
5.2
Exponential Integrate-and-Fire Model
5.2.1
Extracting the Nonlinearity from Data
5.2.2
From Hodgkin-Huxley to Exponential Integrate-and-Fire
5.3
Quadratic Integrate and Fire
5.3.1
Canonical Type I model (*)
5.4
Summary
Literature
Exercises
6
Adaptation and Firing Patterns
6.1
Adaptive Exponential Integrate-and-Fire
6.2
Firing Patterns
6.2.1
Classification of Firing Patterns
6.2.2
Phase plane analysis of non-linear integrate-and-fire models in two dimensions
6.2.3
Exploring the Space of Reset Parameters
6.2.4
Exploring the Space of Subthreshold Parameters
6.3
Biophysical Origin of Adaptation
6.3.1
Subthreshold adaptation by a single slow channel
6.3.2
Spike-triggered adaptation arising from a biophysical ion-channel
6.3.3
Subthreshold adaptation caused by passive dendrites
6.4
Spike response model (SRM)
6.4.1
Definition of the SRM
6.4.2
Interpretation of η and κ
6.4.3
Mapping the Integrate-and-Fire Model to the SRM
6.4.4
Multi-compartment integrate-and-fire model as a SRM (*)
6.5
Summary
Literature
Exercises
7
Variability of Spike Trains and Neural Codes
7.1
Spike train variability
7.1.1
Are neurons noisy?
7.1.2
Noise from the Network
7.2
Mean Firing Rate
7.2.1
Rate as a Spike Count and Fano Factor
7.2.2
Rate as a Spike Density and the Peri-Stimulus-Time Histogram
7.2.3
Rate as a Population Activity (Average over Several Neurons)
7.3
Interval distribution and coefficient of variation
7.3.1
Coefficient of variation C_V
7.4
Autocorrelation function and noise spectrum
7.5
Renewal statistics
7.5.1
Survivor function and hazard
7.5.2
Renewal theory and experiments
7.5.3
Autocorrelation and noise spectrum of a renewal process (*)
7.5.4
Input dependent renewal theory (*)
7.6
The Problem of Neural Coding
7.6.1
Limits of rate codes
7.6.2
Candidate temporal codes
7.7
Summary
Literature
8
Noisy Input Models: Barrage of Spike Arrivals
8.1
Noise input
8.1.1
White noise
8.1.2
Noisy versus noiseless membrane potential
8.1.3
Colored Noise
8.2
Stochastic spike arrival
8.2.1
Membrane potential fluctuations caused by spike arrivals
8.2.2
Balanced excitation and inhibition
8.3
Subthreshold vs. Superthreshold regime
8.4
Diffusion limit and Fokker-Planck equation (*)
8.4.1
Threshold and firing
8.4.2
Interval distribution for the diffusive noise model
8.4.3
Mean interval and mean firing rate (diffusive noise)
8.5
Summary
Literature
Exercises
9
Noisy Output: Escape Rate and Soft Threshold
9.1
Escape noise
9.1.1
Escape rate
9.1.2
Transition from continuous time to discrete time
9.2
Likelihood of a spike train
9.3
Renewal Approximation of the Spike Response Model
9.4
From noisy inputs to escape noise
9.4.1
Leaky integrate-and-fire model with noisy input
9.4.2
Stochastic resonance
9.5
Summary
Literature
Exercises
10
Estimating Models
10.1
Parameter optimization in linear and nonlinear models
10.1.1
Linear Models
10.1.2
Generalized Linear Models
10.2
Statistical Formulation of Encoding Models
10.2.1
Parameter estimation
10.2.2
Regularization: maximum penalized likelihood
10.2.3
Fitting Generalized Integrate-and-Fire models to Data
10.2.4
Extensions (*)
10.3
Evaluating Goodness-of-fit
10.3.1
Comparing Spiking Membrane Potential Recordings
10.3.2
Spike Train Likelihood
10.3.3
Time-rescaling Theorem
10.3.4
Spike Train Metric
10.3.5
Comparing Sets of Spike Trains
10.4
Closed-loop stimulus design
10.5
Summary
Literature
Exercises
11
Encoding and Decoding with Stochastic Neuron models
11.1
Encoding Models for Intracellular Recordings
11.1.1
Predicting Membrane Potential
11.1.2
Predicting Spikes
11.1.3
How good are generalized integrate-and-fire models?
11.2
Encoding Models in Systems Neuroscience
11.2.1
Receptive fields and Linear-Nonlinear Poisson Model
11.2.2
Multiple Neurons
11.3
Decoding
11.3.1
Maximum a posteriori decoding
11.3.2
Assessing decoding uncertainty (*)
11.3.3
Decoding in vision and neuroprosthetics
11.4
Summary
Literature
Exercises
III
Networks of Neurons and Population Activity
12
Neuronal Populations
12.1
Columnar organization
12.1.1
Receptive fields
12.1.2
How many populations?
12.1.3
Distributed assemblies
12.2
Identical Neurons: A Mathematical Abstraction
12.2.1
Homogeneous networks
12.2.2
Heterogeneous networks
12.3
Connectivity Schemes
12.3.1
Full connectivity
12.3.2
Random coupling: Fixed coupling probability
12.3.3
Random coupling: Fixed number of presynaptic partners
12.3.4
Balanced excitation and inhibition
12.3.5
Interacting Populations
12.3.6
Distance dependent connectivity
12.3.7
Spatial Continuum Limit (*)
12.4
From Microscopic to Macroscopic
12.4.1
Stationary activity and asynchronous firing
12.4.2
Stationary Activity as Single-Neuron Firing Rate
12.4.3
Activity of a fully connected network
12.4.4
Activity of a randomly connected network
12.4.5
Apparent stochasticity and chaos in a deterministic network
12.5
Summary
Literature
Exercises
13
Continuity Equation and the Fokker-Planck Approach
13.1
Continuity equation
13.1.1
Distribution of membrane potentials
13.1.2
Flux and continuity equation
13.2
Stochastic spike arrival
13.2.1
Jumps of membrane potential due to stochastic spike arrival
13.2.2
Drift of membrane potential
13.2.3
Population activity
13.2.4
Single neuron versus population of neurons
13.3
Fokker-Planck equation
13.3.1
Stationary solution for leaky integrate-and-fire neurons (*)
13.4
Networks of leaky integrate-and-fire neurons
13.4.1
Multiple populations
13.4.2
Synchrony, oscillations, and irregularity
13.5
Networks of Nonlinear Integrate-and-Fire Neurons
13.5.1
Steady state population activity
13.5.2
Response to modulated input (*)
13.6
Neuronal adaptation and synaptic conductance
13.6.1
Adaptation currents
13.6.2
Embedding in a network
13.6.3
Conductance input vs. current input
13.6.4
Colored Noise (*)
13.7
Summary
Literature
Exercises
14
The Integral-equation Approach
14.1
Population activity equations
14.1.1
Assumptions of time-dependent renewal theory
14.1.2
Integral equations for non-adaptive neurons
14.1.3
Normalization and derivation of the integral equation
14.1.4
Integral equation for adaptive neurons
14.1.5
Numerical methods for integral equations (*)
14.2
Recurrent Networks and Interacting Populations
14.2.1
Several populations and networks with self-interaction
14.2.2
Stationary states and fixed points of activity
14.2.3
Oscillations and stability of the stationary state (*)
14.3
Linear response to time-dependent input
14.3.1
Derivation of the linear response filter (*)
14.4
Density equations vs. integral equations
14.4.1
Refractory densities
14.4.2
From refractory densities to the integral equation (*)
14.4.3
From refractory densities to membrane potential densities (*)
14.5
Adaptation in Population Equations
14.5.1
Quasi-Renewal Theory (*)
14.5.2
Event-based Moment Expansion (*)
14.6
Heterogeneity and Finite Size
14.6.1
Finite number of neurons (*)
14.7
Summary
Literature
Exercises
15
Fast Transients and Rate Models
15.1
How fast are population responses?
15.2
Fast transients vs. slow transients in models
15.2.1
Fast transients for low noise or ‘slow’ noise
15.2.2
Populations of neurons with escape noise
15.2.3
Populations of neurons with diffusive noise
15.3
Rate models
15.3.1
Rate models have slow transients
15.3.2
Networks of rate models
15.3.3
Linear-Nonlinear-Poisson and improved transients
15.3.4
Adaptation
15.4
Summary
Literature
Exercises
IV
Dynamics of Cognition
16
Competing Populations and Decision Making
16.1
Perceptual Decision Making
16.1.1
Perception of motion
16.1.2
Where is the decision taken?
16.2
Competition through common inhibition
16.3
Dynamics of decision making
16.3.1
Model with three populations
16.3.2
Effective inhibition
16.3.3
Phase plane analysis
16.3.4
Formal winner-take-all networks
16.4
Alternative Decision Models
16.4.1
The energy picture
16.4.2
Drift-Diffusion Model
16.5
Human Decisions, Determinism, and Free Will
16.5.1
The Libet experiment
16.5.2
Relevant and irrelevant decisions - a critique
16.6
Summary
Literature
Exercises
17
Memory and Attractor Dynamics
17.1
Associations and memory
17.1.1
Recall, recognition, and partial information
17.1.2
Neuronal assemblies
17.1.3
Working memory and delayed matching-to-sample tasks
17.2
Hopfield Model
17.2.1
Detour: Magnetic analogy
17.2.2
Patterns in the Hopfield model
17.2.3
Pattern retrieval
17.2.4
Memory capacity
17.2.5
The energy picture
17.2.6
Retrieval of low-activity patterns
17.3
Memory networks with spiking neurons
17.3.1
Activity of spiking networks
17.3.2
Excitatory and inhibitory neurons
17.4
Summary
Literature
Exercises
18
Cortical Field Models for Perception
18.1
Spatial continuum model
18.1.1
Mexican-hat coupling
18.2
Input-driven regime and sensory cortex models
18.2.1
Homogeneous solutions
18.2.2
Stability of homogeneous states (*)
18.2.3
Contrast enhancement
18.2.4
Inhibition, surround suppression, and cortex models
18.3
Bump attractors and spontaneous pattern formation
18.3.1
‘Blobs’ of activity: inhomogeneous states
18.3.2
Sense of Orientation and Head Direction Cells
18.4
Summary
Exercises
19
Synaptic Plasticity and Learning
19.1
Hebb rule and experiments
19.1.1
Long-Term Potentiation
19.1.2
Spike-timing-dependent plasticity
19.2
Models of Hebbian learning
19.2.1
A Mathematical Formulation of Hebb’s Rule
19.2.2
Pair-based Models of STDP
19.2.3
Generalized STDP models
19.3
Unsupervised learning
19.3.1
Competitive learning
19.3.2
Learning equations for rate models
19.3.3
Learning equations for STDP models (*)
19.4
Reward-based learning
19.5
Summary
Literature
Exercises
20
Outlook: Dynamics in Plastic Networks
20.1
Reservoir computing
20.1.1
Rich dynamics
20.1.2
Network analysis (*)
20.2
Oscillations: good or bad?
20.2.1
Synchronous Oscillations and Locking
20.2.2
Oscillations with irregular firing
20.2.3
Phase Models
20.2.4
Synaptic plasticity and oscillations
20.3
Helping Patients
20.4
Summary
Literature
Bibliography
Index
First-edition Errata