It is helpful to break neural data analysis into two basic problems. The ‘encoding’ problem concerns how information is encoded in neural spike trains: can we predict the spike trains of a neuron (or population of neurons), given an arbitrary synaptic input, current injection, or sensory stimulus? Conversely, the ‘decoding’ problem concerns how much we can learn from the observation of a sequence of spikes: in particular, how well can we estimate the stimulus that gave rise to the spike train?
The problems of encoding and decoding are difficult both because neural responses are stochastic and because we want to identify these response properties given any possible stimulus in some very large set (e.g., all images that might occur in the world), and there are typically far more such stimuli than we can hope to sample by brute force. Thus the neural coding problem is fundamentally statistical: given a finite number of samples of noisy physiological data, how do we estimate, in a global sense, the neural codebook?
This basic question has taken on a new urgency as neurophysiological recordings allow us to peer into the brain with ever greater facility: with the development of fast computers, inexpensive memory, and large-scale multineuronal recording and high-resolution imaging techniques, it has become feasible to directly observe and analyze neural activity at a level of detail that was impossible in the 20th century. Experimentalists now routinely record from hundreds of neurons simultaneously, posing great challenges for data analysis by computational neuroscientists and statisticians. Indeed, it has become clear that sophisticated statistical techniques are necessary to understand the neural code: many of the key questions cannot be answered without powerful statistical tools.
This chapter describes statistical model-based techniques that provide a unified approach to both encoding and decoding. These statistical models can capture stimulus dependencies as well as spike history and inter-neuronal interaction effects in populations of spike trains, and are intimately related to the generalized integrate-and-fire models discussed in the previous chapters.
In Section 10.1, we establish the notation that enables us to identify the relevant model parameters and introduce the concept of parameter optimization in a linear model. We then leave the realm of linear models and turn to the models discussed in preceding chapters (e.g., the Spike Response Model with escape noise in Ch. 9), where spike generation is stochastic and nonlinear.
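To give a concrete sense of what parameter optimization in a linear model looks like in practice, the following sketch fits a linear stimulus filter by ordinary least squares. The variable names, the white-noise stimulus, and the exponential "ground-truth" filter are illustrative assumptions, not the book's notation or data.

```python
import numpy as np

# Minimal sketch: least-squares fit of a linear encoding model.
# Assumptions (not from the text): a 1-D stimulus sampled in discrete bins,
# and a "response" given by a binned firing rate.

rng = np.random.default_rng(0)

n_bins, filter_len = 5000, 20
stimulus = rng.standard_normal(n_bins)               # white-noise stimulus
true_filter = np.exp(-np.arange(filter_len) / 5.0)   # hypothetical ground truth

# Design matrix X: each row holds the last `filter_len` stimulus values,
# ordered so that the most recent value is in the last column.
X = np.stack([stimulus[i - filter_len:i] for i in range(filter_len, n_bins)])
rate = X @ true_filter[::-1] + 0.5 * rng.standard_normal(len(X))  # noisy response

# Linear-model parameter optimization: k_hat = argmin_k ||rate - X k||^2.
k_hat, *_ = np.linalg.lstsq(X, rate, rcond=None)

print("filter estimation error:", np.linalg.norm(k_hat - true_filter[::-1]))
```

Because the model is linear in its parameters, the optimum has a closed form and can be found reliably; the nonlinear, stochastic models of the following sections require the likelihood-based machinery introduced next.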
In Section 10.2, we recast the same neuron models in the slightly more abstract language of statistics. The likelihood of a spike train given the stimulus plays a central role in statistical models of encoding. As we have seen in Chapter 9, the stochasticity introduced by ‘escape noise’ in the Spike Response Model (SRM) or other generalized integrate-and-fire models enables us to write down the likelihood that an observed spike train could have been generated by the neuron model. Likelihood-based optimization methods for fitting these neuron models to data allow us to predict neuronal spike timing for future, unknown stimuli. Thus, the SRM and other generalized integrate-and-fire models can be viewed as encoding models. In Chapter 11 we will see that the same models can also be used to perform optimal decoding.
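To make the role of the likelihood concrete, the sketch below evaluates the discrete-time log-likelihood of an observed spike train under a model that specifies a conditional firing intensity, as an SRM or GLM with escape noise does. The Bernoulli-per-bin discretization, the function name, and the toy data are our own illustrative assumptions.

```python
import numpy as np

def spike_train_log_likelihood(spikes, intensity, dt):
    """Discrete-time point-process log-likelihood (illustrative sketch).

    spikes    : 0/1 array, one entry per time bin (1 = spike in that bin)
    intensity : conditional intensity lambda(t) in Hz, same length as `spikes`,
                e.g. produced by an SRM/GLM given stimulus and spike history
    dt        : bin width in seconds (assumed small, at most one spike per bin)
    """
    p_spike = intensity * dt                       # probability of a spike per bin
    p_spike = np.clip(p_spike, 1e-12, 1 - 1e-12)   # avoid log(0)
    # Sum of log p over spike bins plus log(1 - p) over silent bins.
    return np.sum(spikes * np.log(p_spike) + (1 - spikes) * np.log(1 - p_spike))

# Usage sketch: compare two candidate intensity predictions for the same data.
dt = 0.001
spikes = np.zeros(1000); spikes[[100, 400, 750]] = 1
good_model = np.where(spikes > 0, 80.0, 2.0)   # high rate where spikes occurred
flat_model = np.full(1000, 5.0)                # constant-rate alternative
print(spike_train_log_likelihood(spikes, good_model, dt) >
      spike_train_log_likelihood(spikes, flat_model, dt))   # True
```

Fitting an encoding model then amounts to adjusting its parameters so that the log-likelihood of the recorded spike trains is as large as possible.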
The emphasis of this chapter is on likelihood-based methods for model optimization. These likelihood-based optimization methods are computationally tractable due to a key concavity property of the model likelihood (383). However, the likelihood is only one of several quantities that can be used to compare spike trains and to quantify model performance. In Section 10.3 we review different performance measures for the ‘goodness of fit’ of a model. In particular, we present the notion of ‘spike train similarity’ and the ‘time rescaling theorem’ (73).
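As a preview of the time-rescaling idea, the sketch below transforms the observed interspike intervals with the model's cumulative intensity; if the model intensity is correct, the rescaled quantities should be uniformly distributed, which can be checked with a Kolmogorov-Smirnov-style comparison. The trapezoidal integration, the constant-rate test data, and the function name are illustrative assumptions rather than the book's code.

```python
import numpy as np

def rescaled_intervals(spike_times, intensity, times):
    """Time-rescaling transform (sketch). If the model intensity is correct,
    the returned values should be i.i.d. uniform on [0, 1]."""
    # Cumulative intensity Lambda(t) = integral of lambda up to t (trapezoid rule).
    Lambda = np.concatenate(([0.0], np.cumsum(0.5 * (intensity[1:] + intensity[:-1])
                                              * np.diff(times))))
    Lambda_at_spikes = np.interp(spike_times, times, Lambda)
    taus = np.diff(Lambda_at_spikes)          # rescaled interspike intervals
    return 1.0 - np.exp(-taus)                # exponential -> uniform transform

# Usage sketch: spikes drawn from a constant-rate process should pass the check.
rng = np.random.default_rng(1)
rate = 20.0
spike_times = np.cumsum(rng.exponential(1.0 / rate, size=200))
times = np.linspace(0.0, spike_times[-1], 10_000)
z = np.sort(rescaled_intervals(spike_times, np.full_like(times, rate), times))
uniform_quantiles = (np.arange(1, len(z) + 1) - 0.5) / len(z)
print("max KS deviation:", np.max(np.abs(z - uniform_quantiles)))
```

Large deviations from uniformity indicate a systematic mismatch between the fitted intensity and the recorded spike train, complementing likelihood-based comparisons.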
Finally, in Section 10.4 we apply the ideas developed in this chapter to adaptively choose the optimal stimuli for characterizing the response function.
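To give a flavor of what adaptively choosing a stimulus can mean in practice, the following greedy, information-based selection rule scores each candidate stimulus by how much it is expected to reduce uncertainty about the model parameters and presents the highest-scoring one next. This is one common approach and an illustrative sketch under our own assumptions (an exponential-nonlinearity Poisson model and a Gaussian approximation of the parameter posterior), not necessarily the procedure developed in Section 10.4.

```python
import numpy as np

def choose_next_stimulus(candidates, theta_hat, posterior_cov):
    """Greedy info-max stimulus selection (illustrative sketch).

    candidates    : array of shape (n_candidates, dim), possible stimuli
    theta_hat     : current estimate of the linear filter, shape (dim,)
    posterior_cov : Gaussian approximation of parameter uncertainty (dim x dim)

    Assumes a Poisson model with rate exp(x . theta); the Fisher information of
    stimulus x is then rate * x x^T, and by the matrix determinant lemma the
    expected gain in posterior log-precision is log(1 + rate * x^T C x).
    """
    rates = np.exp(candidates @ theta_hat)                      # predicted rates
    leverage = np.einsum('ij,jk,ik->i', candidates, posterior_cov, candidates)
    scores = np.log1p(rates * leverage)                         # expected info gain
    return candidates[np.argmax(scores)]

# Usage sketch with hypothetical numbers.
rng = np.random.default_rng(2)
dim = 5
candidates = rng.standard_normal((100, dim))
theta_hat = 0.1 * rng.standard_normal(dim)
posterior_cov = np.eye(dim)
print(choose_next_stimulus(candidates, theta_hat, posterior_cov))
```

After each new response is recorded, the parameter estimate and its uncertainty are updated and the selection step is repeated, so that the experiment concentrates on the stimuli that are most informative about the response function.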