In the previous sections we developed robust and tractable approaches, based on GLMs, to understanding neural encoding and to quantifying the performance of these models. The framework is ultimately data-driven: both our encoding and decoding methods fail if the observed data do not sufficiently constrain the encoding model parameters $\theta$. We therefore close by describing how to exploit the properties of the GLM to optimize our experiments: the objective is to select, in an online, closed-loop manner, the stimuli that will most efficiently characterize the neuron’s response properties.
An important property of GLMs is that not all stimuli provide the same amount of information about the unknown coefficients $\theta$. As a concrete example, we can typically learn much more about a visual neuron’s response properties if we place stimulus energy within the receptive field, rather than ‘wasting’ stimulus energy outside it. To make this idea rigorous and generally applicable, we need a well-defined objective function that ranks any given stimulus according to its potential informativeness. Numerous objective functions have been proposed for quantifying the utility of different stimuli (319; 362; 316). When the goal is to estimate the unknown parameters of a model, it makes sense to choose stimuli which will on average reduce the uncertainty in the parameters $\theta$ as quickly as possible (as in the game of twenty questions), given $D_t$, the observed data up to the current trial. This posterior uncertainty in $\theta$ can be quantified using the information-theoretic notion of ‘entropy’; see (104; 319; 384) for further details.
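To make the entropy objective concrete, here is a minimal numpy sketch (an illustration under stated assumptions, not the procedure of (296) itself; the names `expected_info_gain`, `theta_map`, and `C` are ours). It assumes a Poisson GLM with exponential nonlinearity, $\lambda = \exp(\theta^\top x)$, and a Gaussian approximation $\mathcal{N}(\theta_{\rm MAP}, C)$ to the posterior: a single observation contributes Fisher information $\lambda\, x x^\top$, so the expected reduction in posterior entropy from presenting $x$ is roughly $\tfrac{1}{2}\log(1 + \lambda\, x^\top C x)$.

```python
import numpy as np

def expected_info_gain(x, theta_map, C):
    """Approximate entropy reduction (in nats) from presenting stimulus x.

    Assumes a Poisson GLM with exponential nonlinearity,
    lambda = exp(theta . x), and a Gaussian posterior N(theta_map, C).
    One observation adds Fisher information lambda * x x^T, a rank-one
    term, so the posterior log-determinant shrinks by about
    log(1 + lambda * x^T C x)."""
    lam = np.exp(theta_map @ x)     # predicted firing rate (plug-in at the MAP)
    return 0.5 * np.log1p(lam * (x @ C @ x))

# Rank a pool of candidate stimuli and pick the most informative one.
rng = np.random.default_rng(0)
d = 20                                      # stimulus dimensionality (illustrative)
theta_map = rng.normal(scale=0.1, size=d)   # current posterior mean
C = np.eye(d)                               # current posterior covariance
candidates = rng.normal(size=(500, d))
gains = np.array([expected_info_gain(x, theta_map, C) for x in candidates])
x_next = candidates[gains.argmax()]
```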
In general, information-theoretic quantities such as the entropy can be difficult to compute and optimize in high-dimensional spaces. However, as shown in (296), the special structure of the GLM can be exploited (along with a Gaussian approximation to the posterior $p(\theta \mid D_t)$) to obtain a surprisingly efficient procedure for choosing stimuli optimally in many cases. Indeed, such a closed-loop optimization procedure leads to much more efficient experiments than the standard open-loop approach of stimulating the cell with randomly chosen stimuli that are not optimized adaptively for the neuron under study.
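The efficiency comes from the fact that, for the GLM, the likelihood depends on $\theta$ only through the one-dimensional projection $\theta^\top x$, so each trial moves the Gaussian approximation along a single direction. The following sketch of one online update again assumes the Poisson/exponential model above (`update_posterior` is our name, and this is a simplified rendering, not the exact recursion of (296)):

```python
import numpy as np

def update_posterior(x, y, theta_map, C, n_newton=10):
    """One Laplace-style update of the Gaussian posterior N(theta_map, C)
    after observing spike count y in response to stimulus x
    (Poisson GLM, exponential nonlinearity).

    The log-likelihood depends on theta only through u = theta . x, so
    the MAP moves along the direction C @ x and the covariance receives
    a rank-one (Sherman-Morrison) correction -- the structure exploited
    by the fast per-trial recursion of (296)."""
    cx = C @ x
    s = x @ cx                 # prior variance along the stimulus direction
    u0 = theta_map @ x         # prior mean of the projection
    u = u0
    for _ in range(n_newton):  # 1-D Newton solve for the MAP projection u*
        lam = np.exp(u)
        grad = y - lam - (u - u0) / s
        hess = -lam - 1.0 / s
        u -= grad / hess
    lam = np.exp(u)
    theta_new = theta_map + ((u - u0) / s) * cx
    C_new = C - (lam / (1.0 + lam * s)) * np.outer(cx, cx)
    return theta_new, C_new
```

Because the expensive linear algebra reduces to operations on the single vector `C @ x`, each trial costs $O(d^2)$ rather than the $O(d^3)$ of a generic Newton update, which is what makes stimulus selection feasible between trials.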
A common argument against online stimulus optimization is that neurons are highly adaptive: a stimulus that might be optimal for a given neuron in a quiescent state may quickly become suboptimal due to adaptation (in the form of short- and long-term synaptic plasticity, slow network dynamics, etc.). Including spike-history terms in the GLM allows us to incorporate some forms of adaptation (particularly those due to intrinsic processes such as sodium channel inactivation and calcium-activated potassium channels), and these spike-history effects are easily incorporated into the derivation of the optimal stimulus (296), as sketched below. However, extending these results to models with more profound sources of adaptation is an important open research direction; see (296; 128) for further discussion.
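As a brief illustration of how spike-history terms enter the model (the function name and filter layout here are our own), the conditional intensity simply gains a feedback term; at stimulus-selection time the recent spiking is already observed, so this term acts as a known offset in the information-gain computation above:

```python
import numpy as np

def conditional_intensity(k, h, x_t, recent_spikes):
    """Conditional intensity of a GLM with spike-history feedback:
    lambda_t = exp(k . x_t + h . y_recent).

    k is the stimulus filter, h the post-spike (history) filter, and
    recent_spikes holds the last len(h) bins of the neuron's own response.
    A negative h captures refractoriness and spike-frequency adaptation;
    since h . y_recent is known when the next stimulus is chosen, it
    enters the optimal-stimulus derivation as a fixed offset."""
    return np.exp(k @ x_t + h @ recent_spikes)
```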