18 Cortical Field Models for Perception

18.2 Input-driven regime and sensory cortex models

In this section we study the field equation (18.4) in the input-driven regime. In this regime, a spatially uniform input gives rise to a spatially uniform activity pattern. From a mathematical perspective, the spatially uniform activity pattern is the homogeneous solution of the field equation (Subsection 18.2.1). The stability of the homogeneous solution is discussed in Subsection 18.2.2.

A non-trivial spatial structure in the input gives rise to deviations from the homogeneous solution; the input thus drives the formation of spatial activity patterns. This regime can account for perceptual phenomena such as contrast enhancement, as shown in Subsection 18.2.3. Finally, we discuss how the effective Mexican-hat interaction, necessary for contrast enhancement, could be implemented in cortex with local inhibition (Subsection 18.2.4).

18.2.1 Homogeneous solutions

Although we have kept the above model as simple as possible, the field equation (18.4) is complicated enough to prevent comprehensive analytical treatment. We therefore start our investigation by looking for a special type of solution, i.e., a solution that is uniform over space, but not necessarily constant over time. We call this the homogeneous solution and write $h(x,t)\equiv h(t)$. We expect that a homogeneous solution exists if the external input is homogeneous as well, i.e., if $I^{\text{ext}}(x,t)\equiv I^{\text{ext}}(t)$.

Substitution of the ansatz $h(x,t)\equiv h(t)$ into Eq. (18.4) yields

\tau\,\frac{\text{d}h(t)}{\text{d}t} = -h(t) + \bar{w}\,F[h(t)] + I^{\text{ext}}(t)\,, (18.5)

with $\bar{w}=\int\text{d}y\;w(|y|)$. This is a nonlinear ordinary differential equation for the average input potential $h(t)$. We note that the equation for the homogeneous solution is identical to that of a single population without spatial structure; cf. Ch. 15.

The fixed points of the above equation with $I^{\text{ext}}=0$ are of particular interest because they correspond to a resting state of the network. More generally, we search for stationary solutions for a given constant external input $I^{\text{ext}}(x,t)\equiv I^{\text{ext}}$. The fixed points of Eq. (18.5) are solutions of

F(h) = \frac{h - I^{\text{ext}}}{\bar{w}}\,, (18.6)

which is represented graphically in Fig.  18.5 .

Depending on the strength of the external input, three qualitatively different situations can be observed. For low external stimulation there is a single fixed point at a very low level of neuronal activity. This corresponds to a quiescent state where the activity of the whole network has ceased. Large stimulation results in a fixed point at an almost saturated level of activity, which corresponds to a state where all neurons are firing at their maximum rate. Intermediate values of external stimulation, however, may result in a situation with more than one fixed point. Depending on the shape of the output function and the mean synaptic coupling strength $\bar{w}$, three fixed points may appear. Two of them correspond to the quiescent and the highly activated state, respectively; they are separated by the third fixed point at an intermediate level of activity.
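The three regimes can also be checked numerically. The following short Python sketch is purely illustrative and not part of the original model specification: it assumes a sigmoidal gain function with the parameters $\beta=5$ and $\theta=1$ of Fig. 18.7 and an arbitrary mean coupling $\bar{w}=4$, brackets the solutions of Eq. (18.6) on a grid, and tests each of them with the stability condition (18.7) discussed below.

import numpy as np
from scipy.optimize import brentq

beta, theta = 5.0, 1.0     # gain-function parameters as in Fig. 18.7
w_bar = 4.0                # assumed mean coupling strength (illustrative value)

def F(h):
    # sigmoidal gain function F(h) = 1 / (1 + exp(-beta*(h - theta)))
    return 1.0 / (1.0 + np.exp(-beta * (h - theta)))

def F_prime(h):
    return beta * F(h) * (1.0 - F(h))

def fixed_points(I_ext, h_min=-5.0, h_max=10.0, n=2000):
    # roots of G(h) = -h + w_bar*F(h) + I_ext, i.e. solutions of Eq. (18.6)
    G = lambda h: -h + w_bar * F(h) + I_ext
    hs = np.linspace(h_min, h_max, n)
    roots = []
    for a, b in zip(hs[:-1], hs[1:]):
        if G(a) * G(b) < 0:          # a sign change brackets a fixed point
            roots.append(brentq(G, a, b))
    return roots

for I_ext in (0.0, 0.5, 1.0, 2.0):
    for h_star in fixed_points(I_ext):
        stable = F_prime(h_star) < 1.0 / w_bar    # stability condition (18.7)
        print(f"I_ext={I_ext:.1f}  h*={h_star:.3f}  stable={stable}")

Depending on the value of $I^{\text{ext}}$, the sketch reports one low fixed point, one high fixed point, or all three, mirroring the graphical construction of Fig. 18.5.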

Fig. 18.5: Graphical representation of the fixed-point equation (18.6). The solid line corresponds to the neuronal gain function $F(h)$ and the dashed lines to $(h-I^{\text{ext}})/\bar{w}$ for different amounts of external stimulation $I^{\text{ext}}$. Depending on the amount of $I^{\text{ext}}$ there is either a stable fixed point at low activity (i), a stable fixed point at high activity (v), or a bistable situation with stable fixed points (ii-iv) separated by an unstable fixed point at intermediate level of activity (iii).

Any potential physical relevance of fixed points clearly depends on their stability. Stability under the dynamics defined by the ordinary differential equation Eq. ( 18.5 ) is readily checked using standard analysis. Stability requires that at the intersection

F^{\prime}(h) < \bar{w}^{-1}\,. (18.7)

Thus all fixed points corresponding to quiescent or highly activated states are stable, whereas the middle fixed point in the case of multiple solutions is unstable; cf. Fig. 18.5. This, however, is only half of the truth, because Eq. (18.5) only describes homogeneous solutions. It may therefore well be that the solutions are stable with respect to Eq. (18.5), but unstable with respect to inhomogeneous perturbations, i.e., perturbations that do not have the same amplitude everywhere in the net.

18.2.2 Stability of homogeneous states (*)

In the following we will perform a linear stability analysis of the homogeneous solutions found in the previous section. Readers not interested in the mathematical details can jump directly to section 18.2.3 .

We study the field equation (18.4) and consider small perturbations about the homogeneous solution. A linearization of the field equation will lead to a linear differential equation for the amplitude of the perturbation. The homogeneous solution is said to be stable if the amplitude of every small perturbation decays, whatever its shape.

Fig. 18.6: A. Synaptic coupling function with zero mean as in Eq. (18.14) with $\sigma_{1}=1$ and $\sigma_{2}=10$. B. Fourier transform of the coupling function shown in A; cf. Eq. (18.16).

Suppose $h(x,t)\equiv h_{0}$ is a homogeneous solution of Eq. (18.4), i.e.,

0 = -h_{0} + \int\!\text{d}y\; w(|x-y|)\, F[h_{0}] + I^{\text{ext}}\,. (18.8)

Consider a small perturbation $\delta h(x,t)$ with initial amplitude $|\delta h(x,0)|\ll 1$. We substitute $h(x,t)=h_{0}+\delta h(x,t)$ in Eq. (18.4) and linearize with respect to $\delta h$,

\tau\,\frac{\partial}{\partial t}\delta h(x,t) = -h_{0} - \delta h(x,t) + \int\!\text{d}y\; w(|x-y|)\,\bigl[F(h_{0}) + F^{\prime}(h_{0})\,\delta h(y,t)\bigr] + I^{\text{ext}}(x,t) + \mathcal{O}(\delta h^{2})\,. (18.9)

Here, a prime denotes the derivative with respect to the argument. Zero-order terms cancel each other because of Eq. (18.8). If we collect all terms linear in $\delta h$ we find

\tau\,\frac{\partial}{\partial t}\delta h(x,t) = -\delta h(x,t) + F^{\prime}(h_{0})\,\int\!\text{d}y\; w(|x-y|)\,\delta h(y,t)\,. (18.10)

We make two important observations. First, Eq. (18.10) is linear in the perturbations $\delta h$ – simply because we have neglected terms of order $(\delta h)^{n}$ with $n\geq 2$. Second, the coupling between neurons at locations $x$ and $y$ is mediated by the coupling kernel $w(|x-y|)$ that depends only on the distance $|x-y|$. If we apply a Fourier transform over the spatial coordinates, the convolution integral turns into a simple multiplication. It suffices therefore to discuss a single (spatial) Fourier component of $\delta h(x,t)$. Any specific initial form of $\delta h(x,0)$ can be created from its Fourier components by virtue of the superposition principle. We can therefore proceed without loss of generality by considering a single Fourier component, viz., $\delta h(x,t)=c(t)\,\text{e}^{i k x}$. If we substitute this ansatz in Eq. (18.10) we obtain

\tau\,c^{\prime}(t) = -c(t)\left[1 - F^{\prime}(h_{0})\,\int\!\text{d}y\; w(|x-y|)\,\text{e}^{i\,k\,(y-x)}\right]
= -c(t)\left[1 - F^{\prime}(h_{0})\,\int\!\text{d}z\; w(|z|)\,\text{e}^{i\,k\,z}\right]\,, (18.11)

which is a linear differential equation for the amplitude $c$ of a perturbation with wave number $k$. This equation is solved by

c(t) = c(0)\,\text{e}^{-\kappa(k)\,t}\,, (18.12)

with

\kappa(k) = 1 - F^{\prime}(h_{0})\,\int\!\text{d}z\; w(|z|)\,\text{e}^{i\,k\,z}\,. (18.13)

Stability of the solution $h_{0}$ with respect to a perturbation with wave number $k$ depends on the sign of the real part of $\kappa(k)$. Note that – quite intuitively – only two quantities enter this expression, namely the slope of the activation function evaluated at $h_{0}$ and the Fourier transform of the coupling function $w$ evaluated at $k$. If the real part of the Fourier transform of $w$ stays below $1/F^{\prime}(h_{0})$, then $h_{0}$ is stable. Note that Eqs. (18.12) and (18.13) are valid for an arbitrary coupling function $w(|x-y|)$. In the following, we illustrate the typical behavior for a specific choice of the lateral coupling.
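For a coupling function that is only known numerically, the criterion can be evaluated directly from Eq. (18.13). The sketch below is an illustration only: the kernel (a difference of Gaussians with $\sigma_{1}=1$ and $\sigma_{2}=10$, anticipating Eq. (18.14)) and the slope $F^{\prime}(h_{0})=0.8$ are assumed values, and the Fourier integral is approximated by a discrete sum.

import numpy as np

def kappa(k, z, w_z, slope, dz):
    # kappa(k) = 1 - F'(h0) * integral dz w(|z|) exp(i*k*z), cf. Eq. (18.13)
    w_hat = np.sum(w_z * np.exp(1j * k * z)) * dz
    return 1.0 - slope * w_hat

# assumed example kernel: difference of Gaussians with sigma1 = 1, sigma2 = 10
z = np.linspace(-50.0, 50.0, 4001)
dz = z[1] - z[0]
s1, s2 = 1.0, 10.0
w_z = (s2 * np.exp(-z**2 / (2 * s1**2)) - s1 * np.exp(-z**2 / (2 * s2**2))) / (s2 - s1)

slope = 0.8                      # assumed value of F'(h0)
ks = np.linspace(0.0, 5.0, 501)
growth = np.array([-kappa(k, z, w_z, slope, dz).real for k in ks])  # c(t) ~ exp(-kappa t)
print("most unstable wave number:", ks[np.argmax(growth)])
print("homogeneous state", "stable" if growth.max() < 0 else "unstable")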

Fig. 18.7: A. Gain function $F(h)=\{1+\exp[-\beta(h-\theta)]\}^{-1}$ with $\beta=5$ and $\theta=1$. The dashing indicates the part of the graph where the slope exceeds the critical slope $s^{\ast}$. B. Derivative of the gain function shown in A (solid line) and critical slope $s^{\ast}$ (dashed line).

Example: ‘Mexican-hat’ coupling with zero mean

We describe Mexican-hat coupling by a combination of two bell-shaped functions with different widths. For the sake of simplicity we will again consider a one-dimensional sheet of neurons. For the lateral coupling we take

w(x) = \frac{\sigma_{2}\,\text{e}^{-x^{2}/(2\sigma_{1}^{2})} - \sigma_{1}\,\text{e}^{-x^{2}/(2\sigma_{2}^{2})}}{\sigma_{2} - \sigma_{1}}\,, (18.14)

with $\sigma_{1}<\sigma_{2}$. The normalization of the coupling function has been chosen so that $w(0)=1$ and $\int\text{d}x\;w(x)=\bar{w}=0$; cf. Fig. 18.6A.

As a first step we search for a homogeneous solution. If we substitute $h(x,t)=h(t)$ in Eq. (18.4) we find

\tau\,\frac{\text{d}h(t)}{\text{d}t} = -h(t) + I^{\text{ext}}\,. (18.15)

The term containing the integral drops out because of $\bar{w}=0$. This differential equation has a single stable fixed point at $h_{0}=I^{\text{ext}}$. This situation corresponds to the graphical solution of Fig. 18.5 with the dashed lines replaced by vertical lines (‘infinite slope’).

We still have to check the stability of the homogeneous solution $h(x,t)=h_{0}$ with respect to inhomogeneous perturbations. In the present case, the Fourier transform of $w$,

\int\!\text{d}x\, w(x)\,\text{e}^{i\,k\,x} = \frac{\sqrt{2\pi}\,\sigma_{1}\,\sigma_{2}}{\sigma_{2} - \sigma_{1}}\left(\text{e}^{-k^{2}\,\sigma_{1}^{2}/2} - \text{e}^{-k^{2}\,\sigma_{2}^{2}/2}\right)\,, (18.16)

vanishes at $k=0$ and has its maximum at

k_{m} = \pm\left[\frac{2\,\ln(\sigma_{2}^{2}/\sigma_{1}^{2})}{\sigma_{2}^{2} - \sigma_{1}^{2}}\right]^{1/2}\,. (18.17)

At the maximum, the amplitude of the Fourier transform has a value of

\hat{w}_{m} = \max_{k}\int\!\text{d}x\, w(x)\,\text{e}^{i\,k\,x} = \frac{\sqrt{2\pi}\,\sigma_{1}\,\sigma_{2}}{\sigma_{2} - \sigma_{1}}\left[\left(\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}\right)^{\frac{\sigma_{1}^{2}}{\sigma_{2}^{2} - \sigma_{1}^{2}}} - \left(\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}\right)^{\frac{\sigma_{2}^{2}}{\sigma_{2}^{2} - \sigma_{1}^{2}}}\right]\,, (18.18)

cf. Fig. 18.6B. We use this result in Eqs. (18.12) and (18.13) and conclude that stable homogeneous solutions can only be found for those parts of the graph of the output function $F(h)$ where the slope $s=F^{\prime}(h)$ does not exceed the critical value $s^{\ast}=1/\hat{w}_{m}$,

s^{\ast} = \frac{\sigma_{2} - \sigma_{1}}{\sqrt{2\pi}\,\sigma_{1}\,\sigma_{2}}\left[\left(\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}\right)^{\frac{\sigma_{1}^{2}}{\sigma_{2}^{2} - \sigma_{1}^{2}}} - \left(\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}}\right)^{\frac{\sigma_{2}^{2}}{\sigma_{2}^{2} - \sigma_{1}^{2}}}\right]^{-1}\,. (18.19)
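For the parameters of Fig. 18.6 ($\sigma_{1}=1$, $\sigma_{2}=10$), the closed-form expressions (18.17)-(18.19) are easily evaluated. The short sketch below is illustrative only (the grid of wave numbers used for the cross-check is an arbitrary choice); it also verifies the maximum of the Fourier transform (18.16) numerically.

import numpy as np

s1, s2 = 1.0, 10.0                       # sigma_1, sigma_2 as in Fig. 18.6

# critical wave number, Eq. (18.17)
k_m = np.sqrt(2.0 * np.log(s2**2 / s1**2) / (s2**2 - s1**2))

# maximal amplitude of the Fourier transform, Eq. (18.18)
r = s1**2 / s2**2
prefac = np.sqrt(2.0 * np.pi) * s1 * s2 / (s2 - s1)
w_hat_m = prefac * (r**(s1**2 / (s2**2 - s1**2)) - r**(s2**2 / (s2**2 - s1**2)))

s_star = 1.0 / w_hat_m                   # critical slope, Eq. (18.19)

# numerical cross-check of Eq. (18.16) on a grid of wave numbers
k = np.linspace(1e-4, 2.0, 10000)
w_hat = prefac * (np.exp(-k**2 * s1**2 / 2.0) - np.exp(-k**2 * s2**2 / 2.0))
print(f"k_m = {k_m:.4f}  (grid maximum at k = {k[np.argmax(w_hat)]:.4f})")
print(f"w_hat_m = {w_hat_m:.4f}  (grid: {w_hat.max():.4f}),  s* = {s_star:.4f}")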

Figures 18.6 and 18.7 show that, depending on the choice of the coupling function $w$ and the gain function $F$, there is an interval of external input values for which no stable homogeneous solution exists. In this parameter domain a phenomenon called pattern formation can be observed: small fluctuations around the homogeneous state grow exponentially until a characteristic pattern of regions with low and high activity has developed; cf. Fig. 18.8.
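Such pattern formation can be reproduced by direct numerical integration of the field equation (18.4). The following sketch is a minimal illustration, not the simulation used for Fig. 18.8: grid size, time step, and the input value $I^{\text{ext}}=1.0$ (chosen between the values 0.6 and 1.4 for which Fig. 18.8 reports instability) are assumptions, while the kernel and gain-function parameters are those of Figs. 18.6 and 18.7.

import numpy as np

# grid and model parameters (illustrative values; time in units of tau)
N, L = 256, 100.0
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
tau, dt, T = 1.0, 0.05, 30.0
beta, theta = 5.0, 1.0                 # gain function as in Fig. 18.7
I_ext = 1.0                            # homogeneous input in the unstable range of Fig. 18.8

def F(h):
    return 1.0 / (1.0 + np.exp(-beta * (h - theta)))

# zero-mean Mexican-hat kernel, Eq. (18.14), evaluated on periodic distances
s1, s2 = 1.0, 10.0
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)
W = (s2 * np.exp(-d**2 / (2 * s1**2)) - s1 * np.exp(-d**2 / (2 * s2**2))) / (s2 - s1)

rng = np.random.default_rng(0)
h = I_ext + 1e-3 * rng.standard_normal(N)   # homogeneous state plus weak noise

for _ in range(int(T / dt)):                # explicit Euler integration of Eq. (18.4)
    h += dt / tau * (-h + W @ F(h) * dx + I_ext)

print("amplitude of the emerging spatial modulation:", h.max() - h.min())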

Fig. 18.8: Spontaneous pattern formation in a one-dimensional sheet of neurons with ‘Mexican-hat’ type of interaction and homogeneous external stimulation. The parameters for the coupling function and the output function are the same as in Figs. 18.6-18.7. The graphs show the evolution in time of the spatial distribution of the average membrane potential $h(x,t)$. A. For $I^{\text{ext}}=0.4$ the homogeneous low-activity state is stable, but it loses stability at $I^{\text{ext}}=0.6$ (B). Here, small initial fluctuations in the membrane potential grow exponentially and result in a global pattern of regions with high and low activity. C. Similar situation as in B, but with $I^{\text{ext}}=1.4$. D. Finally, at $I^{\text{ext}}=1.6$ the homogeneous high-activity mode is stable.

18.2.3 Contrast enhancement

Over one hundred years ago, Mach described the psychophysical phenomenon of edge enhancement or contrast enhancement (315): the sharp transition between two regions of different intensities generates perceptual bands along the borders that enhance the perceived intensity difference (Fig. 18.2A). Edge enhancement is already initiated in the retina (314), but is likely to have cortical components as well.

Field models with a Mexican-hat interaction kernel generically generate contrast enhancement in the input-driven regime (Fig. 18.9A). Because of the nonlinear lateral interactions, an incoming spatial input pattern is transformed (553; 199). For example, a spatial input with a rectangular profile boosts activity at the borders, while a smooth input with sinusoidal modulation across space boosts activity at the maximum (Fig. 18.9B). A spatial input with a staircase intensity profile generates activity patterns that resemble the perceptual phenomenon of Mach bands.
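A minimal sketch of this effect, using the same explicit Euler scheme as the pattern-formation example above: the piecewise-linear gain anticipates Eq. (18.23), the kernel of Eq. (18.14) is scaled by an assumed factor $g=0.3$ so that the homogeneous state remains stable and the response stays input-driven, and the staircase input values are arbitrary.

import numpy as np

# grid, kernel, and input (illustrative values; time in units of tau)
N, L, dt, tau, T = 512, 100.0, 0.05, 1.0, 50.0
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
s1, s2, g = 1.0, 10.0, 0.3     # kernel widths as in Fig. 18.6; overall gain g is an assumption

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)       # periodic boundary conditions
W = g * (s2 * np.exp(-d**2 / (2 * s1**2)) - s1 * np.exp(-d**2 / (2 * s2**2))) / (s2 - s1)

F = lambda h: np.maximum(h, 0.0)   # piecewise-linear gain, cf. Eq. (18.23) below

# staircase input profile with three plateaus, reminiscent of Fig. 18.9A
I_ext = np.where(x < L / 3, 0.5, np.where(x < 2 * L / 3, 1.0, 1.5))

h = I_ext.copy()
for _ in range(int(T / dt)):       # explicit Euler integration of Eq. (18.4)
    h += dt / tau * (-h + W @ F(h) * dx + I_ext)

A = F(h)
i = N // 3                         # index of the first intensity step
print("activity on the plateaus:      ", A[N // 6], A[N // 2], A[5 * N // 6])
print("activity just below/above step:", A[i - 3], A[i + 3])

Comparing the printed values, the activity near the intensity step deviates from the plateau values, the Mach-band-like redistribution described above.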

Fig. 18.9: A. Mach bands in a field model with Mexican-hat coupling. Reflecting Fig. 18.2A, the external current $I^{\text{ext}}(x,t)=I^{\text{ext}}(x)$ forms a staircase as a function of distance. The resulting activity $F(h(x,t))$ is shown for four different times. The equilibrium solution is indicated by a thick line. B. An implementation of a field model of excitatory and inhibitory spiking neurons, stimulated with a sinusoidal spatial profile (dashed line), generates a peak at the maximum; taken from (474).

Example: An application to orientation selectivity in V1

Continuum models can represent not only spatial position profiles, but also more abstract variables. For example, ring models have been used to describe orientation selectivity of neurons in the visual cortex ( 45; 207; 474 ) .

As discussed in Chapter 12, cells in the primary visual cortex (V1) respond preferentially to lines or bars that have a certain orientation within the visual field. There are neurons that ‘prefer’ vertical bars; others respond maximally to bars with a different orientation (232). It is still a matter of debate where this orientation selectivity comes from. It may be the result of the wiring of the input to the visual cortex, i.e., the wiring of the projections from the LGN to V1, or it may result from intra-cortical connections, i.e., from the wiring of the neurons within V1, or both. Here we will investigate the extent to which intra-cortical projections can contribute to orientation selectivity.

We consider a network of neurons forming a so-called hypercolumn. These are neurons with receptive fields that correspond to roughly the same zone in the visual field, but with different preferred orientations. The orientation of a bar at a given position within the visual field can thus be coded faithfully by the population activity of the neurons of the corresponding hypercolumn.

Instead of using spatial coordinates to identify a neuron in the cortex, we label the neurons in this section by their preferred orientation $\theta$, which may vary from $-\pi/2$ to $+\pi/2$. In doing so we assume that the preferred orientation is indeed a good “name tag” for each neuron, so that the synaptic coupling strength can be given in terms of the preferred orientations of the pre- and postsynaptic neuron. Following the formalism developed in the previous sections, we assume that the synaptic coupling strength $w$ of neurons with preferred orientations $\theta$ and $\theta^{\prime}$ is a symmetric function of the difference $\theta-\theta^{\prime}$, i.e., $w=w(|\theta-\theta^{\prime}|)$. Since we are dealing with angles from $[-\pi/2,+\pi/2]$ it is natural to assume that all functions are $\pi$-periodic, so that we can use Fourier series to characterize them. Non-trivial results are obtained even if we retain only the first two Fourier components of the coupling function,

w(\theta - \theta^{\prime}) = w_{0} + w_{2}\,\cos[2(\theta - \theta^{\prime})]\,. (18.20)

Similarly to the intra-cortical projections, we take the (stationary) external input from the LGN to be a function of the difference between the preferred orientation $\theta$ and the orientation of the stimulus $\theta_{0}$,

I^{\text{ext}}(\theta) = c_{0} + c_{2}\,\cos[2(\theta - \theta_{0})]\,. (18.21)

Here, $c_{0}$ is the mean of the input and $c_{2}$ describes the modulation of the input that arises from anisotropies in the projections from the LGN to V1.

In analogy to Eq. (18.4), the field equation for the present setup thus has the form

\tau\,\frac{\partial h(\theta,t)}{\partial t} = -h(\theta,t) + \int_{-\pi/2}^{+\pi/2}\frac{\text{d}\theta^{\prime}}{\pi}\; w(|\theta - \theta^{\prime}|)\, F[h(\theta^{\prime},t)] + I^{\text{ext}}(\theta)\,. (18.22)

We are interested in the distribution of the neuronal activity within the hypercolumn as it arises from a stationary external stimulus with orientation $\theta_{0}$. This will allow us to study the role of intra-cortical projections in sharpening orientation selectivity.

In order to obtain conclusive results we have to specify the form of the gain function $F$. A particularly simple case is the piecewise linear function,

F(h) = [h]_{+} \equiv \begin{cases} h\,, & h \geq 0\,,\\ 0\,, & h < 0\,, \end{cases} (18.23)

so that the neuronal firing rate increases linearly once the input potential exceeds the threshold.

If we assume that the average input potential $h(\theta,t)$ is always above threshold, then we can replace the gain function $F$ in Eq. (18.22) by the identity function. We are thus left with the following linear equation for the stationary distribution of the average membrane potential,

h(\theta) = \int_{-\pi/2}^{+\pi/2}\frac{\text{d}\theta^{\prime}}{\pi}\; w(|\theta - \theta^{\prime}|)\, h(\theta^{\prime}) + I^{\text{ext}}(\theta)\,. (18.24)

This equation is solved by

h(\theta) = h_{0} + h_{2}\,\cos[2(\theta - \theta_{0})]\,, (18.25)

with

h_{0} = \frac{c_{0}}{1 - w_{0}} \quad\text{and}\quad h_{2} = \frac{2\,c_{2}}{2 - w_{2}}\,. (18.26)

As a result of the intra-cortical projections, the modulation $h_{2}$ of the response of the neurons of the hypercolumn is thus amplified by a factor $2/(2-w_{2})$ as compared to the modulation $c_{2}$ of the input.
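For completeness, here is a brief sketch of the intermediate step, not spelled out above, that leads from Eqs. (18.20), (18.21), (18.24) and the ansatz (18.25) to the coefficients (18.26). Inserting (18.20) and (18.25) into the integral of Eq. (18.24) and using

\int_{-\pi/2}^{+\pi/2}\frac{\text{d}\theta^{\prime}}{\pi}\,\cos[2(\theta-\theta^{\prime})]\,\cos[2(\theta^{\prime}-\theta_{0})] = \tfrac{1}{2}\,\cos[2(\theta-\theta_{0})]\,,

while all terms containing a single cosine integrate to zero over the full period, we find

\int_{-\pi/2}^{+\pi/2}\frac{\text{d}\theta^{\prime}}{\pi}\; w(|\theta-\theta^{\prime}|)\, h(\theta^{\prime}) = w_{0}\,h_{0} + \tfrac{1}{2}\,w_{2}\,h_{2}\,\cos[2(\theta-\theta_{0})]\,.

Comparing the constant and the modulated terms on both sides of Eq. (18.24) gives $h_{0}=w_{0}h_{0}+c_{0}$ and $h_{2}=\tfrac{1}{2}w_{2}h_{2}+c_{2}$, i.e., Eq. (18.26).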

In deriving Eq. (18.24) we have assumed that $h$ always stays above threshold, so we have the additional condition $h_{0}-|h_{2}|>0$ for a self-consistent solution. This condition may be violated, depending on the stimulus. In that case the above solution is no longer valid and we have to take the nonlinearity of the gain function into account (45), i.e., we have to replace Eq. (18.24) by

h(\theta) = \int_{\theta_{0}-\theta_{c}}^{\theta_{0}+\theta_{c}}\frac{\text{d}\theta^{\prime}}{\pi}\; w(|\theta - \theta^{\prime}|)\, h(\theta^{\prime}) + I^{\text{ext}}(\theta)\,. (18.27)

Here, $\theta_{0}\pm\theta_{c}$ are the cutoff angles that define the interval where $h(\theta)$ is positive. If we use (18.25) in the above equation, we obtain, together with $h(\theta_{0}\pm\theta_{c})=0$, a set of equations that can be solved for $h_{0}$, $h_{2}$, and $\theta_{c}$. Figure 18.10 shows two examples of the resulting activity profiles $F[h(\theta)]$ for different modulation depths of the input.
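Alternatively, the stationary profile can be obtained numerically, without solving the self-consistency equations explicitly, by discretizing $\theta$ and iterating the stationary equation to its fixed point. The sketch below is illustrative only: the coupling and input parameters are those quoted in the caption of Fig. 18.10, while the grid resolution and the damping factor of the iteration are assumptions made for numerical convenience.

import numpy as np

M = 360
theta = np.linspace(-np.pi / 2, np.pi / 2, M, endpoint=False)
dtheta = np.pi / M
w0, w2, theta0 = 0.0, 1.0, 0.0                  # parameters as in Fig. 18.10
W = w0 + w2 * np.cos(2.0 * (theta[:, None] - theta[None, :]))

def profile(c0, c2, n_iter=2000, damp=0.1):
    # stationary activity F[h(theta)] of Eq. (18.22) with the gain (18.23)
    I = c0 + c2 * np.cos(2.0 * (theta - theta0))
    h = I.copy()
    for _ in range(n_iter):                     # damped fixed-point iteration
        h_new = W @ np.maximum(h, 0.0) * dtheta / np.pi + I
        h = (1.0 - damp) * h + damp * h_new
    return np.maximum(h, 0.0)

for c0, c2 in ((0.8, 0.2), (0.6, 0.4)):         # weak vs. strong input modulation
    A = profile(c0, c2)
    width_deg = np.sum(A > 0.5 * A.max()) * dtheta * 180.0 / np.pi
    print(f"c0={c0}, c2={c2}: peak activity {A.max():.3f}, half-maximum width {width_deg:.0f} deg")

The weak-modulation case reproduces the broad profile of the linear solution (18.25)-(18.26), while the strong-modulation case yields the narrower, thresholded profile of Eq. (18.27).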

Throughout this example we have described neuronal populations in terms of an averaged input potential and the corresponding firing rate. At least for stationary input and a high level of noise this is indeed a good approximation of the dynamics of spiking neurons. Figure 18.11 shows two examples of a simulation based on SRM$_{0}$ neurons with escape noise and a network architecture that is equivalent to what we have used above. The stationary activity profiles shown in Fig. 18.11 for a network of spiking neurons are qualitatively similar to those of Fig. 18.10 derived for a rate-based model. For low levels of noise, however, the description of spiking networks in terms of a firing rate is no longer valid, because the state of asynchronous firing becomes unstable (cf. Section 14.2.3) and neurons tend to synchronize (284).

18.2.4 Inhibition, surround suppression, and cortex models

There are several concerns when writing down a standard field model such as Eq. ( 18.4 ) with Mexican-hat interaction. In this section, we aim at moving field models closer to biology and consider three of these concerns.

A - Does Mexican-hat connectivity exist in cortex? The Mexican-hat interaction pattern has a long tradition in theoretical neuroscience (553; 199; 271), but, from a biological perspective, it has two major shortcomings. First, in field models with Mexican-hat interaction, the same presynaptic population gives rise to both excitation and inhibition, whereas in cortex excitation and inhibition require separate groups of neurons (Dale’s law). Second, inhibition in Mexican-hat connectivity is of longer range than excitation, whereas biological data suggest the opposite. In fact, inhibitory neurons are sometimes called local interneurons because they make only local connections. Pyramidal cells, however, make long-range connections within and beyond cortical areas.

B - Are there electrophysiological correlates of contrast enhancement? Simple and complex cells in visual cortex respond best if they are stimulated by a slowly moving grating with optimal orientation and of a size that is matched to the cell’s receptive field; cf. Chapter 12. If the grating is optimally oriented but larger than the receptive field, the response is reduced compared to that of a smaller grating (Fig. 18.12). At first sight, this finding is consistent with contrast enhancement through Mexican-hat interaction: a uniform large stimulus evokes a smaller response because it generates inhibition from neurons that are farther apart. Paradoxically, however, neurons receive less inhibition (Fig. 18.13) with the larger stimulus than with the smaller one (374).

Fig. 18.10: Activity profiles (solid line) that result from stationary external stimulation (dashed line) in a model of orientation selectivity. A. Weak modulation ($c_{0}=0.8$, $c_{2}=0.2$) of the external input results in a broad activity profile; cf. Eq. (18.24). B. Strong modulation ($c_{0}=0.6$, $c_{2}=0.4$) produces a narrow profile; cf. Eq. (18.27). Other parameters are $w_{0}=0$, $w_{2}=1$, $\theta_{0}=0$.
Fig. 18.11: Activity profiles in a model of orientation selectivity obtained by simulations based on SRM$_{0}$ neurons (dots), compared to the theoretical prediction (solid line), during stimulation with a low-contrast orientation input at $\theta=0$. A. If lateral coupling is not distance-dependent [$w_{2}=0$; cf. Eq. (18.20)], the activity profile reflects the weak modulation of the input pattern. B. Excitatory coupling between cells of the same orientation and long-range inhibition ($w_{2}=10$) generates a sharp activity profile centered at $\theta=0$. Taken from Spiridon and Gerstner (492).
Fig. 18.12: Surround suppression. A. Schematic. A1. Firing rate of a V1 cell as a function of the size of a moving grating stimulus. The grating has optimal orientation and optimal line spacing. Larger gratings cause weaker responses than smaller ones. A2. Heuristic interpretation of surround suppression. The feedforward pathway from LGN to a cell (arrow, bottom row) gives rise to a small receptive field (RF size and location indicated above cell). Neighboring neurons with overlapping receptive fields excite each other and can be grouped into a local population (dashed circle). If the size of the stimulus is slightly larger, the response of the recorded neuron (middle) is enhanced because of excitatory input from neighboring cells. Right: Distal neurons inhibit the central neuron. Therefore an even larger stimulus suppresses the firing rate of the recorded neuron. B. Experimental data. A moving grating causes a modulation of the membrane potential and spike firing. The number of spikes and the membrane potential are larger for a small grating than for a bigger one. Dashed horizontal line: mean membrane potential in the absence of stimulation; taken from (374).

C - How can we interpret the ‘position’ variable in field models? In the previous sections we varied the interpretation of the ‘space’ variable from physical position in cortex to an abstract variable representing the preferred orientation of cells in primary visual cortex. Indeed, in visual cortex several variables need to be encoded in parallel: the location of a neuron’s receptive field, its preferred orientation, potentially its preferred color, and potentially the relative importance of input from the left and right eye, while each neuron also has a physical location in cortex. Therefore a distance-dependent connectivity pattern needs to be distance-dependent along several dimensions in parallel, while respecting the physical properties of a nearly two-dimensional cortical sheet.

In the following, we present a model by Ozeki et al. ( 374 ) that addresses concerns A and B and enables us to comment on point C.

Fig. 18.13: Network stabilized by local inhibition. The schematic model could potentially explain why larger gratings lead not only to less excitatory input $g_{\rm exc}$, but also to less inhibitory input $g_{\rm inh}$. A. The firing rate as a function of the phase of the moving grating for the three stimulus conditions (blank screen, small grating, and large grating). B. Top: Excitatory input into the cell. Bottom: Inhibitory input into the same cell. As in A, left, middle, and right correspond to a blank screen, a small grating, and a large grating. Note that the larger grating leads to a reduction of both excitation and inhibition; adapted from (374). C. Network model with long-range excitation and local inhibition. Excitatory neurons within a local population excite themselves (feedback arrow) and also send excitatory input to inhibitory cells (downward arrows). Inhibitory neurons project to local excitatory neurons.

We group neurons with overlapping receptive fields of similar orientation preference (Fig. 18.12 A) into a single population. Inside the population neurons excite each other. We imagine that we record from a neuron in the center of the population. Neurons with receptive fields far away from the recorded neuron inhibit its activity.

Inhibition is implemented indirectly as indicated in Fig. 18.13 C. The excitatory neurons in the central population project onto a group of local inhibitory interneurons, but also onto populations of other inhibitory neurons further apart. Each population of inhibitory neurons makes only local connections to the excitatory population in their neighborhood. Input to the central group of excitatory neurons therefore induces indirect inhibition of excitatory neurons further apart. Such a network architecture therefore addresses concern A.

In order to address concern B, the network parameters are set such that the network is in the inhibition-stabilized regime. A network is said to be inhibition-stabilized if the positive feedback through recurrent connections within an excitatory population is strong enough to cause run-away activity in the absence of inhibition. To counterbalance the positive excitatory feedback, inhibition needs to be even stronger ( 524 ) . As a result, an inhibition-stabilized network responds to a positive external stimulation of inhibitory neurons with a decrease of both excitatory and inhibitory activity (see Exercises).
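This paradoxical response can be checked with a minimal two-population rate model. The sketch below is illustrative only: the connection weights are assumptions, chosen such that $w_{EE}>1$ (the excitatory population alone would be unstable) together with a linear-rectified gain; increasing the external drive to the inhibitory population then lowers the steady-state activity of both populations.

# assumed weights: recurrent excitation strong enough (w_EE > 1) that the
# excitatory population alone would be unstable; inhibition stabilizes it
w_EE, w_EI, w_IE, w_II = 2.0, 2.5, 2.0, 1.0
tau_E, tau_I, dt = 10.0, 5.0, 0.1

def relu(x):
    return max(x, 0.0)

def steady_state(I_E, I_I, T=2000.0):
    # integrate the coupled rate equations to their fixed point
    E = I = 0.0
    for _ in range(int(T / dt)):
        dE = (-E + relu(w_EE * E - w_EI * I + I_E)) / tau_E
        dI = (-I + relu(w_IE * E - w_II * I + I_I)) / tau_I
        E, I = E + dt * dE, I + dt * dI
    return E, I

E0, I0 = steady_state(I_E=2.0, I_I=1.0)
E1, I1 = steady_state(I_E=2.0, I_I=1.5)   # extra drive to the inhibitory population
print(f"baseline:      E={E0:.3f}, I={I0:.3f}")
print(f"extra I-drive: E={E1:.3f}, I={I1:.3f}   (both rates drop)")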

If the coupling from excitatory populations to neighboring inhibitory populations is stronger than that to neighboring excitatory populations, an inhibition-stabilized network can explain the phenomenon of surround suppression and at the same time account for the fact that during surround suppression both inhibitory and excitatory drive are reduced (Fig. 18.13 B). Such a network architecture therefore addresses concern B ( 374 ) .

In the above simplified model we focused on populations of neurons with the same preferred orientation, say vertical. However, in the same region of cortex, there are also neurons with other preferred orientations, such as diagonal or horizontal. The surround suppression effect is much weaker if the stimulus in the surround has a different orientation than that in the central region. We therefore conclude that the cortical connectivity pattern does not simply depend on the physical distance between two neurons, but also on the difference in preferred orientation, as well as on the neuron type, layer, etc. Therefore, for generalized field models of primary visual cortex the coupling from a neuron $j$ with receptive field center $x_{j}$ to a neuron $i$ with receptive field center $x_{i}$ could be written as

w_{ij} = w(x_{i}, x_{j}, \theta_{i}, \theta_{j}, \text{type}_{i}, \text{type}_{j}, \text{layer}_{i}, \text{layer}_{j})\,, (18.28)

where type refers to the type of neuron (e.g., pyramidal, fast-spiking interneuron, non-fast-spiking interneuron) and layer to the vertical position of the neuron in the cortical sheet. Other variables should be added to account for color preference, binocular preference, etc.
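Purely as an illustration of Eq. (18.28), and not as a fitted cortical model, such a generalized coupling could combine a Gaussian dependence on receptive-field distance with a cosine dependence on the orientation difference and a sign determined by the presynaptic cell type; all functional forms and parameter values in the sketch below are assumptions, and the layer dependence is omitted.

from math import exp, cos

def coupling(x_i, x_j, theta_i, theta_j, type_j, sigma_x=0.2, kappa=0.5):
    # Hypothetical coupling w_ij in the spirit of Eq. (18.28):
    #   spatial factor: Gaussian in the receptive-field distance,
    #   angular factor: stronger for similar preferred orientations,
    #   cell type: presynaptic inhibitory neurons contribute with a negative sign.
    dx2 = (x_i[0] - x_j[0])**2 + (x_i[1] - x_j[1])**2
    spatial = exp(-dx2 / (2.0 * sigma_x**2))
    angular = 1.0 + kappa * cos(2.0 * (theta_i - theta_j))
    sign = -1.0 if type_j == "inhibitory" else 1.0
    return sign * spatial * angular

# example: two nearby cells with similar orientation preference
print(coupling((0.0, 0.0), (0.1, 0.0), 0.0, 0.1, "excitatory"))
print(coupling((0.0, 0.0), (0.1, 0.0), 0.0, 0.1, "inhibitory"))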