Predictive Coding


How recent history affects perception: the normative approach and its heuristic approximation

O. Raviv and M. Ahissar and Y. Loewenstein

PLoS Comput Biol  8  e1002731  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=23133343

There is accumulating evidence that prior knowledge about expectations plays an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the "contraction bias", in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of the most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance in atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of Bayesian inference. We propose a biologically plausible model, in which the decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially-decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations, as revealed by their failure to fully adapt to novel environments.
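
A minimal Python sketch of the exponentially-decaying-average heuristic described in this abstract (illustrative only; the decay parameter alpha is an assumed value, not the authors' fitted implementation):

```python
def exp_average_heuristic(trials, alpha=0.4):
    """Toy two-tone discrimination heuristic: compare the second tone (f2)
    against a running, exponentially-decaying average of the current and past
    first tones (f1), rather than against the current f1 alone. `alpha`
    (assumed) sets how strongly the most recent f1 is weighted. Returns, per
    trial, whether 'f2 higher than the reference' would be the answer."""
    reference = None
    answers = []
    for f1, f2 in trials:
        # fold the current f1 into the history of past first tones
        reference = f1 if reference is None else alpha * f1 + (1 - alpha) * reference
        answers.append(f2 > reference)
    return answers

# The same physical pair (1000, 980) can be answered differently depending on
# the first tones heard on preceding trials (a contraction-like bias).
print(exp_average_heuristic([(1200, 1180), (1000, 980)]))  # [False, False]
print(exp_average_heuristic([(800, 780), (1000, 980)]))    # [False, True]
```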



Top-down effects on early visual processing in humans: a predictive coding framework

K. Rauss and S. Schwartz and G. Pourtois

Neurosci Biobehav Rev  35  1237-53  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21185860

An increasing number of human electroencephalography (EEG) studies examining the earliest component of the visual evoked potential, the so-called C1, have cast doubts on the previously prevalent notion that this component is impermeable to top-down effects. This article reviews the original studies that (i) described the C1, (ii) linked it to primary visual cortex (V1) activity, and (iii) suggested that its electrophysiological characteristics are exclusively determined by low-level stimulus attributes, particularly the spatial position of the stimulus within the visual field. We then describe conflicting evidence from animal studies and human neuroimaging experiments and provide an overview of recent EEG and magnetoencephalography (MEG) work showing that initial V1 activity in humans may be strongly modulated by higher-level cognitive factors. Finally, we formulate a theoretical framework for understanding top-down effects on early visual processing in terms of predictive coding.



Cross-modal prediction in speech perception

C. Sánchez-García and A. Alsius and J. T. Enns and S. Soto-Faraco

PLoS One  6  e25198  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21998642

Speech perception often benefits from vision of the speaker's lip movements when they are available. One potential mechanism underlying this reported gain in perception arising from audio-visual integration is on-line prediction. In this study we address whether the preceding speech context in a single modality can improve audiovisual processing and whether this improvement is based on on-line information-transfer across sensory modalities. In the experiments presented here, during each trial, a speech fragment (context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading single-modality context and the subsequent audiovisual target fragment could be continuous in one modality only, in both modalities (context in one modality continues into both modalities in the target fragment), or in neither (i.e., discontinuous). The results showed quicker audiovisual matching responses when context was continuous with the target within either the visual or auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous (with the auditory channel in the target), but auditory-to-visual cross-modal continuity resulted in no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through the use of predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.



Visual cortex combines a stimulus and an error-like signal with a proportion that is dependent on time, space, and stimulus contrast

D. Eriksson and T. Wunderle and K. Schmidt

Front Syst Neurosci  6  26  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22539918

Even though the visual cortex is one of the most studied brain areas, the neuronal code in this area is still not fully understood. In the literature, two codes are commonly hypothesized, namely stimulus and predictive (error) codes. Here, we examined whether and how these two codes can coexist in a neuron. To this end, we assumed that neurons could predict a constant stimulus across time or space, since this is the most fundamental type of prediction. Prediction was examined in time using electrophysiology and voltage-sensitive dye imaging in the supragranular layers in area 18 of the anesthetized cat, and in space using a computer model. The distinction between stimulus and error codes was made by means of the orientation tuning of the recorded unit. The stimulus was constructed such that a maximum response to the non-preferred orientation indicated an error signal, and a maximum response to the preferred orientation indicated a stimulus signal. We demonstrate that a single neuron combines stimulus and error-like coding. In addition, we observed that the duration of the error coding varies as a function of stimulus contrast. For low contrast the error-like coding was prolonged by around 60-100%. Finally, the combination of stimulus and error leads to a suboptimal free energy in a recent predictive coding model. We therefore suggest a straightforward modification that can be applied to the free energy model and other predictive coding models. Combining stimulus and error might be advantageous because the stimulus code enables a direct stimulus recognition that is free of assumptions whereas the error code enables an experience-dependent inference of ambiguous and non-salient stimuli.



Unsupervised learning of generative and discriminative weights encoding elementary image components in a predictive coding model of cortical function

M. W. Spratling

Neural Comput  24  60-103  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22023197

A method is presented for learning the reciprocal feedforward and feedback connections required by the predictive coding model of cortical function. When this method is used, feedforward and feedback connections are learned simultaneously and independently in a biologically plausible manner. The performance of the proposed algorithm is evaluated by applying it to learning the elementary components of artificial and natural images. For artificial images, the bars problem is employed, and the proposed algorithm is shown to produce state-of-the-art performance on this task. For natural images, components resembling Gabor functions are learned in the first processing stage, and neurons responsive to corners are learned in the second processing stage. The properties of these learned representations are in good agreement with neurophysiological data from V1 and V2. The proposed algorithm demonstrates for the first time that a single computational theory can explain the formation of cortical RFs and also the response properties of cortical neurons once those RFs have been learned.
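
A generic sketch of learning reciprocal feedforward and feedback weights in a predictive-coding network (illustrative; these are not Spratling's PC/BC learning rules, and the toy "bars"-style data, learning rates, and weight normalization step are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_reciprocal_weights(patterns, n_components=8, n_epochs=30,
                             lr_w=0.05, lr_v=0.05, lr_y=0.05, n_settle=30):
    """Feedback weights W generate a prediction of the input from hidden
    causes y; y is settled by the residual prediction error; feedforward
    weights V are trained separately, with a delta rule, to map the input
    directly onto the settled causes (all rates are assumed values)."""
    n_inputs = patterns.shape[1]
    W = rng.random((n_inputs, n_components)) * 0.1   # feedback / generative
    V = rng.random((n_components, n_inputs)) * 0.1   # feedforward / recognition
    for _ in range(n_epochs):
        for x in patterns:
            y = np.maximum(0.0, V @ x)               # fast feedforward guess
            for _ in range(n_settle):                # settle causes on the error
                y = np.maximum(0.0, y + lr_y * (W.T @ (x - W @ y)))
            W += lr_w * np.outer(x - W @ y, y)       # generative (feedback) learning
            W = np.clip(W, 0.0, None)
            W /= np.linalg.norm(W, axis=0, keepdims=True) + 1e-6
            V += lr_v * np.outer(y - V @ x, x)       # recognition (feedforward) learning
    return W, V

# Toy data: random combinations of horizontal "bars" on a 4x4 grid.
bars = np.eye(4).repeat(4, axis=1)                   # four horizontal bars, flattened
patterns = np.array([bars[rng.integers(0, 4)] + bars[rng.integers(0, 4)]
                     for _ in range(200)]).clip(0, 1)
W, V = learn_reciprocal_weights(patterns)
```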



Intrinsic mechanisms for adaptive gain rescaling in barrel cortex

M. Díaz-Quesada and M. Maravall

J Neurosci  28  696-710  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=18199769

Barrel cortex neuronal responses adapt to changes in the statistics of complex whisker stimuli. This form of adaptation involves an adjustment in the input-output tuning functions of the neurons, such that their gain rescales depending on the range of the current stimulus distribution. Similar phenomena have been observed in other sensory systems, suggesting that adaptive adjustment of responses to ongoing stimulus statistics is an important principle of sensory function. In other systems, adaptation and gain rescaling can depend on intrinsic properties; however, in barrel cortex, whether intrinsic mechanisms can contribute to adaptation to stimulus statistics is unknown. To examine this, we performed whole-cell patch-clamp recordings of pyramidal cells in acute slices while injecting stochastic current stimuli. We induced changes in statistical context by switching across stimulus distributions. The firing rates of neurons adapted in response to changes in stimulus statistics. Adaptation depended on the form of the changes in stimulus distribution: in vivo-like adaptation occurred only for rectified stimuli that maintained neurons in a persistent state of net depolarization. Under these conditions, neurons rescaled the gain of their input-output functions according to the scale of the stimulus distribution, as observed in vivo. This stimulus-specific adaptation was caused by intrinsic properties and correlated strongly with the amplitude of calcium-dependent slow afterhyperpolarizations. Our results suggest that widely expressed intrinsic mechanisms participate in barrel cortex adaptation but that their recruitment is highly stimulus specific.



Adaptation without parameter change: Dynamic gain control in motion detection

A. Borst and V. L. Flanagin and H. Sompolinsky

Proc Natl Acad Sci U S A  102  6172-6  (2005)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=15833815

Many sensory systems adapt their input-output relationship to changes in the statistics of the ambient stimulus. Such adaptive behavior has been measured in a motion-sensitive neuron of the fly visual system, H1. The rapid adaptation of the velocity response gain has been interpreted as evidence of optimal matching of the H1 response to the dynamic range of the stimulus, thereby maximizing its information transmission. Here, we show that correlation-type motion detectors, which are commonly thought to underlie fly motion vision, intrinsically possess adaptive properties. Increasing the amplitude of the velocity fluctuations leads to a decrease of the effective gain and the time constant of the velocity response without any change in the parameters of these detectors. The seemingly complex property of this adaptation turns out to be a straightforward consequence of the multidimensionality of the stimulus and the nonlinear nature of the system.
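
A minimal Hassenstein-Reichardt correlation-type motion detector of the kind the abstract refers to (illustrative; the first-order low-pass delay filter, its time constant, and the sinusoidal test inputs are assumptions, not the authors' simulation):

```python
import numpy as np

def reichardt_detector(left, right, dt=0.001, tau=0.05):
    """Minimal correlation-type (Hassenstein-Reichardt) motion detector: each
    input is low-pass filtered (an assumed first-order delay, time constant
    `tau`) and multiplied with the unfiltered signal from the neighbouring
    input; the output is the difference of the two mirror-symmetric
    half-detectors."""
    def lowpass(x):
        y = np.zeros_like(x)
        for t in range(1, len(x)):
            y[t] = y[t - 1] + (dt / tau) * (x[t] - y[t - 1])
        return y
    return lowpass(left) * right - lowpass(right) * left

# A grating signal that reaches the right input slightly after the left input
# (motion from left to right) gives a positive time-averaged output.
t = np.arange(0, 2, 0.001)
left = np.sin(2 * np.pi * 2 * t)
right = np.sin(2 * np.pi * 2 * (t - 0.05))
print(reichardt_detector(left, right).mean())
```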



Spatial vision in insects is facilitated by shaping the dynamics of visual input through behavioral action

M. Egelhaaf and N. Boeddeker and R. Kern and R. Kurtz and J. P. Lindemann

Front Neural Circuits  6  108  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=23269913

Insects such as flies or bees, with their miniature brains, are able to control highly aerobatic flight manoeuvres and to solve spatial vision tasks, such as avoiding collisions with obstacles, landing on objects, or even localizing a previously learnt inconspicuous goal on the basis of environmental cues. With regard to solving such spatial tasks, these insects still outperform man-made autonomous flying systems. To accomplish their extraordinary performance, flies and bees have been shown by their characteristic behavioral actions to actively shape the dynamics of the image flow on their eyes ("optic flow"). The neural processing of information about the spatial layout of the environment is greatly facilitated by segregating the rotational from the translational optic flow component through a saccadic flight and gaze strategy. This active vision strategy thus enables the nervous system to solve apparently complex spatial vision tasks in a particularly efficient and parsimonious way. The key idea of this review is that biological agents, such as flies or bees, acquire at least part of their strength as autonomous systems through active interactions with their environment and not by simply processing passively gained information about the world. These agent-environment interactions lead to adaptive behavior in surroundings of a wide range of complexity. Animals with even tiny brains, such as insects, are capable of performing extraordinarily well in their behavioral contexts by making optimal use of the closed action-perception loop. Model simulations and robotic implementations show that the smart biological mechanisms of motion computation and visually-guided flight control might be helpful to find technical solutions, for example, when designing micro air vehicles carrying a miniaturized, low-weight on-board processor.



Two-dimensional adaptation in the auditory forebrain

T. O. Sharpee and K. I. Nagel and A. J. Doupe

J Neurophysiol  106  1841-61  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21753019

Sensory neurons exhibit two universal properties: sensitivity to multiple stimulus dimensions, and adaptation to stimulus statistics. How adaptation affects encoding along primary dimensions is well characterized for most sensory pathways, but if and how it affects secondary dimensions is less clear. We studied these effects for neurons in the avian equivalent of primary auditory cortex, responding to temporally modulated sounds. We showed that the firing rate of single neurons in field L was affected by at least two components of the time-varying sound log-amplitude. When overall sound amplitude was low, neural responses were based on nonlinear combinations of the mean log-amplitude and its rate of change (first time differential). At high mean sound amplitude, the two relevant stimulus features became the first and second time derivatives of the sound log-amplitude. Thus a strikingly systematic relationship between dimensions was conserved across changes in stimulus intensity, whereby one of the relevant dimensions approximated the time differential of the other dimension. In contrast to stimulus mean, increases in stimulus variance did not change relevant dimensions, but selectively increased the contribution of the second dimension to neural firing, illustrating a new adaptive behavior enabled by multidimensional encoding. Finally, we demonstrated theoretically that inclusion of time differentials as additional stimulus features, as seen so prominently in the single-neuron responses studied here, is a useful strategy for encoding naturalistic stimuli, because it can lower the necessary sampling rate while maintaining the robustness of stimulus reconstruction to correlated noise.



Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons

S. Yaeli and R. Meir

Front Comput Neurosci  4  130  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21079749

Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.



Adaptation to stimulus statistics in the perception and neural representation of auditory space

J. C. Dahmen and P. Keating and F. R. Nodal and A. L. Schulz and A. J. King

Neuron  66  937-48  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20620878

Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.



Heterogeneous response dynamics in retinal ganglion cells: the interplay of predictive coding and adaptation

S. Nirenberg and I. Bomash and J. W. Pillow and J. D. Victor

J Neurophysiol  103  3184-94  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20357061

To make efficient use of their limited signaling capacity, sensory systems often use predictive coding. Predictive coding works by exploiting the statistical regularities of the environment--specifically, by filtering the sensory input to remove its predictable elements, thus enabling the neural signal to focus on what cannot be guessed. To do this, the neural filters must remove the environmental correlations. If predictive coding is to work well in multiple environments, sensory systems must adapt their filtering properties to fit each environment's statistics. Using the visual system as a model, we determine whether this happens. We compare retinal ganglion cell dynamics in two very different environments: white noise and natural. Because natural environments have more power than that of white noise at low temporal frequencies, predictive coding is expected to produce a suppression of low frequencies and an enhancement of high frequencies, compared with the behavior in a white-noise environment. We find that this holds, but only in part. First, predictive coding behavior is not uniform: most on cells manifest it, whereas off cells, on average, do not. Overlaid on this nonuniformity between cell classes is further nonuniformity within both cell classes. These findings indicate that functional considerations beyond predictive coding play an important role in shaping the dynamics of sensory adaptation. Moreover, the differences in behavior between on and off cell classes add to the growing evidence that these classes are not merely homogeneous mirror images of each other and suggest that their roles in visual processing are more complex than expected from the classic view.
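
To illustrate the core idea, in a generic way rather than as the retinal analysis from this paper, that the filter implementing predictive coding must depend on the environment's correlations, the sketch below fits a linear predictor of each sample from its recent past and returns the unpredictable remainder; the predictor order and the synthetic "natural-like" signal are assumptions:

```python
import numpy as np

def predictive_filter(signal, order=5):
    """Fit a least-squares linear predictor of the current sample from the
    previous `order` samples and return (prediction error, coefficients).
    The prediction error is the 'unpredictable' part that a predictive code
    would transmit."""
    X = np.column_stack([signal[i:len(signal) - order + i] for i in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coeffs, coeffs

rng = np.random.default_rng(1)
white = rng.standard_normal(5000)
natural_like = np.convolve(white, np.exp(-np.arange(50) / 10.0))[:5000]  # temporally correlated

# The optimal predictor, and hence the predictive-coding filter, differs
# strongly between the two environments.
print(predictive_filter(white)[1])
print(predictive_filter(natural_like)[1])
```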



Long-lasting context dependence constrains neural encoding models in rodent auditory cortex

H. Asari and A. M. Zador

J Neurophysiol  102  2638-56  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19675288

Acoustic processing requires integration over time. We have used in vivo intracellular recording to measure neuronal integration times in anesthetized rats. Using natural sounds and other stimuli, we found that synaptic inputs to auditory cortical neurons showed a rather long context dependence, up to ≥4 s (τ ≈ 1 s), even though sound-evoked excitatory and inhibitory conductances per se rarely lasted ≳100 ms. Thalamic neurons showed only a much faster form of adaptation with a decay constant τ < 100 ms, indicating that the long-lasting form originated from presynaptic mechanisms in the cortex, such as synaptic depression. Restricting knowledge of the stimulus history to only a few hundred milliseconds reduced the predictable response component to about half that of the optimal infinite-history model. Our results demonstrate the importance of long-range temporal effects in auditory cortex and suggest a potential neural substrate for auditory processing that requires integration over timescales of seconds or longer, such as stream segregation.



Adaptation accentuates responses of fly motion-sensitive visual neurons to sudden stimulus changes

R. Kurtz and M. Egelhaaf and H. G. Meyer and R. Kern

Proc Biol Sci  276  3711-9  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19656791

Adaptation in sensory and neuronal systems usually leads to reduced responses to persistent or frequently presented stimuli. In contrast to simple fatigue, adapted neurons often retain their ability to encode changes in stimulus intensity and to respond when novel stimuli appear. We investigated how the level of adaptation of a fly visual motion-sensitive neuron affects its responses to discontinuities in the stimulus, i.e. sudden brief changes in one of the stimulus parameters (velocity, contrast, grating orientation and spatial frequency). Although the neuron's overall response decreased gradually during ongoing motion stimulation, the response transients elicited by stimulus discontinuities were preserved or even enhanced with adaptation. Moreover, the enhanced sensitivity to velocity changes by adaptation was not restricted to a certain velocity range, but was present regardless of whether the neuron was adapted to a baseline velocity below or above its steady-state velocity optimum. Our results suggest that motion adaptation helps motion-sensitive neurons to preserve their sensitivity to novel stimuli even in the presence of strong tonic stimulation, for example during self-motion.



Network adaptation improves temporal representation of naturalistic stimuli in Drosophila eye: I. Dynamics

L. Zheng and A. Nikolaev and T. J. Wardill and C. J. O'Kane and G. G. de Polavieja and M. Juusola

PLoS One  4  e4307  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19180196

Because of the limited processing capacity of eyes, retinal networks must adapt constantly to best present the ever changing visual world to the brain. However, we still know little about how adaptation in retinal networks shapes neural encoding of changing information. To study this question, we recorded voltage responses from photoreceptors (R1-R6) and their output neurons (LMCs) in the Drosophila eye to repeated patterns of contrast values, collected from natural scenes. By analyzing the continuous photoreceptor-to-LMC transformations of these graded-potential neurons, we show that the efficiency of coding is dynamically improved by adaptation. In particular, adaptation enhances both the frequency and amplitude distribution of LMC output by improving sensitivity to under-represented signals within seconds. Moreover, the signal-to-noise ratio of LMC output increases in the same time scale. We suggest that these coding properties can be used to study network adaptation using the genetic tools in Drosophila, as shown in a companion paper (Part II).



Adaptive rescaling maximizes information transmission

N. Brenner and W. Bialek and R. de Ruyter van Steveninck

Neuron  26  695-702  (2000)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=10896164

Adaptation is a widespread phenomenon in nervous systems, providing flexibility to function under varying external conditions. Here, we relate an adaptive property of a sensory system directly to its function as a carrier of information about input signals. We show that the input/output relation of a sensory system in a dynamic environment changes with the statistical properties of the environment. Specifically, when the dynamic range of inputs changes, the input/output relation rescales so as to match the dynamic range of responses to that of the inputs. We give direct evidence that the scaling of the input/output relation is set to maximize information transmission for each distribution of signals. This adaptive behavior should be particularly useful in dealing with the intermittent statistics of natural signals.
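
A minimal sketch of the rescaling idea (illustrative only; the running-variance window and the tanh nonlinearity are assumptions, and this is not the H1 analysis from the paper): dividing the input by a running estimate of its standard deviation before a fixed saturating nonlinearity makes the input/output relation track the stimulus dynamic range.

```python
import numpy as np

def adaptive_rescaling(stimulus, tau=200):
    """Pass the stimulus through a fixed saturating nonlinearity after
    dividing by a running estimate of its standard deviation (`tau`, assumed,
    is the averaging window in samples), so the effective input/output
    relation stretches or compresses with the stimulus range."""
    var = np.var(stimulus[:tau])
    responses = []
    for s in stimulus:
        var += (s * s - var) / tau                    # running variance estimate
        responses.append(np.tanh(s / np.sqrt(var)))   # fixed nonlinearity
    return np.array(responses)

rng = np.random.default_rng(2)
# Switch from a low-variance to a high-variance input distribution: the output
# distribution stays matched to the response range instead of saturating.
stim = np.concatenate([rng.normal(0, 1, 2000), rng.normal(0, 5, 2000)])
r = adaptive_rescaling(stim)
print(r[:2000].std(), r[2000:].std())
```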



Basal forebrain activation enhances cortical coding of natural scenes

M. Goard and Y. Dan

Nat Neurosci  12  1444-9  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19801988

The nucleus basalis of the basal forebrain is an essential component of the neuromodulatory system controlling the behavioral state of an animal and it is thought to be important in regulating arousal and attention. However, the effect of nucleus basalis activation on sensory processing remains poorly understood. Using polytrode recording in rat visual cortex, we found that nucleus basalis stimulation caused prominent decorrelation between neurons and marked improvement in the reliability of neuronal responses to natural scenes. The decorrelation depended on local activation of cortical muscarinic acetylcholine receptors, whereas the increased reliability involved distributed neural circuits, as evidenced by nucleus basalis-induced changes in thalamic responses. Further analysis showed that the decorrelation and increased reliability improved cortical representation of natural stimuli in a complementary manner. Thus, the basal forebrain neuromodulatory circuit, which is known to be activated during aroused and attentive states, acts through both local and distributed mechanisms to improve sensory coding.



Activity recall in a visual cortical ensemble

S. Xu and W. Jiang and M.-M. Poo and Y. Dan

Nat Neurosci  15  449-55, S1-2  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22267160

Cue-triggered recall of learned temporal sequences is an important cognitive function that has been attributed to higher brain areas. Here recordings in both anesthetized and awake rats demonstrate that after repeated stimulation with a moving spot that evoked sequential firing of an ensemble of primary visual cortex (V1) neurons, just a brief flash at the starting point of the motion path was sufficient to evoke a sequential firing pattern that reproduced the activation order evoked by the moving spot. The speed of recalled spike sequences may reflect the internal dynamics of the network rather than the motion speed. In awake rats, such recall was observed during a synchronized ('quiet wakeful') brain state having large-amplitude, low-frequency local field potential (LFP) but not in a desynchronized ('active') state having low-amplitude, high-frequency LFP. Such conditioning-enhanced, cue-evoked sequential spiking of a V1 ensemble may contribute to experience-based perceptual inference in a brain state-dependent manner.



Adaptive changes in neuronal synchronization in macaque V4

Y. Wang and B. F. Iliescu and J. Ma and K. Josić and V. Dragoi

J Neurosci  31  13204-13  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21917803

A fundamental property of cortical neurons is the capacity to exhibit adaptive changes or plasticity. Whether adaptive changes in cortical responses are accompanied by changes in synchrony between individual neurons and local population activity in sensory cortex is unclear. This issue is important as synchronized neural activity is hypothesized to play an important role in propagating information in neuronal circuits. Here, we show that rapid adaptation (300 ms) to a stimulus of fixed orientation modulates the strength of oscillatory neuronal synchronization in macaque visual cortex (area V4) and influences the ability of neurons to distinguish small changes in stimulus orientation. Specifically, rapid adaptation increases the synchronization of individual neuronal responses with local population activity in the gamma frequency band (30-80 Hz). In contrast to previous reports that gamma synchronization is associated with an increase in firing rates in V4, we found that the postadaptation increase in gamma synchronization is associated with a decrease in neuronal responses. The increase in gamma-band synchronization after adaptation is functionally significant as it is correlated with an improvement in neuronal orientation discrimination performance. Thus, adaptive synchronization between the spiking activity of individual neurons and their local population can enhance temporally insensitive, rate-based-coding schemes for sensory discrimination.



Image sequence reactivation in awake V4 networks

S. L. Eagleman and V. Dragoi

Proc Natl Acad Sci U S A  109  19450-5  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=23129638

In the absence of sensory input, neuronal networks are far from being silent. Whether spontaneous changes in ongoing activity reflect previous sensory experience or stochastic fluctuations in brain activity is not well understood. Here we describe reactivation of stimulus-evoked activity in awake visual cortical networks. We found that continuous exposure to randomly flashed image sequences induces reactivation in macaque V4 cortical networks in the absence of visual stimulation. This reactivation of previously evoked activity is stimulus-specific, occurs only in the same temporal order as the original response, and strengthens with increased stimulus exposures. Importantly, cells exhibiting significant reactivation carry more information about the stimulus than cells that do not reactivate. These results demonstrate a surprising degree of experience-dependent plasticity in visual cortical networks as a result of repeated exposure to unattended information. We suggest that awake reactivation in visual cortex may underlie perceptual learning by passive stimulus exposure.



Adaptive coding of visual information in neural populations

D. A. Gutnisky and V. Dragoi

Nature  452  220-4  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=18337822

Our perception of the environment relies on the capacity of neural networks to adapt rapidly to changes in incoming stimuli. It is increasingly being realized that the neural code is adaptive, that is, sensory neurons change their responses and selectivity in a dynamic manner to match the changes in input stimuli. Understanding how rapid exposure, or adaptation, to a stimulus of fixed structure changes information processing by cortical networks is essential for understanding the relationship between sensory coding and behaviour. Physiological investigations of adaptation have contributed greatly to our understanding of how individual sensory neurons change their responses to influence stimulus coding, yet whether and how adaptation affects information coding in neural populations is unknown. Here we examine how brief adaptation (on the timescale of visual fixation) influences the structure of interneuronal correlations and the accuracy of population coding in the macaque (Macaca mulatta) primary visual cortex (V1). We find that brief adaptation to a stimulus of fixed structure reorganizes the distribution of correlations across the entire network by selectively reducing their mean and variability. The post-adaptation changes in neuronal correlations are associated with specific, stimulus-dependent changes in the efficiency of the population code, and are consistent with changes in perceptual performance after adaptation. Our results have implications beyond the predictions of current theories of sensory coding, suggesting that brief adaptation improves the accuracy of population coding to optimize neuronal performance during natural viewing.



Neural correlates of developing and adapting behavioral biases in speeded choice reactions--an fMRI study on predictive motor coding

S. B. Eickhoff and W. Pomjanski and O. Jakobs and K. Zilles and R. Langner

Cereb Cortex  21  1178-91  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20956614

In reaction-time (RT) tasks with unequally probable stimuli, people respond faster and more accurately in high-probability trials than in low-probability trials. We used functional magnetic resonance imaging to investigate brain activity during the acquisition and adaptation of such biases. Participants responded to arrows pointing to either side with different and previously unknown probabilities across blocks, which were covertly reversed in the middle of some blocks. Changes in response bias were modeled using the development of the selective RT bias at the beginning of a block and after the reversal as parametric regressors. Both fresh development and reversal of an existing response bias were associated with bilateral activations in inferior parietal lobule, intraparietal sulcus, and supplementary motor cortex. Further activations were observed in right temporoparietal junction, dorsolateral prefrontal cortex, and dorsal premotor cortex. Only during the initial development of biases at the beginning of a block did we observe additional activity in ventral premotor cortex and anterior insula, whereas the basal ganglia (bilaterally) were recruited when the bias was adapted to reversed probabilities. Taken together, these areas constitute a network that updates and applies implicit predictions to create an attentional and motor bias that translates environmental probabilities into specific facilitation.



Modality-specific perceptual expectations selectively modulate baseline activity in auditory, somatosensory, and visual cortices

R. Langner and T. Kellermann and F. Boers and W. Sturm and K. Willmes and S. B. Eickhoff

Cereb Cortex  21  2850-62  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21527785

Valid expectations are known to improve target detection, but the preparatory attentional mechanisms underlying this perceptual facilitation remain an open issue. Using functional magnetic resonance imaging, we show here that expecting auditory, tactile, or visual targets, in the absence of stimulation, selectively increased baseline activity in corresponding sensory cortices and decreased activity in irrelevant ones. Regardless of sensory modality, expectancy activated bilateral premotor and posterior parietal areas, supplementary motor area as well as right anterior insula and right middle frontal gyrus. The bilateral putamen was sensitive to the modality specificity of expectations during the unexpected omission of targets. Thus, across modalities, detection improvement arising from selectively directing attention to a sensory modality appears mediated through transient changes in pretarget activity. This flexible advance modulation of baseline activity in sensory cortices resolves ambiguities among previous studies unable to discriminate modality-specific preparatory activity from attentional modulation of stimulus processing. Our results agree with predictive-coding models, which suggest that these expectancy-related changes reflect top-down biases--presumably originating from the observed supramodal frontoparietal network--that modulate signal-detection sensitivity by differentially modifying background activity (i.e., noise level) in different input channels. The putamen appears to code omission-related Bayesian "surprise" that depends on the specificity of predictions.



Sensorimotor mismatch signals in primary visual cortex of the behaving mouse

G. B. Keller and T. Bonhoeffer and M. Hübener

Neuron  74  809-15  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22681686

Studies in anesthetized animals have suggested that activity in early visual cortex is mainly driven by visual input and is well described by a feedforward processing hierarchy. However, evidence from experiments on awake animals has shown that both eye movements and behavioral state can strongly modulate responses of neurons in visual cortex; the functional significance of this modulation, however, remains elusive. Using visual-flow feedback manipulations during locomotion in a virtual reality environment, we found that responses in layer 2/3 of mouse primary visual cortex are strongly driven by locomotion and by mismatch between actual and expected visual feedback. These data suggest that processing in visual cortex may be based on predictive coding strategies that use motor-related and visual input to detect mismatches between predicted and actual visual feedback.
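
A toy rendering of a sensorimotor mismatch signal of the kind described here (hypothetical gain and rectification, not the authors' analysis): the expected visual flow is taken to be proportional to running speed, and the mismatch response is the rectified difference between expected and actual flow.

```python
import numpy as np

def mismatch_signal(running_speed, visual_flow, gain=1.0):
    """Rectified difference between the flow predicted from running speed
    (with an assumed proportionality `gain`) and the actual visual flow, so
    the signal is large when the animal runs but visual feedback is halted."""
    expected_flow = gain * running_speed
    return np.maximum(0.0, expected_flow - visual_flow)

speed = np.array([0.0, 10.0, 10.0, 10.0])
flow = np.array([0.0, 10.0, 0.0, 10.0])      # feedback halted on the third sample
print(mismatch_signal(speed, flow))           # mismatch only where flow stops
```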



Thalamic synchrony and the adaptive gating of information flow to cortex

Q. Wang and R. M. Webber and G. B. Stanley

Nat Neurosci  13  1534-41  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21102447

Although it has long been posited that sensory adaptation serves to enhance information flow in sensory pathways, the neural basis remains elusive. Simultaneous single-unit recordings in the thalamus and cortex in anesthetized rats showed that adaptation differentially influenced thalamus and cortex in a manner that fundamentally changed the nature of information conveyed about vibrissa motion. Using an ideal observer of cortical activity, we found that performance in detecting vibrissal deflections degraded with adaptation while performance in discriminating among vibrissal deflections of different velocities was enhanced, a trend not observed in thalamus. Analysis of simultaneously recorded thalamic neurons did reveal, however, an analogous adaptive change in thalamic synchrony that mirrored the cortical response. An integrate-and-fire model using experimentally measured thalamic input reproduced the observed transformations. The results here suggest a shift in coding strategy with adaptation that directly controls information relayed to cortex, which could have implications for encoding velocity signatures of textures.
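
The abstract mentions an integrate-and-fire model driven by measured thalamic input; the sketch below is a generic leaky integrate-and-fire illustration (all parameters and the synthetic spike trains are assumptions) of why the same number of thalamic spikes drives more cortical output when they arrive synchronously.

```python
def lif_response(spike_counts, decay=0.7, w=0.25, threshold=1.0):
    """Generic leaky integrate-and-fire sketch: the membrane variable decays
    each time bin, integrates weighted thalamic spike counts, and emits an
    output spike (with reset) when it crosses threshold. Because of the leak,
    spikes bunched into the same bins are far more effective than the same
    number of spikes spread out in time."""
    v, out = 0.0, 0
    for n in spike_counts:
        v = decay * v + w * n
        if v >= threshold:
            out += 1
            v = 0.0
    return out

# 100 thalamic spikes in both cases, differing only in synchrony:
asynchronous = [1, 1, 1, 1] * 25
synchronous = [4, 0, 0, 0] * 25
print(lif_response(asynchronous), lif_response(synchronous))   # 0 vs 25
```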



Dynamic predictive coding by the retina

T. Hosoya and S. A. Baccus and M. Meister

Nature  436  71-7  (2005)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=16001064

Retinal ganglion cells convey the visual image from the eye to the brain. They generally encode local differences in space and changes in time rather than the raw image intensity. This can be seen as a strategy of predictive coding, adapted through evolution to the average image statistics of the natural environment. Yet animals encounter many environments with visual statistics different from the average scene. Here we show that when this happens, the retina adjusts its processing dynamically. The spatio-temporal receptive fields of retinal ganglion cells change after a few seconds in a new environment. The changes are adaptive, in that the new receptive field improves predictive coding under the new image statistics. We show that a network model with plastic synapses can account for the large variety of observed adaptations.
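
One way to see why the receptive field that implements predictive coding must change with the image statistics (a generic linear illustration, not the plastic network model proposed in the paper; the covariances and regularization constant are assumptions): the best linear "subtract what the neighbours predict" filter for a unit is tied to the corresponding row of the regularized inverse of the stimulus covariance.

```python
import numpy as np

def predictive_filter_for(covariance, noise=0.1):
    """Return the (regularized) inverse covariance; row i gives, up to scale,
    the weights that remove the part of unit i's input that is linearly
    predictable from the other units."""
    n = covariance.shape[0]
    return np.linalg.inv(covariance + noise * np.eye(n))

# Two environments with different spatial correlation lengths:
positions = np.arange(5)
distance = np.abs(positions[:, None] - positions[None, :])
cov_short = np.exp(-distance / 0.5)
cov_long = np.exp(-distance / 3.0)
print(predictive_filter_for(cov_short)[2])   # filter centred on the middle unit
print(predictive_filter_for(cov_long)[2])    # markedly different surround weights
```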



Adaptive coding is constrained to midline locations in a spatial listening task

J. K. Maier and P. Hehrmann and N. S. Harper and G. M. Klump and D. Pressnitzer and D. McAlpine

J Neurophysiol  108  1856-68  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22773777

Many neurons adapt their spike output to accommodate the prevailing sensory environment. Although such adaptation is thought to improve coding of relevant stimulus features, the relationship between adaptation at the neural and behavioral levels remains to be established. Here we describe improved discrimination performance for an auditory spatial cue (interaural time differences, ITDs) following adaptation to stimulus statistics. Physiological recordings in the midbrain of anesthetized guinea pigs and measurement of discrimination performance in humans both demonstrate improved coding of the most prevalent ITDs in a distribution, but with highest accuracy maintained for ITDs corresponding to frontal locations, suggesting the existence of a fovea for auditory space. A biologically plausible model accounting for the physiological data suggests that neural tuning is stabilized by inhibition to maintain high discriminability for frontal locations. The data support the notion that adaptive coding in the midbrain is a key element of behaviorally efficient sound localization in dynamic acoustic environments.



Correlated input reveals coexisting coding schemes in a sensory cortex

L. Estebanez and S. E. Boustani and A. Destexhe and D. E. Shulz

Nat Neurosci  15  1691-9  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=23160042

As in other sensory modalities, one function of the somatosensory system is to detect coherence and contrast in the environment. To investigate the neural bases of these computations, we applied different spatiotemporal patterns of stimuli to rat whiskers while recording multiple neurons in the barrel cortex. Model-based analysis of the responses revealed different coding schemes according to the level of input correlation. With uncorrelated stimuli on 24 whiskers, we identified two distinct functional categories of neurons, analogous in the temporal domain to simple and complex cells of the primary visual cortex. With correlated stimuli, however, a complementary coding scheme emerged: two distinct cell populations, similar to reinforcing and antagonist neurons described in the higher visual area MT, responded specifically to correlations. We suggest that similar context-dependent coexisting coding strategies may be present in other sensory systems to adapt sensory integration to specific stimulus statistics.



Vocal learning is constrained by the statistics of sensorimotor experience

S. J. Sober and M. S. Brainard

Proc Natl Acad Sci U S A  109  21099-103  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=23213223

The brain uses sensory feedback to correct behavioral errors. Larger errors by definition require greater corrections, and many models of learning assume that larger sensory feedback errors drive larger motor changes. However, an alternative perspective is that larger errors drive learning less effectively because such errors fall outside the range of errors normally experienced and are therefore unlikely to reflect accurate feedback. This is especially crucial in vocal control because auditory feedback can be contaminated by environmental noise or sensory processing errors. A successful control strategy must therefore rely on feedback to correct errors while disregarding aberrant auditory signals that would lead to maladaptive vocal corrections. We hypothesized that these constraints result in compensation that is greatest for smaller imposed errors and least for larger errors. To test this hypothesis, we manipulated the pitch of auditory feedback in singing Bengalese finches. We found that learning driven by larger sensory errors was both slower than that resulting from smaller errors and showed less complete compensation for the imposed error. Additionally, we found that a simple principle could account for these data: the amount of compensation was proportional to the overlap between the baseline distribution of pitch production and the distribution experienced during the shift. Correspondingly, the fraction of compensation approached zero when pitch was shifted outside of the song's baseline pitch distribution. Our data demonstrate that sensory errors drive learning best when they fall within the range of production variability, suggesting that learning is constrained by the statistics of sensorimotor experience.
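
A sketch of the overlap principle stated in the abstract (illustrative; the Gaussian baseline pitch distribution, the histogram-based overlap estimate, and the shift sizes are assumptions, not the authors' fitting procedure):

```python
import numpy as np

def predicted_compensation(baseline_pitches, shift_cents):
    """Predicted fraction of compensation, taken to be proportional to the
    overlap between the baseline distribution of produced pitch and the same
    distribution displaced by the imposed shift (overlap estimated from
    histograms on a common grid)."""
    shifted = baseline_pitches + shift_cents
    lo = min(baseline_pitches.min(), shifted.min())
    hi = max(baseline_pitches.max(), shifted.max())
    bins = np.linspace(lo, hi, 50)
    p, _ = np.histogram(baseline_pitches, bins=bins, density=True)
    q, _ = np.histogram(shifted, bins=bins, density=True)
    return np.sum(np.minimum(p, q)) * (bins[1] - bins[0])   # overlap in [0, 1]

rng = np.random.default_rng(4)
baseline = rng.normal(0.0, 25.0, 5000)        # baseline pitch variability (cents)
for shift in (10, 50, 150):
    print(shift, round(predicted_compensation(baseline, shift), 2))
# Larger imposed shifts fall outside the range of normal variability, so the
# predicted fraction of compensation approaches zero.
```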



Timescales of inference in visual adaptation

B. Wark and A. Fairhall and F. Rieke

Neuron  61  750-61  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19285471

Adaptation is a hallmark of sensory function. Adapting optimally requires matching the dynamics of adaptation to those of changes in the stimulus distribution. Here we show that the dynamics of adaptation in the responses of mouse retinal ganglion cells depend on stimulus history. We hypothesized that the accumulation of evidence for a change in the stimulus distribution controls the dynamics of adaptation, and developed a model for adaptation as an ongoing inference problem. Guided by predictions of this model, we found that the dynamics of adaptation depend on the discriminability of the change in stimulus distribution and that the retina exploits information contained in properties of the stimulus beyond the mean and variance to adapt more quickly when possible.



Linking the computational structure of variance adaptation to biophysical mechanisms

Y. Ozuysal and S. A. Baccus

Neuron  73  1002-15  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22405209

In multiple sensory systems, adaptation to the variance of a sensory input changes the sensitivity, kinetics, and average response over timescales ranging from < 100 ms to tens of seconds. Here, we present a simple, biophysically relevant model of retinal contrast adaptation that accurately captures both the membrane potential response and all adaptive properties. The adaptive component of this model is a first-order kinetic process of the type used to describe ion channel gating and synaptic transmission. From the model, we conclude that all adaptive dynamics can be accounted for by depletion of a signaling mechanism, and that variance adaptation can be explained as adaptation to the mean of a rectified signal. The model parameters show strong similarity to known properties of bipolar cell synaptic vesicle pools. Diverse types of adaptive properties that implement theoretical principles of efficient coding can be generated by a single type of molecule or synapse with just a few microscopic states.
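
A hedged sketch of the depletion idea (not the paper's fitted model; all parameter values are assumptions): a resource that is consumed in proportion to a rectified drive and recovers with first-order kinetics produces variance adaptation simply by adapting to the mean of the rectified signal.

```python
import numpy as np

def rectified_depletion_model(stimulus, dt=0.001, tau_recovery=0.5, k=5.0):
    """Rectify the stimulus, deplete a resource in proportion to the rectified
    drive, and let the resource recover with first-order kinetics; the output
    is drive times the remaining resource. Higher stimulus variance means a
    larger mean rectified drive, more depletion, and hence lower gain, with no
    explicit variance computation anywhere."""
    resource = 1.0
    output = []
    for s in stimulus:
        drive = max(s, 0.0)                   # rectification
        resource += dt * ((1.0 - resource) / tau_recovery - k * drive * resource)
        output.append(drive * resource)
    return np.array(output), resource

rng = np.random.default_rng(5)
low = rng.normal(0, 0.5, 3000)
high = rng.normal(0, 2.0, 3000)
print(rectified_depletion_model(low)[1], rectified_depletion_model(high)[1])
# The resource, and with it the gain, settles at a lower level for the
# high-variance stimulus.
```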



Expectation and surprise determine neural population responses in the ventral visual stream

T. Egner and J. M. Monti and C. Summerfield

J Neurosci  30  16601-8  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21147999

Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, "predictive coding" models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction ("face expectation") and prediction error ("face surprise"), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects' perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se.
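
A toy rendering of the summation hypothesis tested here, with assumed unit weights and a deliberately simplified "face surprise" term (this is not the computational model fitted in the paper): the population response is a weighted sum of a face-expectation term and a face-surprise (prediction error) term rather than a pure face-detection response.

```python
def ffa_response(p_face, is_face, w_expectation=1.0, w_surprise=1.0):
    """Weighted sum of an expectation term (the prior probability of a face)
    and a surprise term (large only when a face appears despite a low face
    expectation). Weights are assumed, not fitted."""
    surprise = is_face * (1.0 - p_face)
    return w_expectation * p_face + w_surprise * surprise

for p in (0.25, 0.50, 0.75):                       # low / medium / high face expectation
    face, house = ffa_response(p, 1), ffa_response(p, 0)
    print(p, round(face, 2), round(house, 2))
# The face-vs-house response difference shrinks as face expectation rises,
# mirroring the interaction reported in the abstract.
```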



Predictive coding or evidence accumulation? False inference and neuronal fluctuations

G. Hesselmann and S. Sadaghiani and K. J. Friston and A. Kleinschmidt

PLoS One  5  e9926  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20369004

Perceptual decisions can be made when sensory input affords an inference about what generated that input. Here, we report findings from two independent perceptual experiments conducted during functional magnetic resonance imaging (fMRI) with a sparse event-related design. The first experiment, in the visual modality, involved forced-choice discrimination of coherence in random dot kinematograms that contained either subliminal or periliminal motion coherence. The second experiment, in the auditory domain, involved free response detection of (non-semantic) near-threshold acoustic stimuli. We analysed fluctuations in ongoing neural activity, as indexed by fMRI, and found that neuronal activity in sensory areas (extrastriate visual and early auditory cortex) biases perceptual decisions towards correct inference and not towards a specific percept. Hits (detection of near-threshold stimuli) were preceded by significantly higher activity than both misses of identical stimuli or false alarms, in which percepts arise in the absence of appropriate sensory input. In accord with predictive coding models and the free-energy principle, this observation suggests that cortical activity in sensory brain areas reflects the precision of prediction errors and not just the sensory evidence or prediction errors per se.



A neuronal model of predictive coding accounting for the mismatch negativity

C. Wacongne and J.-P. Changeux and S. Dehaene

J Neurosci  32  3665-78  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22423089

The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spiking excitatory and inhibitory neurons interconnected in a layered cortical architecture with distinct input, predictive, and prediction error units. A spike-timing dependent learning rule, relying upon NMDA receptor synaptic transmission, allows the network to adjust its internal predictions and use a memory of the recent past inputs to anticipate future stimuli based on transition statistics. We demonstrate that this simple architecture can account for the major empirical properties of the MMN. These include a frequency-dependent response to rare deviants, a response to unexpected repeats in alternating sequences (ABABAA…), a lack of consideration of the global sequence context, a response to sound omission, and a sensitivity of the MMN to NMDA receptor antagonists. Novel predictions are presented, and a new magnetoencephalography experiment in healthy human subjects is described that validates our key hypothesis: the MMN results from active cortical prediction rather than passive synaptic habituation.
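
A toy, non-spiking illustration of prediction from transition statistics (not the spiking, NMDA-dependent model proposed in the paper; the incremental update rule and the 0.5 prior are assumptions): an estimate of P(next tone | current tone) is updated on-line, and the "prediction error" for each transition is one minus the estimated probability of what was actually heard.

```python
from collections import defaultdict

def prediction_error_trace(tones, learning_rate=0.1):
    """Maintain running estimates of transition probabilities between tones
    (starting from an assumed prior of 0.5) and emit, for each transition, a
    prediction error of 1 - P(observed next tone | current tone). Frequently
    repeated transitions yield small errors; rare deviants yield large ones."""
    p = defaultdict(lambda: defaultdict(lambda: 0.5))
    errors = []
    for prev, nxt in zip(tones, tones[1:]):
        errors.append(1.0 - p[prev][nxt])
        for candidate in set(tones):               # nudge estimates toward the outcome
            target = 1.0 if candidate == nxt else 0.0
            p[prev][candidate] += learning_rate * (target - p[prev][candidate])
    return errors

# Frequent standards ('A') with a single rare deviant ('B'):
sequence = list("AAAAAAAAAAABAAAAAAAA")
print([round(e, 2) for e in prediction_error_trace(sequence)])
# The error is large at the deviant (and at the never-seen B->A transition)
# and decays for the repeatedly confirmed standard.
```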


Temporal predictive codes for spoken words in auditory cortex

P. Gagnepain and R. N. Henson and M. H. Davis

Curr Biol  22  615-21  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22425155

Humans can recognize spoken words with unmatched speed and accuracy. Hearing the initial portion of a word such as "formu…" is sufficient for the brain to identify "formula" from the thousands of other words that partially match. Two alternative computational accounts propose that partially matching words (1) inhibit each other until a single word is selected ("formula" inhibits "formal" by lexical competition) or (2) are used to predict upcoming speech sounds more accurately (segment prediction error is minimal after sequences like "formu…"). To distinguish these theories we taught participants novel words (e.g., "formubo") that sound like existing words ("formula") on two successive days. Computational simulations show that knowing "formubo" increases lexical competition when hearing "formu…", but reduces segment prediction error. Conversely, when the sounds in "formula" and "formubo" diverge, the reverse is observed. The time course of magnetoencephalographic brain responses in the superior temporal gyrus (STG) is uniquely consistent with a segment prediction account. We propose a predictive coding model of spoken word recognition in which STG neurons represent the difference between predicted and heard speech sounds. This prediction error signal explains the efficiency of human word recognition and simulates neural responses in auditory regions.



Effects of cue-triggered expectation on cortical processing of taste

C. L. Samuelsen and M. P. H. Gardner and A. Fontanini

Neuron  74  410-22  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22542192

Animals are not passive spectators of the sensory world in which they live. In natural conditions they often sense objects on the basis of expectations initiated by predictive cues. Expectation profoundly modulates neural activity by altering the background state of cortical networks and modulating sensory processing. The link between these two effects is not known. Here, we studied how cue-triggered expectation of stimulus availability influences processing of sensory stimuli in the gustatory cortex (GC). We found that expected tastants were coded more rapidly than unexpected stimuli. The faster onset of sensory coding related to anticipatory priming of GC by associative auditory cues. Simultaneous recordings and pharmacological manipulations of GC and basolateral amygdala revealed the role of top-down inputs in mediating the effects of anticipatory cues. Altogether, these data provide a model for how cue-triggered expectation changes the state of sensory cortices to achieve rapid processing of natural stimuli.



Repetition suppression and expectation suppression are dissociable in time in early auditory evoked fields

A. Todorovic and F. P. de Lange

J Neurosci  32  13389-95  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=23015429

Repetition of a stimulus, as well as valid expectation that a stimulus will occur, both attenuate the neural response to it. These effects, repetition suppression and expectation suppression, are typically confounded in paradigms in which the nonrepeated stimulus is also relatively rare (e.g., in oddball blocks of mismatch negativity paradigms, or in repetition suppression paradigms with multiple repetitions before an alternation). However, recent hierarchical models of sensory processing inspire the hypothesis that the two might be separable in time, with repetition suppression occurring earlier, as a consequence of local transition probabilities, and suppression by expectation occurring later, as a consequence of learnt statistical regularities. Here we test this hypothesis in an auditory experiment by orthogonally manipulating stimulus repetition and stimulus expectation and, using magnetoencephalography, measuring the neural response over time in human subjects. We found that stimulus repetition (but not stimulus expectation) attenuates the early auditory response (40-60 ms), while stimulus expectation (but not stimulus repetition) attenuates the subsequent, intermediate stage of auditory processing (100-200 ms). These findings are well in line with hierarchical predictive coding models, which posit sequential stages of prediction error resolution, contingent on the level at which the hypothesis is generated.
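
The key design point, manipulating repetition and expectation orthogonally, can be sketched as a simple trial generator. The block lengths and probabilities below are placeholders rather than the published parameters; the point is only that each block type contains both expected and unexpected repeats and alternations.

    import numpy as np

    def make_block(n_pairs, p_repeat, rng):
        """Generate (first_tone, second_tone) pairs from a two-tone set; p_repeat
        sets the block-level expectation that the second tone repeats the first."""
        pairs = []
        for _ in range(n_pairs):
            first = rng.choice(["A", "B"])
            second = first if rng.random() < p_repeat else ("B" if first == "A" else "A")
            pairs.append((first, second))
        return pairs

    rng = np.random.default_rng(0)
    rep_expected = make_block(100, p_repeat=0.75, rng=rng)  # repetitions are the rule
    alt_expected = make_block(100, p_repeat=0.25, rng=rng)  # alternations are the rule

    # The four cells of the repetition-by-expectation design:
    #   expected repeat        : repeats drawn from rep_expected blocks
    #   unexpected repeat      : repeats drawn from alt_expected blocks
    #   expected alternation   : alternations drawn from alt_expected blocks
    #   unexpected alternation : alternations drawn from rep_expected blocks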



Predictive feedback can account for biphasic responses in the lateral geniculate nucleus

J. F. M. Jehee and D. H. Ballard

PLoS Comput Biol  5  e1000373  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19412529

Biphasic neural response properties, where the optimal stimulus for driving a neural response changes from one stimulus pattern to the opposite stimulus pattern over short periods of time, have been described in several visual areas, including lateral geniculate nucleus (LGN), primary visual cortex (V1), and middle temporal area (MT). We describe a hierarchical model of predictive coding and simulations that capture these temporal variations in neuronal response properties. We focus on the LGN-V1 circuit and find that after training on natural images the model exhibits the brain's LGN-V1 connectivity structure, in which the structure of V1 receptive fields is linked to the spatial alignment and properties of center-surround cells in the LGN. In addition, the spatio-temporal response profile of LGN model neurons is biphasic in structure, resembling the biphasic response structure of neurons in cat LGN. Moreover, the model displays a specific pattern of influence of feedback, where LGN receptive fields that are aligned over a simple cell receptive field zone of the same polarity decrease their responses while neurons of opposite polarity increase their responses with feedback. This phase-reversed pattern of influence was recently observed in neurophysiology. These results corroborate the idea that predictive feedback is a general coding strategy in the brain.
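
The hierarchical scheme the model builds on can be summarised by a pair of update rules. The snippet below is a generic single-level linear predictive-coding sketch in the spirit of this class of models, with assumed learning rates and dimensions; it is not the paper's full LGN-V1 implementation. Error units compute the difference between the input and the top-down reconstruction, and representation units are driven by the feedforward error.

    import numpy as np

    def predictive_coding_step(x, r, U, lr_r=0.1, lr_U=0.01):
        """One relaxation/learning step of a single-level linear predictive-coding
        model (illustrative sketch only). x: input; r: latent causes; U: generative
        weights, so that the top-down prediction of the input is U @ r."""
        e = x - U @ r                  # error units: input minus prediction
        r = r + lr_r * (U.T @ e)       # latent causes driven by the feedforward error
        U = U + lr_U * np.outer(e, r)  # Hebbian-like learning of the generative weights
        return e, r, U

    rng = np.random.default_rng(0)
    x = rng.normal(size=16)            # e.g. one image patch
    r = np.zeros(4)
    U = rng.normal(scale=0.1, size=(16, 4))
    for _ in range(50):
        e, r, U = predictive_coding_step(x, r, U)
    print("residual prediction error:", np.linalg.norm(e))

In the full model it is the temporal dynamics of this feedback loop, not the static equations alone, that produce the biphasic response profiles described above.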



Coding of stimulus sequences by population responses in visual cortex

A. Benucci and D. L. Ringach and M. Carandini

Nat Neurosci  12  1317-24  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19749748

Neuronal populations in sensory cortex represent time-changing sensory input through a spatiotemporal code. What are the rules that govern this code? We measured membrane potentials and spikes from neuronal populations in cat visual cortex (V1) using voltage-sensitive dyes and electrode arrays. We first characterized the population response to a single orientation. As response amplitude grew, the population tuning width remained constant for membrane potential responses and became progressively sharper for spike responses. We then asked how these single-orientation responses combine to code for successive orientations. We found that they combined through simple linear summation. Linearity, however, was violated after stimulus offset, when responses exhibited an unexplained persistence. As a result of linearity, the interactions between responses to successive stimuli were minimal. Our results indicate that higher cortical areas may reconstruct the stimulus sequence from V1 population responses through a simple instantaneous decoder. Therefore, spatial and temporal codes in area V1 operate largely independently.
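
The linearity result can be pictured with a toy population model. Everything below (the Gaussian tuning, the exponential decay, the template shape) is an assumption for illustration, not the authors' analysis; the point is simply that if the response to one orientation is a fixed spatiotemporal template, the response to a sequence is the sum of time-shifted templates, and an instantaneous decoder can read each orientation back off the population profile.

    import numpy as np

    n_ori, n_t = 8, 6                       # orientation channels, template length (frames)

    def template(ori):
        """Assumed population response to a single orientation: Gaussian tuning
        across channels, exponential decay over time. Shape: (n_ori, n_t)."""
        tuning = np.exp(-0.5 * (np.arange(n_ori) - ori) ** 2)
        decay = np.exp(-np.arange(n_t) / 2.0)
        return np.outer(tuning, decay)

    rng = np.random.default_rng(0)
    seq = rng.integers(0, n_ori, size=30)   # one random orientation per frame
    R = np.zeros((n_ori, len(seq) + n_t))
    for t, ori in enumerate(seq):
        R[:, t:t + n_t] += template(ori)    # linear summation of single-stimulus responses

    # Instantaneous decoder: at each frame, read off the most active channel.
    decoded = R[:, :len(seq)].argmax(axis=0)
    print("decoding accuracy:", np.mean(decoded == seq))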



Predictive coding as a model of response properties in cortical area V1

M. W. Spratling

J Neurosci  30  3531-43  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20203213

A simple model is shown to account for a large range of V1 classical and nonclassical receptive field properties, including orientation tuning, spatial and temporal frequency tuning, cross-orientation suppression, surround suppression, and facilitation and inhibition by flankers and textured surrounds. The model is an implementation of the predictive coding theory of cortical function and thus provides a single computational explanation for a diverse range of neurophysiological findings. Furthermore, since predictive coding can be related to the biased competition theory and is a specific example of more general theories of hierarchical perceptual inference, the current results relate V1 response properties to a wider, more unified, framework for understanding cortical function.
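
For readers unfamiliar with PC/BC, its characteristic update uses divisive rather than subtractive prediction errors. The snippet below is a schematic of that style of update; the weight normalisation and parameter values are assumptions and differ in detail from the published model.

    import numpy as np

    def pcbc_step(x, y, W, V, eps1=1e-4, eps2=1e-2):
        """One iteration of a PC/BC-style update with divisive prediction errors
        (schematic; normalisation and parameters differ from the published model).
        x: input; y: prediction-node activations; W: feedforward weights;
        V: feedback (generative) weights, here a rescaled transpose of W."""
        e = x / (eps2 + V @ y)        # error nodes: input divided by its reconstruction
        y = (eps1 + y) * (W @ e)      # prediction nodes: multiplicative update by the error
        return e, y

    rng = np.random.default_rng(0)
    n_inputs, n_preds = 20, 5
    W = rng.random((n_preds, n_inputs))
    W /= W.sum(axis=1, keepdims=True)          # assumed normalisation: rows sum to 1
    V = W.T / W.T.max(axis=0, keepdims=True)   # assumed rescaling of the feedback weights
    x = rng.random(n_inputs)
    y = np.zeros(n_preds)
    for _ in range(30):
        e, y = pcbc_step(x, y, W, V)
    # At (approximate) convergence, e is near 1 wherever the predictions explain the input.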



Predictive coding accounts for V1 response properties recorded using reverse correlation

M. W. Spratling

Biol Cybern  106  37-49  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22350506

PC/BC ("Predictive coding/Biased competition") is a simple computational model that has previously been shown to explain a very wide range of V1 response properties. This article extends work on the PC/BC model of V1 by showing that it can also account for V1 response properties measured using the reverse correlation methodology. Reverse correlation employs an experimental procedure that is significantly different from that used in more typical neurophysiological experiments, and measures some distinctly different response properties in V1. Despite these differences PC/BC successfully accounts for the data. The current results thus provide additional support for the PC/BC model of V1 and further demonstrate that PC/BC offers a unified explanation for the seemingly diverse range of behaviours observed in primary visual cortex.



Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment

P. Berkes and G. Orbán and M. Lengyel and J. Fiser

Science  331  83-7  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21212356

The brain maintains internal models of its environment to interpret sensory inputs and to prepare actions. Although behavioral studies have demonstrated that these internal models are optimally adapted to the statistics of the environment, the neural underpinning of this adaptation is unknown. Using a Bayesian model of sensory cortical processing, we related stimulus-evoked and spontaneous neural activities to inferences and prior expectations in an internal model and predicted that they should match if the model is statistically optimal. To test this prediction, we analyzed visual cortical activity of awake ferrets during development. Similarity between spontaneous and evoked activities increased with age and was specific to responses evoked by natural scenes. This demonstrates the progressive adaptation of internal models to the statistics of natural stimuli at the neural level.
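
The paper's central comparison, whether the distribution of spontaneous activity patterns converges to the distribution of patterns evoked by natural stimuli, can be sketched with synthetic data and a plug-in divergence estimate. The published analysis binarised multi-electrode recordings and controlled for sampling issues; the floor value and the random data below are assumptions for illustration only.

    import numpy as np
    from collections import Counter

    def pattern_distribution(binary_activity):
        """Empirical distribution over binary population patterns (one row = one sample)."""
        counts = Counter(map(tuple, binary_activity))
        total = sum(counts.values())
        return {pattern: c / total for pattern, c in counts.items()}

    def kl_divergence(p, q, floor=1e-6):
        """Plug-in KL(p || q) over the union of observed patterns, with a small floor."""
        patterns = set(p) | set(q)
        return sum(p.get(s, floor) * np.log(p.get(s, floor) / q.get(s, floor))
                   for s in patterns)

    rng = np.random.default_rng(0)
    n_units = 8
    # Synthetic stand-ins; in the study these are binarised multiunit recordings
    # from ferret visual cortex at different ages and under different stimuli.
    evoked = (rng.random((5000, n_units)) < 0.2).astype(int)
    spontaneous = (rng.random((5000, n_units)) < 0.2).astype(int)
    d = kl_divergence(pattern_distribution(evoked), pattern_distribution(spontaneous))
    print("KL(evoked || spontaneous):", d)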



Suppression of cortical neural variability is stimulus- and state-dependent

B. White and L. F. Abbott and J. Fiser

J Neurophysiol  108  2383-92  (2012)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22896720

Internally generated, spontaneous activity is ubiquitous in the cortex, yet it does not appear to have a significant negative impact on sensory processing. Various studies have found that stimulus onset reduces the variability of cortical responses, but the characteristics of this suppression remain unexplored. By recording multiunit activity from awake and anesthetized rats, we investigated whether and how this noise suppression depends on properties of the stimulus and on the state of the cortex. In agreement with theoretical predictions, we found that the degree of noise suppression in awake rats has a nonmonotonic dependence on the temporal frequency of a flickering visual stimulus, with an optimal frequency for noise suppression of ~2 Hz. This effect cannot be explained by features of the power spectrum of the spontaneous neural activity. The nonmonotonic frequency dependence of the suppression of variability gradually disappears under increasing levels of anesthesia and shifts to a monotonic pattern of increasing suppression with decreasing frequency. Signal-to-noise ratios show a similar, although inverted, dependence on cortical state and frequency. These results suggest the existence of an active noise suppression mechanism in the awake cortical system that is tuned to support signal propagation and coding.
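
One standard way to quantify the suppression of variability, used here purely as an illustrative assumption since the paper's exact measure may differ, is the Fano factor of spike counts across repeated trials, computed separately before and after stimulus onset; sweeping this over flicker frequencies would trace out the tuning described above.

    import numpy as np

    def fano_factor(counts):
        """Trial-to-trial Fano factor (variance / mean) of spike counts, averaged
        over units; `counts` has shape (trials, units)."""
        mean = counts.mean(axis=0)
        var = counts.var(axis=0)
        valid = mean > 0
        return np.mean(var[valid] / mean[valid])

    rng = np.random.default_rng(0)
    n_trials, n_units = 200, 40
    # Synthetic doubly stochastic counts: rate fluctuations across trials make the
    # spontaneous period super-Poisson; the stimulus partly quenches those fluctuations.
    base_rate = rng.gamma(shape=2.0, scale=2.0, size=(n_trials, n_units))
    pre = rng.poisson(base_rate)                                   # before stimulus onset
    post = rng.poisson(0.5 * base_rate + 0.5 * base_rate.mean())   # after stimulus onset
    print("Fano factor before onset:", fano_factor(pre))
    print("Fano factor after onset: ", fano_factor(post))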