NSci 8217 Spring 2012: Perceptual Constancy

Spatial updating and the maintenance of visual constancy

E. M. Klier and D. E. Angelaki

Neuroscience  156  801-18  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=18786618

Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways.



The role of chromatic scene statistics in color constancy: spatial integration

J. Golz

J Vis  8  6.1-16  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19146336

The human visual system has the ability to perceive approximately constant surface colors despite changes in the retinal input that are induced by changes in illumination. Based on computational analyses as well as psychophysical experiments, J. Golz and D. I. MacLeod (2002) proposed that the correlation between luminance and redness within the retinal image of a scene is used as a cue to the chromatic properties of the illuminant. However, J. J. Granzier, E. Brenner, F. W. Cornelissen, and J. B. Smeets (2005) found that the spatial extent in the field of vision that is relevant for the effect of the luminance-redness correlation on color appearance is very local, and they therefore questioned whether this scene statistic is used for estimating the illuminant. Here, I present evidence that the spatial extent is substantially more global than claimed by Granzier et al. and consistent with the hypothesis that this scene statistic is used for estimating the illuminant. It is further shown that two figural parameters of the stimuli influence the spatial extent and hence could have contributed to its underestimation by Granzier et al. Finally, it is shown that the spatial extent relevant for the effect of mean surround chromaticity on color appearance is very similar to that found for the luminance-redness correlation.
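
A minimal sketch of the scene statistic at issue, assuming per-pixel cone excitations are available as an array (the L+M luminance proxy and the L/(L+M) "redness" coordinate are standard MacLeod-Boynton-style choices, not code from the paper):

    import numpy as np

    def luminance_redness_correlation(lms):
        # lms: hypothetical (n_pixels, 3) array of L, M, S cone excitations
        luminance = lms[:, 0] + lms[:, 1]   # L + M as a luminance proxy
        redness = lms[:, 0] / luminance     # L / (L + M): the "redness" axis
        return np.corrcoef(luminance, redness)[0, 1]

On Golz and MacLeod's proposal, this correlation, computed over some spatial extent of the scene, is informative about the redness of the illuminant; the dispute above concerns how large that extent is.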



Can illumination estimates provide the basis for color constancy?

J. J. M. Granzier and E. Brenner and J. B. J. Smeets

J Vis  9  18.1-11  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19757957

Objects hardly appear to change color when the spectral distribution of the illumination changes: a phenomenon known as color constancy. Color constancy could either be achieved by relying on properties that are insensitive to changes in the illumination (such as spatial color contrast) or by compensating for the estimated chromaticity of the illuminant. We examined whether subjects can judge the illuminant's color well enough to account for their own color constancy. We found that subjects were very poor at judging the color of a lamp from the light reflected by the scene it illuminated. They were much better at judging the color of a surface within the scene. We conclude that color constancy must be achieved by relying on relationships that are insensitive to the illumination rather than by explicitly judging the color of the illumination.



Spatial constancy mechanisms in motor control

W. P. Medendorp

Philos Trans R Soc Lond B Biol Sci  366  476-91  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21242137

The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process: gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye-head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals.
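
As a concrete illustration of gaze-centred remapping, a deliberately simplified 2-D sketch (mine, not the review's model; the review stresses that real 3-D updating is nonlinear and must handle eye rotations, which plain vector subtraction ignores):

    import numpy as np

    def update_target(target_retinal, saccade_vector):
        # Predict a remembered target's post-saccadic gaze-centred location
        # by subtracting the saccade's retinal displacement vector.
        return np.asarray(target_retinal, float) - np.asarray(saccade_vector, float)

    # A target flashed 10 deg right of fixation, followed by a 10 deg
    # rightward saccade, should afterwards be represented at the fovea:
    print(update_target([10.0, 0.0], [10.0, 0.0]))   # -> [0. 0.]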



Dynamic sound localization during rapid eye-head gaze shifts

J. Vliegen and T. J. Van Grootel and A. J. Van Opstal

J Neurosci  24  9291-302  (2004)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=15496665

Human sound localization relies on implicit head-centered acoustic cues. However, to create a stable and accurate representation of sounds despite intervening head movements, the acoustic input should be continuously combined with feedback signals about changes in head orientation. Alternatively, the auditory target coordinates could be updated in advance by using either the preprogrammed gaze-motor command or the sensory target coordinates to which the intervening gaze shift is made ("predictive remapping"). So far, experiments have not been able to dissociate these alternatives. Here, we study whether the auditory system compensates for ongoing two-dimensional saccadic eye and head movements that occur during target presentation. In this case, the system has to deal with dynamic changes of the acoustic cues as well as with rapid changes in relative eye and head orientation that cannot be preprogrammed by the audiomotor system. We performed visual-auditory double-step experiments in two dimensions in which a brief sound burst was presented while subjects made a saccadic eye-head gaze shift toward a previously flashed visual target. Our results show that localization responses under these dynamic conditions remain accurate. Multiple linear regression analysis revealed that the intervening eye and head movements are fully accounted for. Moreover, elevation response components were more accurate for longer-duration sounds (50 msec) than for extremely brief sounds (3 msec), for all localization conditions. Taken together, these results cannot be explained by a predictive remapping scheme. Rather, we conclude that the human auditory system adequately processes dynamically varying acoustic cues that result from self-initiated rapid head movements to construct a stable representation of the target in world coordinates. This signal is subsequently used to program accurate eye and head localization responses.
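
A hedged sketch of the regression logic described above, on fabricated example data (variable names and the sign convention are mine): localization responses are modeled as a linear function of target location and the intervening eye and head displacements, and full compensation shows up as displacement coefficients at their ideal values.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    target = rng.uniform(-30, 30, n)       # sound azimuth (deg), head-centered
    eye_shift = rng.uniform(-20, 20, n)    # intervening eye-in-head displacement
    head_shift = rng.uniform(-20, 20, n)   # intervening head-in-space displacement
    # Fabricate responses that fully compensate for both movements, plus noise:
    response = target - eye_shift - head_shift + rng.normal(0, 2, n)

    X = np.column_stack([np.ones(n), target, eye_shift, head_shift])
    coef, *_ = np.linalg.lstsq(X, response, rcond=None)
    print(coef)   # approximately [0, 1, -1, -1]: movements fully accounted for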



Predictive remapping of visual features precedes saccadic eye movements

D. Melcher

Nat Neurosci  10  903-7  (2007)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=17589507

The frequent occurrence of saccadic eye movements raises the question of how information is combined across separate glances into a stable, continuous percept. Here I show that visual form processing is altered at both the current fixation position and the location of the saccadic target before the saccade. When human observers prepared to follow a displacement of the stimulus with the eyes, visual form adaptation was transferred from current fixation to the future gaze position. This transfer of adaptation also influenced the perception of test stimuli shown at an intermediate position between fixation and saccadic target. Additionally, I found a presaccadic transfer of adaptation when observers prepared to move their eyes toward a stationary adapting stimulus in peripheral vision. The remapping of visual processing, demonstrated here with form adaptation, may help to explain our impression of a smooth transition, with no temporal delay, of visual perception across glances.



Electrophysiological correlates of presaccadic remapping in humans

N. A. Parks and P. M. Corballis

Psychophysiology  45  776-83  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=18513363

Saccadic eye movements cause rapid displacements of space, yet the visual field is perceived as stable. A mechanism that may contribute to maintaining visual stability is the process of predictive remapping, in which receptive fields shift to their future locations prior to the onset of a saccade. We investigated electrophysiological correlates of remapping in humans using event-related potentials. Subjects made horizontal saccades that caused a visual stimulus to remain within a single visual field or to cross the vertical meridian, shifting between visual hemifields. When an impending saccade would shift the stimulus between visual fields (requiring remapping between cerebral hemispheres), presaccadic potentials showed increased bilaterality, having greater amplitudes over the hemisphere ipsilateral to the grating stimulus. Results are consistent with interhemispheric remapping of visual space in anticipation of an upcoming saccade.



Evidence for the predictive remapping of visual attention

S. Mathôt and J. Theeuwes

Exp Brain Res  200  117-22  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19882149

When attending to an object in visual space, perception of the object remains stable despite frequent eye movements. It is assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Remapping is predictive when it starts before the actual eye movement. Until now, most evidence for predictive remapping has been obtained in single cell studies involving monkeys. Here, we report that predictive remapping affects visual attention prior to an eye movement. We show that, immediately following a saccade, attention has partly shifted with the saccade (Experiment 1). Importantly, we show that remapping is predictive and affects the locus of attention prior to saccade execution (Experiments 2 and 3): before the saccade was executed, there was attentional facilitation at the location which, after the saccade, would retinotopically match the attended location.



Auditory spatial perception dynamically realigns with changing eye position

B. Razavi and W. E. O'Neill and G. D. Paige

J Neurosci  27  10249-58  (2007)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=17881531

Audition and vision both form spatial maps of the environment in the brain, and their congruency requires alignment and calibration. Because audition is referenced to the head and vision is referenced to movable eyes, the brain must accurately account for eye position to maintain alignment between the two modalities as well as perceptual space constancy. Changes in eye position are known to shift sound localization, but variably and inconsistently, suggesting subtle shortcomings in the accuracy or use of eye position signals. We systematically and directly quantified sound localization across a broad spatial range and over time after changes in eye position. A sustained fixation task addressed the spatial (steady-state) attributes of eye position-dependent effects on sound localization. Subjects continuously fixated visual reference spots straight ahead (center), to the left (20 degrees), or to the right (20 degrees) of the midline in separate sessions while localizing auditory targets using a laser pointer guided by peripheral vision. An alternating fixation task focused on the temporal (dynamic) aspects of auditory spatial shifts after changes in eye position. Localization proceeded as in sustained fixation, except that eye position alternated between the three fixation references over multiple epochs, each lasting minutes. Auditory space shifted by approximately 40% toward the new eye position, and did so dynamically over several minutes. We propose that this spatial shift reflects an adaptation mechanism for aligning the "straight-ahead" of perceived sensory-motor maps, particularly during early childhood when normal ocular alignment is achieved, but also resolving challenges to normal spatial perception throughout life.




Keeping the world a constant size: object constancy in human touch

M. Taylor-Clarke and P. Jacobsen and P. Haggard

Nat Neurosci  7  219-20  (2004)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=14966526

The perceived size of objects touching different regions of skin varies across the body surface by much less than is predicted from variations in tactile receptor density. Here we show that altering the visual experience of the body alters perceived tactile distances. We propose that the brain attempts to preserve tactile size constancy by rescaling the primary, distorted body-surface representation into object-centered space according to visual experience of the body.



Investigation of perceptual constancy in the temporal-envelope domain

M. Ardoint and C. Lorenzi and D. Pressnitzer and A. Gorea

J Acoust Soc Am  123  1591-601  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=18345847

The ability to discriminate complex temporal envelope patterns submitted to temporal compression or expansion was assessed in normal-hearing listeners. An XAB matching-to-sample procedure was used. X, the reference stimulus, is obtained by applying the sum of two inharmonically related sinusoids to a broadband noise carrier. A and B are obtained by multiplying the frequency of each modulation component of X by the same time expansion/compression factor, alpha (alpha in [0.35, 2.83]). For each trial, A or B is a time-reversed rendering of X, and the listeners' task is to choose which of the two is matched by X. Overall, the results indicate that discrimination performance degrades for increasing amounts of time expansion/compression (i.e., when alpha departs from 1), regardless of the frequency spacing of modulation components and the peak-to-trough ratio of the complex envelopes. An auditory model based on envelope extraction followed by a memory-limited, template-matching process accounted for results obtained without time scaling of stimuli, but generally underestimated discrimination ability with either time expansion or compression, especially with the longer stimulus durations. This result is consistent with partial or incomplete perceptual normalization of envelope patterns.
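
A hedged reconstruction of this stimulus family (the modulation frequencies, depth, and duration below are illustrative stand-ins, not the paper's values; only the structure follows the description above):

    import numpy as np

    def make_stimulus(alpha, f1=4.0, f2=4.0 * np.sqrt(2.0), fs=44100,
                      dur=1.0, depth=0.5, seed=0):
        # Broadband noise carrier modulated by the sum of two inharmonically
        # related sinusoids; alpha scales both modulation frequencies, so
        # alpha > 1 time-compresses the envelope and alpha < 1 expands it.
        t = np.arange(int(fs * dur)) / fs
        envelope = 1.0 + 0.5 * depth * (np.sin(2 * np.pi * alpha * f1 * t)
                                        + np.sin(2 * np.pi * alpha * f2 * t))
        carrier = np.random.default_rng(seed).standard_normal(t.size)
        return envelope * carrier

    x = make_stimulus(alpha=1.0)   # reference stimulus X
    a = make_stimulus(alpha=2.0)   # time-compressed comparison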



Comparing passive and active hearing: spectral analysis of transient sounds in bats

H. R. Goerlitz and M. Hübner and L. Wiegrebe

J Exp Biol  211  1850-8  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=18515714

In vision, colour constancy allows the evaluation of the colour of objects independent of the spectral composition of a light source. In the auditory system, comparable mechanisms have been described that allow the evaluation of the spectral shape of sounds independent of the spectral composition of ambient background sounds. For echolocating bats, the evaluation of spectral shape is vitally important both for the analysis of external sounds and the analysis of the echoes of self-generated sonar emissions. Here, we investigated how the echolocating bat Phyllostomus discolor evaluates the spectral shape of transient sounds both in passive hearing and in echolocation as a specialized mode of active hearing. Bats were trained to classify transients of different spectral shape as low- or highpass. We then assessed how the spectral shape of an ambient background noise influenced the spontaneous classification of the transients. In the passive-hearing condition, the bats spontaneously changed their classification boundary depending on the spectral shape of the background. In the echo-acoustic condition, the classification boundary did not change, although the background and spectral-shape manipulations were identical in the two conditions. These data show that auditory processing differs between passive and active hearing: echolocation represents an independent mode of active hearing with its own rules of auditory spectral analysis.



Auditory color constancy: calibration to reliable spectral properties across nonspeech context and targets

C. E. Stilp and J. M. Alexander and M. Kiefte and K. R. Kluender

Atten Percept Psychophys  72  470-80  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20139460

Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds (extensively edited samples produced by a French horn and a tenor saxophone) following either resynthesized speech or a short passage of music. Preceding contexts were "colored" by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of the filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.



Temporal-envelope constancy of speech in rooms and the perceptual weighting of frequency bands

A. J. Watkins and A. P. Raimond and S. J. Makin

J Acoust Soc Am  130  2777-88  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22087906

Three experiments measured constancy in speech perception, using natural-speech messages or noise-band vocoder versions of them. The eight vocoder bands had equally log-spaced center frequencies and the shapes of the corresponding "auditory" filters. Consequently, the bands had the temporal envelopes that arise in these auditory filters when the speech is played. The "sir" or "stir" test-words were distinguished by degrees of amplitude modulation, and played in the context: "next you'll get _ to click on." Listeners identified test-words appropriately, even in the vocoder conditions where the speech had a "noise-like" quality. Constancy was assessed by comparing the identification of test-words with low or high levels of room reflections across conditions where the context had either a low or a high level of reflections. Constancy was obtained with both the natural and the vocoded speech, indicating that the effect arises through temporal-envelope processing. Two further experiments assessed perceptual weighting of the different bands, both in the test word and in the context. The resulting weighting functions both increase monotonically with frequency, following the spectral characteristics of the test-word's [s]. It is suggested that these two weighting functions are similar because they both come about through the perceptual grouping of the test-word's bands.
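
A minimal noise-vocoder sketch in the spirit of the stimuli described above (Butterworth bands are a crude stand-in for the auditory-filter shapes the authors used; speech is assumed to be a float array sampled above 16 kHz):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=8000.0):
        edges = np.geomspace(f_lo, f_hi, n_bands + 1)   # log-spaced band edges
        noise = np.random.default_rng(0).standard_normal(speech.size)
        out = np.zeros_like(speech)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            env = np.abs(hilbert(sosfiltfilt(sos, speech)))  # band temporal envelope
            out += env * sosfiltfilt(sos, noise)             # envelope on a noise band
        return out

The output preserves each band's temporal envelope while replacing its fine structure with noise, which is what lets the authors attribute the constancy effect to temporal-envelope processing.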



Contrast constancy in natural scenes in shadow or direct light: A proposed role for contrast-normalisation (non-specific suppression) in visual cortex

J. S. Lauritzen and D. J. Tolhurst

Network  16  151-73  (2005)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=16411494

The range of contrasts in natural scenes is generally thought to far exceed the limited dynamic ranges of individual contrast-encoding neurons in the primary visual cortex. The visual system may employ gain-control mechanisms (Ohzawa et al. 1985) to compensate for the mismatch between the range of natural contrast energies and the limited dynamic range of visual neurons; one proposed mechanism is contrast normalisation or non-specific suppression (Heeger 1992a). This paper aims to evaluate the role of contrast normalisation in human contrast perception, using a computer model of primary visual cortex. The model uses orthogonal pairs of Gabor patches to simulate simple-cell receptive-fields to calculate local, band-limited contrast in a series of 50 digitised photographs of natural scenes. The average range of contrast energies in each image was 2.29 log units, while the "lifetime range" each model simple cell would see across all images was 2.98 log units. These ranges are greater than the dynamic range of real mammalian simple cells. Contrast normalisation (dividing contrast responses by the summed responses of all nearby neurons) reduces contrast ranges, perhaps sufficiently to match them to neurons' limited dynamic ranges. Comparison of images taken under diffuse and direct lighting conditions showed that contrast normalisation can sometimes match these conditions effectively. This may lead to perceptual contrast constancy in the face of spurious changes in contrast caused by natural environmental conditions.
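
A minimal sketch of the divisive normalisation step described above, in the sense of Heeger (1992) (the semi-saturation constant and the all-units pooling rule are generic choices, not the paper's fitted model):

    import numpy as np

    def contrast_normalise(energies, sigma=0.1):
        # energies: hypothetical array of local band-limited contrast
        # responses, e.g., squared Gabor-pair outputs at one image location.
        pool = energies.sum()                  # non-specific suppressive pool
        return energies / (sigma ** 2 + pool)  # divisive normalisation

Because the pool grows with overall local contrast, high-contrast regions are scaled down the most, compressing the range of responses toward a neuron's limited dynamic range.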



Perceptual distance and the constancy of size and stereoscopic depth

L. Kaufman and J. H. Kaufman and R. Noble and S. Edlund and S. Bai and T. King

Spat Vis  19  439-57  (2006)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=17131650

The relationship between distance and size perception is unclear because of conflicting results of tests investigating the size-distance invariance hypothesis (SDIH), according to which perceived size is proportional to perceived distance. We propose that response bias with regard to measures of perceived distance is at the root of the conflict. Rather than employ the usual method of magnitude estimation, the bias-free two-alternative forced choice (2AFC) method was used to determine the precision (1/sigma) of discriminating depth at different distances. The results led us to define perceptual distance as a bias free power function of physical distance, with an exponent of approximately 0.5. Similar measures involving size differences among stimuli of equal angular size yield the same power function of distance. In addition, size discrimination is noisier than depth discrimination, suggesting that distance information is processed prior to angular size. Size constancy implies that the perceived size is proportional to perceptual distance. Moreover, given a constant relative disparity, depth constancy implies that perceived depth is proportional to the square of perceptual distance. However, the function relating the uncertainties of depth and of size discrimination to distance is the same. Hence, depth and size constancy may be accounted for by the same underlying law.
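
In symbols (notation mine, restating the quantitative claims above): with D the physical distance, D' the perceptual distance, theta a fixed angular size, and delta a fixed relative disparity,

    \[ D' \propto D^{0.5}, \qquad S' \propto \theta \, D', \qquad d' \propto \delta \, {D'}^{2}, \]

where S' is perceived size and d' perceived depth. The square on D' follows from the usual stereo geometry, in which the depth corresponding to a fixed relative disparity grows with the square of viewing distance.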




Selective visual attention ensures constancy of sensory representations: testing the influence of perceptual load and spatial competition

D. Wegener and F. O. Galashan and D. N. Markowski and A. K. Kreiter

Vision Res  46  3563-74  (2006)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=16879852

We report findings from several variants of a psychophysical experiment using an acceleration detection task in which we tested predictions derived from recent neurophysiological data obtained from monkey area MT. The task was designed as a Posner paradigm and required subjects to detect the speed-up of a moving bar, cued with 75% validity. Displays varied according to the number of simultaneously presented objects, their spatial distance, and the difficulty of the task. All data obtained under different levels of competition with multiple objects were compared to a corresponding condition, in which only a single moving bar was presented in the absence of any interfering distracter object. For attended objects, subjects did not show any difference in their ability to detect accelerations, regardless of the strength of inter-object competition or spatial distance. This finding was consistent in all of the experiments, and was even obtained when the acceleration was made hardly detectable. In contrast, increasing competitive interactions, either by increasing the number of objects or their spatial proximity, resulted in a strong reduction of performance for non-attended objects. The findings support current noise reduction models and suggest that attention adjusts neuronal processing to ensure a constant sensory representation of the attended object as if this object were the only one in the scene.



Perceptual learning depends on perceptual constancy

P. Garrigan and P. J. Kellman

Proc Natl Acad Sci U S A  105  2248-53  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=18250303

Perceptual learning refers to experience-induced improvements in the pick-up of information. Perceptual constancy describes the fact that, despite variable sensory input, perceptual representations typically correspond to stable properties of objects. Here, we show evidence of a strong link between perceptual learning and perceptual constancy: Perceptual learning depends on constancy-based perceptual representations. Perceptual learning may involve changes in early sensory analyzers, but such changes may in general be constrained by categorical distinctions among the high-level perceptual representations to which they contribute. Using established relations of perceptual constancy and sensory inputs, we tested the ability to discover regularities in tasks that dissociated perceptual and sensory invariants. We found that human subjects could learn to classify based on a perceptual invariant that depended on an underlying sensory invariant but could not learn the identical sensory invariant when it did not correlate with a perceptual invariant. These results suggest that constancy-based representations, known to be important for thought and action, also guide learning and plasticity.



Lightness constancy and illumination discounting

A. D. Logvinenko and R. Tokunaga

Atten Percept Psychophys  73  1886-902  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21688072

Contrary to the implication of the term "lightness constancy", asymmetric lightness matching has never been found to be perfect unless the scene is highly articulated (i.e., contains a number of different reflectances). Also, lightness constancy has been found to vary for different observers, and an effect of instruction (lightness vs. brightness) has been reported. The elusiveness of lightness constancy presents a great challenge to visual science; we revisit these issues in the following experiment, which involved 44 observers in total. The stimuli consisted of a large sheet of black paper with a rectangular spotlight projected onto the lower half and 40 squares of various shades of grey printed on the upper half. The luminance ratio at the edge of the spotlight was 25, while that of the squares varied from 2 to 16. Three different instructions were given to observers: They were asked to find a square in the upper half that (i) looked as if it was made of the same paper as that on which the spotlight fell (lightness match), (ii) had the same luminance contrast as the spotlight edge (contrast match), or (iii) had the same brightness as the spotlight (brightness match). Observers made 10 matches of each of the three types. Great interindividual variability was found for all three types of matches. In particular, the individual Brunswik ratios were found to vary over a broad range (from .47 to .85). That is, lightness matches were found to be far from veridical. Contrast matches were also found to be inaccurate, being on average, underestimated by a factor of 3.4. Articulation was found to essentially affect not only lightness, but contrast and brightness matches as well. No difference was found between the lightness and luminance contrast matches. While the brightness matches significantly differed from the other matches, the difference was small. Furthermore, the brightness matches were found to be subject to the same interindividual variability and the same effect of articulation. This leads to the conclusion that inexperienced observers are unable to estimate both the brightness and the luminance contrast of the light reflected from real objects lit by real lights. None of our observers perceived illumination edges purely as illumination edges: A partial Gelb effect ("partial illumination discounting") always took place. The lightness inconstancy in our experiment resulted from this partial illumination discounting. We propose an account of our results based on the two-dimensionality of achromatic colour. We argue that large interindividual variations and the effect of articulation are caused by the large ambiguity of luminance ratios in the stimulus displays used in laboratory conditions.



Differential intrinsic bias of the 3-D perceptual environment and its role in shape constancy

A. Aznar-Casanova and M. S. Keil and M. Moreno and H. Supèr

Exp Brain Res  215  35-43  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21927826

In a three-dimensional (3-D) environment, sensory information is projected onto a 2-D retina, so the visual system needs additional spatial information to reconstruct the visual world accurately. However, the 3-D environment is not accurately represented in the brain; in particular, the perception of distances in depth is imprecise. It has been argued that the visual system has an intrinsic bias of visual space whereby targets located on the ground are perceived as lying on an implicit elevated surface. We studied how such an intrinsic bias of visual space affects shape constancy. We found that the projected shape of a semicircle can be explained by taking into account a differential implicit slant surface. The depth/width ratio, which is a measure for the shape of the stimulus, is overestimated for angular declinations smaller than ~60°, while it is underestimated for larger angular declinations. Our results are important for explaining shape constancy and may be important for understanding some perceptual illusions.



Perceptual constancy of texture roughness in the tactile system

T. Yoshioka and J. C. Craig and G. C. Beck and S. S. Hsiao

J Neurosci  31  17603-11  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22131421

Our tactual perception of roughness is independent of the manner in which we touch the surface. A brick surface feels rough no matter how slowly or how rapidly we move our fingers, despite the fluctuating sensory inputs that are transmitted to the finger. Current theories of roughness perception rely solely on inputs from the cutaneous afferents, which are highly affected by scan velocity and force. The question then is: how is roughness constancy achieved? To this end, we characterized subjects' perceived roughness in six scanning conditions. These included two modes of touch: direct touch, where the finger is in contact with the surface, and indirect touch, where the surface is scanned with a hand-held probe; and three scanning modes: active (moving the hand across a stationary surface), passive (moving the surface across a stationary hand), and pseudo-passive (the subject's hand is moved by the experimenter across a stationary surface). Here, we show that roughness constancy is preserved during active but not passive scanning, indicating that hand movement is necessary for roughness constancy in both direct and indirect touch. Roughness constancy is also preserved during pseudo-passive scanning, which stresses the importance of proprioceptive input. The results show that cutaneous input provides the signals necessary for roughness perception and that proprioceptive input resulting from hand movement, rather than a motor efference copy, is necessary to achieve roughness constancy. These findings have important implications in providing realistic sensory feedback for prosthetic-hand users.



Localization of visual and auditory stimuli during smooth pursuit eye movements

K. Königs and F. Bremmer

J Vis  10  8  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20884583

Humans move their eyes more often than their heart beats. Although these eye movements induce large retinal image shifts, we perceive our world as stable. Yet, this perceptual stability is not complete. A number of studies have shown that visual targets which are briefly presented during such eye movements are mislocalized in a characteristic manner. It is largely unknown, however, if auditory stimuli are also mislocalized, i.e., whether or not perception generalizes across senses and space is represented supramodally. In our current study, subjects were asked to localize brief visual and auditory stimuli that were presented during smooth pursuit in the dark. In addition, we measured auditory and visual detection thresholds. Confirming previous studies, perceived visual positions were shifted in the direction of the pursuit. This shift was stronger for the hemifield the eye was heading towards (foveopetal). Perceptual auditory space was compressed towards the pursuit target (ventriloquism effect). This perceptual error was slightly reduced during pursuit as compared to fixation and differed clearly from the mislocalization of visual targets. While we found an influence of pursuit on localization, we found no such effect on the detection of visual and auditory stimuli. Taken together, our results do not provide evidence for the hypothesis of a supramodal representation of space during active oculomotor behavior.



Visual suppression in the superior colliculus around the time of microsaccades

M. Rolfs and S. Ohl

J Neurophysiol  105  1-3  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21084681

Miniature eye movements jitter the retinal image unceasingly, raising the question of how perceptual continuity is achieved during visual fixation. Recent work discovered suppression of visual bursts in the superior colliculus around the time of microsaccades, tiny jerks of the eyes that support visual perception while gaze is fixed. This finding suggests that corollary discharge, supporting visual stability when rapid eye movements drastically shift the retinal image, may also exist for the smallest saccades.



Saccadic suppression of displacement in face of saccade adaptation

S. Klingenhoefer and F. Bremmer

Vision Res  51  881-9  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21163288

Saccades challenge visual perception since they induce large shifts of the image on the retina. Nevertheless, we perceive the outer world as being stable. The saccadic system can also rapidly adapt to changes in the environment (saccadic adaptation). In such a case, a dissociation is introduced between a driving visual signal (the original saccade target) and a motor output (the adapted saccade vector). The question arises of how saccadic adaptation interferes with perceptual visual stability. In order to answer this question, we engaged human subjects in a saccade adaptation paradigm and interspersed trials in which the saccade target was displaced perisaccadically to a random position. In these trials subjects had to report on their perception of displacements of the saccade target. Subjects were tested in two conditions. In the 'blank' condition, the saccade target was briefly blanked after the end of the saccade. In the 'no-blank' condition the target was permanently visible. Confirming previous findings, the visual system was rather insensitive to displacements of the saccade target in an unadapted state, an effect termed saccadic suppression of displacement (SSD). In all adaptation conditions, we found spatial perception to correlate with the adaptive changes in saccade landing site. In contrast, small changes in saccade amplitude that occurred on a trial-by-trial basis did not correlate with perception. In the 'no-blank' condition we observed a prominent increase in suppression strength during backward adaptation. We discuss our findings in the context of existing theories on transsaccadic perceptual stability and its neural basis.



Perceptual classification in a rapidly changing environment

C. Summerfield and T. E. Behrens and E. Koechlin

Neuron  71  725-36  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21867887

Humans and monkeys can learn to classify perceptual information in a statistically optimal fashion if the functional groupings remain stable over many hundreds of trials, but little is known about categorization when the environment changes rapidly. Here, we used a combination of computational modeling and functional neuroimaging to understand how humans classify visual stimuli drawn from categories whose mean and variance jumped unpredictably. Models based on optimal learning (Bayesian model) and a cognitive strategy (working memory model) both explained unique variance in choice, reaction time, and brain activity. However, the working memory model was the best predictor of performance in volatile environments, whereas statistically optimal performance emerged in periods of relative stability. Bayesian and working memory models predicted decision-related activity in distinct regions of the prefrontal cortex and midbrain. These findings suggest that perceptual category judgments, like value-guided choices, may be guided by multiple controllers.



Effect of saccadic adaptation on localization of visual targets

H. Awater and D. Burr and M. Lappe and M. C. Morrone and M. E. Goldberg

J Neurophysiol  93  3605-14  (2005)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=15843478

Objects flashed briefly around the time of a saccadic eye movement are grossly mislocalized by human subjects, so they appear to be compressed toward the endpoint of the saccade. In this study, we investigate spatial localization during saccadic adaptation to examine whether the focus of compression tends toward the intended saccadic target or toward the endpoint of the actual (adapted) movement. We report two major results. First, the peri-saccadic focus of compression did not occur at the site of the initial saccadic target, but tended toward the actual landing site of the saccade. Second, and more surprisingly, we observed a large long-term perceptual distortion of space, lasting for hundreds of milliseconds. This distortion did not occur over the whole visual field but was limited to a local region of visual space around the saccade target, suggesting that saccadic adaptation induces a visuo-topic remapping of space. The results imply that the mechanisms controlling saccadic adaptation also affect perception of space and point to a strong perceptual plasticity coordinated with the well-documented plasticity of the motor system.



Spatiotemporal distortions of visual perception at the time of saccades

P. Binda and G. M. Cicchini and D. C. Burr and M. C. Morrone

J Neurosci  29  13147-57  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19846702

Both space and time are grossly distorted during saccades. Here we show that the two distortions are strongly linked, and that both could be a consequence of the transient remapping mechanisms that affect visual neurons perisaccadically. We measured perisaccadic spatial and temporal distortions simultaneously by asking subjects to report both the perceived spatial location of a perisaccadic vertical bar (relative to a remembered ruler), and its perceived timing (relative to two sounds straddling the bar). During fixation and well before or after saccades, bars were localized veridically in space and in time. In different epochs of the perisaccadic interval, temporal perception was subject to different biases. At about the time of the saccadic onset, bars were temporally mislocalized 50-100 ms later than their actual presentation and spatially mislocalized toward the saccadic target. Importantly, the magnitude of the temporal distortions co-varied with the spatial localization bias and the two phenomena had similar dynamics. Within a brief period about 50 ms before saccadic onset, stimuli were perceived with shorter latencies than at other delays relative to saccadic onset, suggesting that the perceived passage of time transiently inverted its direction. Based on this result we could predict the inversion of perceived temporal order for two briefly flashed visual stimuli. We developed a model that simulates the perisaccadic transient change of neuronal receptive fields, which predicts the reported temporal distortions well. The key aspects of the model are the dynamics of the "remapped" activity and the use of decoder operators that are optimal during fixation, but are not updated perisaccadically.



Directional remapping in tactile inter-finger apparent motion: a motion aftereffect study

S. Kuroki and J. Watanabe and K. Mabuchi and S. Tachi and S. Nishida

Exp Brain Res      (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22080151

Tactile motion provides critical information for perception and manipulation of objects in touch. Perceived directions of tactile motion are primarily defined in the environmental coordinate, which means they change drastically with body posture even when the same skin sensors are stimulated. Despite the ecological importance of this perceptual constancy, the sensory processing underlying tactile directional remapping remains poorly understood. The present study psychophysically investigated the mechanisms underlying directional remapping in human tactile motion processing by examining whether finger posture modulates the direction of the tactile motion aftereffect (MAE) induced by inter-finger apparent motions. We introduced conflicts in the adaptation direction between somatotopic and environmental spaces by having participants change their finger posture between adaptation and test phases. In a critical condition, they touched stimulators with crossed index and middle fingers during adaptation but with uncrossed fingers during tests. Since the adaptation effect was incongruent between the somatotopic and environmental spaces, the direction of the MAE reflects the coordinate frame of tactile motion processing. The results demonstrated that the tactile MAE was induced in accordance with the motion direction determined by the environmental rather than the somatotopic space. In addition, it was found that, although the physical adaptation of the test fingers was unchanged, the tactile MAE disappeared when the adaptation stimuli were vertically aligned or when subjective motion perception was suppressed during adaptation. We also found that the tactile MAE, measured with our procedure, did not transfer across different hands, which implies that the observed MAEs mainly reflect neural adaptations occurring within sensor-specific, tactile-specific processing. The present findings provide a novel behavioral method to analyze the neural representation for directional remapping of tactile motion within tactile sensory processing in the human brain.



Dynamic, object-based remapping of visual features in trans-saccadic perception

D. Melcher

J Vis  8  2.1-17  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19146303

Saccadic eye movements can dramatically change the location in which an object is projected onto the retina. One mechanism that might potentially underlie the perception of stable objects, despite the occurrence of saccades, is the "remapping" of receptive fields around the time of saccadic eye movements. Here we examined two possible models of trans-saccadic remapping of visual features: (1) spatiotopic coordinates that remain constant across saccades or (2) an object-based remapping in retinal coordinates. We used form adaptation to test "object" and "space" based predictions for an adapter that changed spatial and/or retinal location due to eye movements, object motion or manual displacement using a computer mouse. The predictability and speed of the object motion was also manipulated. The main finding was that maximum transfer of the form aftereffect in retinal coordinates occurred when there was a saccade and when the object motion was attended and predictable. A small transfer was also found when observers moved the object across the screen using a computer mouse. The overall pattern of results is consistent with the theory of object-based remapping for salient stimuli. Thus, the active updating of the location and features of attended objects may play a role in perceptual stability.



Perceptual evidence for saccadic updating of color stimuli

M. Wittenberg and F. Bremmer and T. Wachtler

J Vis  8  9.1-9  (2008)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19146310

In retinotopically organized areas of the macaque visual cortex, neurons have been found that shift their receptive fields before a saccade to their postsaccadic position. This saccadic remapping has been interpreted as a mechanism contributing to perceptual stability of space across eye movements. So far, there is only limited evidence for similar mechanisms that support perceptual stability of visual objects by remapping the representation of object features across saccades. In our present study, we investigated whether color stimuli presented before a saccade affected the perception of color stimuli at the same spatial position after the saccade. We found that the perceived hue of a postsaccadically flashed stimulus was systematically shifted toward the color of a presaccadically presented stimulus. This finding would be in accordance with a saccadic remapping process that preactivates, prior to a saccade, the neurons that represent a stimulus after the saccade at this very location. Such a remapping of visual object features could contribute to the stable perception of the visual world across saccades.



Perceptual stability during dramatic changes in olfactory bulb activation maps and dramatic declines in activation amplitudes

R. Homma and L. B. Cohen and E. K. Kosmidis and S. L. Youngentob

Eur J Neurosci  29  1027-34  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19291227

We compared the concentration dependence of the ability of rats to identify odorants with the calcium signals in the nerve terminals of the olfactory receptor neurons. Although identification performance decreased with concentrations both above and below the training stimuli, it remained well above random at all concentrations tested (between 0.0006% and 35% of saturated vapor). In contrast, the calcium signals in the same awake animals were much smaller than their maximum values at odorant concentrations <1% of saturated vapor. In addition, maps of activated glomeruli changed dramatically as odorant concentration was reduced. Thus perceptual stability exists in the face of dramatic changes in both the amplitude and the maps of the input to the olfactory bulb. The data for the concentration dependence of the response of the most sensitive glomeruli for each of five odorants were fitted with a Michaelis-Menten (Hill) equation. The fitted curves were extrapolated to odorant concentrations several orders of magnitude below those that produced the smallest observed signals and suggest that the calcium response at low odorant concentrations is >1000 times smaller than the response at saturating odorant concentrations. We speculate that only a few spikes in olfactory sensory neurons may be sufficient for correct odorant identification.
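
For reference, the Michaelis-Menten (Hill) form used for those fits is, in the usual notation (parameter names mine):

    \[ R(c) = R_{\max} \, \frac{c^{n}}{c^{n} + K^{n}}, \]

with R_max the saturating response, K the half-maximal concentration, and n the Hill coefficient (n = 1 gives the plain Michaelis-Menten curve).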



The relationship between saccadic suppression and perceptual stability

T. L. Watson and B. Krekelberg

Curr Biol  19  1040-3  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19481454

Introspection makes it clear that we do not see the visual motion generated by our saccadic eye movements. We refer to the lack of awareness of the motion across the retina that is generated by a saccade as saccadic omission [1]: the visual stimulus generated by the saccade is omitted from our subjective awareness. In the laboratory, saccadic omission is often studied by investigating saccadic suppression, the reduction in visual sensitivity before and during a saccade (see Ross et al. [2] and Wurtz [3] for reviews). We investigated whether perceptual stability requires that a mechanism like saccadic suppression removes perisaccadic stimuli from visual processing to prevent their presumed harmful effect on perceptual stability [4, 5]. Our results show that a stimulus that undergoes saccadic omission can nevertheless generate a shape contrast illusion. This illusion can be generated when the inducer and test stimulus are separated in space and is therefore thought to be generated at a later stage of visual processing [6]. This shows that perceptual stability is attained without removing stimuli from processing and suggests a conceptually new view of perceptual stability in which perisaccadic stimuli are processed by the early visual system, but these signals are prevented from reaching awareness at a later stage of processing.



How actions alter sensory processing: reafference in the vestibular system

K. E. Cullen and J. X. Brooks and S. G. Sadeghi

Ann N Y Acad Sci  1164  29-36  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19645877

Our vestibular organs are simultaneously activated by our own actions as well as by stimulation from the external world. The ability to distinguish sensory inputs that are a consequence of our own actions (vestibular reafference) from those that result from changes in the external world (vestibular exafference) is essential for perceptual stability and accurate motor control. Recent work in our laboratory has focused on understanding how the brain distinguishes between vestibular reafference and exafference. Single-unit recordings were made in alert rhesus monkeys during passive and voluntary (i.e., active) head movements. We found that neurons in the first central stage of vestibular processing (vestibular nuclei), but not the primary vestibular afferents, can distinguish between active and passive movements. In order to better understand how neurons differentiate active from passive head motion, we systematically tested neuronal responses to different combinations of passive and active motion resulting from rotation of the head-on-body and/or head-and-body in space. We found that during active movements, a cancellation signal was generated when the activation of proprioceptors matched the motor-generated expectation.
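
A toy sketch of the cancellation computation described above (mine, not the authors' model): the expected sensory consequence of the motor command is subtracted from the total vestibular afference, but only when proprioceptive feedback matches the motor-generated expectation.

    import numpy as np

    def estimate_exafference(afference, predicted_reafference,
                             proprio, expected_proprio, tol=0.1):
        # Active movement: proprioception confirms the motor prediction,
        # so subtract the predicted reafference to isolate exafference.
        if np.allclose(proprio, expected_proprio, atol=tol):
            return afference - predicted_reafference
        # Passive movement: no matching prediction, pass the signal through.
        return afference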



The geometry of perisaccadic visual perception

A. Richard and J. Churan and D. E. Guitton and C. C. Pack

J Neurosci  29  10160-70  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19675250

Our ability to explore our surroundings requires a combination of high-resolution vision and frequent rotations of the visual axis toward objects of interest. Such gaze shifts are themselves a source of powerful retinal stimulation, and so the visual system appears to have evolved mechanisms to maintain perceptual stability during movements of the eyes in space. The mechanisms underlying this perceptual stability can be probed in the laboratory by briefly presenting a stimulus around the time of a saccadic eye movement and asking subjects to report its position. Under such conditions, there is a systematic misperception of the probes toward the saccade end point. This perisaccadic compression of visual space has been the subject of much research, but few studies have attempted to relate it to specific brain mechanisms. Here, we show that the magnitude of perceptual compression for a wide variety of probe stimuli and saccade amplitudes is quantitatively predicted by a simple heuristic model based on the geometry of retinotopic representations in the primate brain. Specifically, we propose that perisaccadic compression is determined by the distance between the probe and saccade end point on a map that has a logarithmic representation of visual space, similar to those found in numerous cortical and subcortical visual structures. Under this assumption, the psychophysical data on perisaccadic compression can be appreciated intuitively by imagining that, around the time of a saccade, the brain confounds nearby oculomotor and sensory signals while attempting to localize the position of objects in visual space.
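
A sketch of the geometry invoked above, assuming the standard logarithmic retinotopy rather than the authors' exact fitted model: a point at retinal eccentricity x maps to cortical position

    \[ u(x) = k \, \ln\!\left(1 + \frac{x}{a}\right), \]

so a fixed distance on the map spans ever larger regions of visual space at greater eccentricities. If perisaccadic localization mixes probe and saccade-target signals on such a map, decoding the mixture back into visual coordinates pulls reported positions toward the saccade end point, with a magnitude governed by the probe-to-end-point distance on the map.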



Neural dynamics of saccadic suppression

F. Bremmer and M. Kubischik and K.-P. Hoffmann and B. Krekelberg

J Neurosci  29  12374-83  (2009)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=19812313

We make fast, ballistic eye movements called saccades more often than our heart beats. Although every saccade causes a large movement of the image of the environment on our retina, we never perceive this motion. This aspect of perceptual stability is often referred to as saccadic suppression: a reduction of visual sensitivity around the time of saccades. Here, we investigated the neural basis of this perceptual phenomenon with extracellular recordings from awake, behaving monkeys in the middle temporal, medial superior temporal, ventral intraparietal, and lateral intraparietal areas. We found that, in each of these areas, the neural response to a visual stimulus changes around an eye movement. The perisaccadic response changes are qualitatively different in each of these areas, suggesting that they do not arise from a change in a common input area. Importantly, our data show that the suppression in the dorsal stream starts well before the eye movement. This clearly shows that the suppression is not just a consequence of the changes in visual input during the eye movement but rather must involve a process that actively modulates neural activity just before a saccade.



Human thalamus contributes to perceptual stability across eye movements

F. Ostendorf and D. Liebermann and C. J. Ploner

Proc Natl Acad Sci U S A  107  1229-34  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20080657

We continuously move our eyes when we inspect a visual scene. Although this leads to a rapid succession of discontinuous and fragmented retinal snapshots, we perceive the world as stable and coherent. Neural mechanisms underlying visual stability may depend on internal monitoring of planned or ongoing eye movements. In the macaque brain, a pathway for the transmission of such signals has been identified that is relayed by central thalamic nuclei. Here, we studied a possible role of this pathway for perceptual stability in a patient with a selective lesion affecting homologous regions of the human thalamus. Compared with controls, the patient exhibited a unilateral deficit in monitoring his eye movements. This deficit manifested as a systematic inaccuracy both in successive eye movements and in judging the locations of visual stimuli. In addition, perceptual consequences of oculomotor targeting errors were erroneously attributed to external stimulus changes. These findings show that the human brain draws on transthalamic monitoring signals to bridge the perceptual discontinuities generated by our eye movements.



Microsaccadic suppression of visual bursts in the primate superior colliculus

Z. M. Hafed and R. J. Krauzlis

J Neurosci  30  9542-7  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20631182

Saccadic suppression, a behavioral phenomenon in which perceptual thresholds are elevated before, during, and after saccadic eye movements, is an important mechanism for maintaining perceptual stability. However, even during fixation, the eyes never remain still, but undergo movements including microsaccades, drift, and tremor. The neural mechanisms for mediating perceptual stability in the face of these "fixational" movements are not fully understood. Here, we investigated one component of such mechanisms: a neural correlate of microsaccadic suppression. We measured the size of short-latency, stimulus-induced visual bursts in superior colliculus neurons of adult, male rhesus macaques. We found that microsaccades caused approximately 30% suppression of the bursts. Suppression started approximately 70 ms before microsaccade onset and ended approximately 70 ms after microsaccade end, a time course similar to behavioral measures of this phenomenon in humans. We also identified a new behavioral effect of microsaccadic suppression on saccadic reaction times, even for continuously presented, suprathreshold visual stimuli. These results provide evidence that the superior colliculus is part of the mechanism for suppressing self-generated visual signals during microsaccades that might otherwise disrupt perceptual stability.
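
The reported depth and timing translate into a simple suppression window. The box-car shape below is an assumption (the real profile is presumably smooth); the numbers are the approximate values from the abstract.

    def microsaccadic_gain(t_ms, ms_onset=0.0, ms_end=20.0,
                           depth=0.30, margin_ms=70.0):
        """Multiplicative gain on an SC visual burst at time t_ms
        relative to microsaccade onset (box-car sketch)."""
        if ms_onset - margin_ms <= t_ms <= ms_end + margin_ms:
            return 1.0 - depth   # ~30% suppression near the microsaccade
        return 1.0

    print(microsaccadic_gain(-50.0))  # burst 50 ms before onset -> 0.7
    print(microsaccadic_gain(200.0))  # burst well after the end -> 1.0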



Summation of visual motion across eye movements reflects a nonspatial decision mechanism

A. P. Morris and C. C. Liu and S. J. Cropper and J. D. Forte and B. Krekelberg and J. B. Mattingley

J Neurosci  30  9821-30  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20660264

Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., "spatiotopic" receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
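
The statistical (nonspatial) account is easy to quantify. Assuming the decision stage simply gets two independent chances to detect the motion, probability summation alone predicts a benefit, regardless of where the two signals fell:

    def p_two_independent_looks(p_single):
        """Detection probability given two independent signals."""
        return 1.0 - (1.0 - p_single) ** 2

    for p in (0.5, 0.6, 0.7):
        print(p, "->", round(p_two_independent_looks(p), 3))
    # 0.5 -> 0.75, 0.6 -> 0.84, 0.7 -> 0.91: the benefit is the same
    # whether the two signals share a location or a direction, which is
    # the pattern the authors report.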



Stability of the visual world during eye drift

M. Poletti and C. Listorti and M. Rucci

J Neurosci  30  11143-50  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20720121

We are normally not aware of the microscopic eye movements that keep the retinal image in motion during visual fixation. In principle, perceptual cancellation of the displacements of the retinal stimulus caused by fixational eye movements could be achieved either by means of motor/proprioceptive information or by inferring eye movements directly from the retinal stimulus. In this study, we examined the mechanisms underlying visual stability during ocular drift, the primary source of retinal image motion during fixation on a stationary scene. By using an accurate system for gaze-contingent display control, we decoupled the eye movements of human observers from the changes in visual input that they normally cause. We show that the visual system relies on the spatiotemporal stimulus on the retina, rather than on extraretinal information, to discard the motion signals resulting from ocular drift. These results have important implications for the establishment of stable visual representations in the brain and argue that failure to visually determine eye drift contributes to well known motion illusions such as autokinesis and induced movement.
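
The decoupling at the heart of a gaze-contingent paradigm can be sketched in a few lines; the single gain parameter below is an illustrative device, not the authors' exact manipulation. With gain 0 the stimulus stays put on the screen (normal drift-induced retinal motion); with gain 1 it follows the eye, cancelling that retinal motion.

    def stimulus_on_screen(world_pos_deg, eye_pos_deg, gain):
        """Redraw the stimulus each frame as a function of eye position."""
        return world_pos_deg + gain * eye_pos_deg

    for eye in (0.0, 0.1, 0.25, 0.4):            # measured ocular drift (deg)
        normal = stimulus_on_screen(2.0, eye, gain=0.0)
        stabilized = stimulus_on_screen(2.0, eye, gain=1.0)
        # retinal position = screen position - eye position
        print(round(normal - eye, 2), round(stabilized - eye, 2))
    # Left column drifts on the retina; right column stays constant.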



Electrophysiological correlates of inter- and intrahemispheric saccade-related updating of visual space

J. Peterburs and K. Gajda and K.-P. Hoffmann and I. Daum and C. Bellebaum

Behav Brain Res  216  496-504  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20797412

The process of spatial updating is crucial for maintaining perceptual stability despite the gross and frequent displacements of the retinal image that accompany saccadic eye movements. Efference copies of motor commands are used to update retinal coordinates across saccades. The present study investigated neural correlates of saccadic updating in a perceptual context with regard to temporal dynamics and modulation by intra- versus interhemispheric transfer of updating-related information. Twenty-two subjects engaged in a perceptual localization task which required trans-saccadic spatial updating while event-related potentials (ERPs) were recorded. In accordance with previous studies, post-saccadic perceptual localization of stimuli presented before a saccade was less accurate when relying on efference copy signals (i.e. updating was required) as compared to a control condition not involving updating. Updating-related ERP components emerged before and after saccade onset. There was no clear transfer-dependent modulation of the presaccadic component. A negative deflection between 30 and 70 ms after saccade onset was most pronounced for rightward saccades, and when intrahemispheric transfer was required. A slower positive deflection starting about 170-230 ms after saccade onset had a shorter latency for leftward than for rightward saccades and was not modulated by transfer. In accordance with previous work, this relative positivity is thought to reflect sensory memory, whereas the earlier negative deflection can be more directly linked to the updating process itself.



Computational models of spatial updating in peri-saccadic perception

F. H. Hamker and M. Zirnsak and A. Ziesche and M. Lappe

Philos Trans R Soc Lond B Biol Sci  366  554-71  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21242143

Perceptual phenomena that occur around the time of a saccade, such as peri-saccadic mislocalization or saccadic suppression of displacement, have often been linked to mechanisms of spatial stability. These phenomena are usually regarded as errors in processes of trans-saccadic spatial transformations and they provide important tools to study these processes. However, a true understanding of the underlying brain processes that participate in the preparation for a saccade and in the transfer of information across it requires a closer, more quantitative approach that links different perceptual phenomena with each other and with the functional requirements of ensuring spatial stability. We review a number of computational models of peri-saccadic spatial perception that provide steps in that direction. Although most models are concerned with only specific phenomena, some generalization and interconnection between them can be obtained from a comparison. Our analysis shows how different perceptual effects can coherently be brought together and linked back to neuronal mechanisms on the way to explaining vision across saccades.



Spatiotopic coding and remapping in humans

D. C. Burr and M. C. Morrone

Philos Trans R Soc Lond B Biol Sci  366  504-15  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21242139

How our perceptual experience of the world remains stable and continuous in the face of frequent, rapid eye movements is still a mystery. This review discusses some recent progress towards understanding the neural and psychophysical processes that accompany these eye movements. We first report recent evidence from imaging studies in humans showing that many brain regions are tuned in spatiotopic coordinates, but only for items that are actively attended. We then describe a series of experiments measuring the spatial and temporal phenomena that occur around the time of saccades, and discuss how these could be related to visual stability. Finally, we introduce the concept of the spatio-temporal receptive field to describe the local spatiotopicity exhibited by many neurons when the eyes move.



Neuronal mechanisms for visual stability: progress and problems

R. H. Wurtz and W. M. Joiner and R. A. Berman

Philos Trans R Soc Lond B Biol Sci  366  492-503  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21242138

How our vision remains stable in spite of the interruptions produced by saccadic eye movements has been a repeatedly revisited perceptual puzzle. The major hypothesis is that a corollary discharge (CD) or efference copy signal provides information that the eye has moved, and this information is used to compensate for the motion. There has been progress in the search for neuronal correlates of such a CD in the monkey brain, the best animal model of the human visual system. In this article, we briefly summarize the evidence for a CD pathway to frontal cortex, and then consider four questions on the relation of neuronal mechanisms in the monkey brain to stable visual perception. First, how can we determine whether the neuronal activity is related to stable visual perception? Second, is the activity a possible neuronal correlate of the proposed transsaccadic memory hypothesis of visual stability? Third, are the neuronal mechanisms modified by visual attention and does our perceived visual stability actually result from neuronal mechanisms related primarily to the central visual field? Fourth, does the pathway from superior colliculus through the pulvinar nucleus to visual cortex contribute to visual stability through suppression of the visual blur produced by saccades?



Internal models of self-motion: computations that suppress vestibular reafference in early vestibular processing

K. E. Cullen and J. X. Brooks and M. Jamali and J. Carriot and C. Massot

Exp Brain Res  210  377-88  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21286693

In everyday life, vestibular sensors are activated by both self-generated and externally applied head movements. The ability to distinguish inputs that are a consequence of our own actions (i.e., active motion) from those that result from changes in the external world (i.e., passive or unexpected motion) is essential for perceptual stability and accurate motor control. Recent work has made progress toward understanding how the brain distinguishes between these two kinds of sensory inputs. We have performed a series of experiments in which single-unit recordings were made from vestibular afferents and central neurons in alert macaque monkeys during rotation and translation. Vestibular afferents showed no differences in firing variability or sensitivity during active movements when compared to passive movements. In contrast, the analyses of neuronal firing rates revealed that neurons at the first central stage of vestibular processing (i.e., in the vestibular nuclei) were effectively less sensitive to active motion. Notably, however, this ability to distinguish between active and passive motion was not a general feature of early central processing, but rather was a characteristic of a distinct group of neurons known to contribute to postural control and spatial orientation. Our most recent studies have addressed how vestibular and proprioceptive inputs are integrated in the vestibular cerebellum, a region likely to be involved in generating an internal model of self-motion. We propose that this multimodal integration within the vestibular cerebellum is required for eliminating self-generated vestibular information from the subsequent computation of orientation and posture control at the first central stage of processing.



Anticipatory saccade target processing and the presaccadic transfer of visual features

M. Zirnsak and R. G. K. Gerhards and R. Kiani and M. Lappe and F. H. Hamker

J Neurosci  31  17887-91  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=22159103

As we shift our gaze to explore the visual world, information enters cortex in a sequence of successive snapshots, interrupted by phases of blur. Our experience, in contrast, appears like a movie of a continuous stream of objects embedded in a stable world. This perception of stability across eye movements has been linked to changes in spatial sensitivity of visual neurons anticipating the upcoming saccade, often referred to as shifting receptive fields (Duhamel et al., 1992; Walker et al., 1995; Umeno and Goldberg, 1997; Nakamura and Colby, 2002). How exactly these receptive field dynamics contribute to perceptual stability is currently not clear. Anticipatory receptive field shifts toward the future, postsaccadic position may bridge the transient perisaccadic epoch (Sommer and Wurtz, 2006; Wurtz, 2008; Melcher and Colby, 2008). Alternatively, a presaccadic shift of receptive fields toward the saccade target area (Tolias et al., 2001) may serve to focus visual resources onto the most relevant objects in the postsaccadic scene (Hamker et al., 2008). In this view, shifts of feature detectors serve to facilitate the processing of the peripheral visual content before it is foveated. While this conception is consistent with previous observations on receptive field dynamics and on perisaccadic compression (Ross et al., 1997; Morrone et al., 1997; Kaiser and Lappe, 2004), it predicts that receptive fields beyond the saccade target shift toward the saccade target rather than in the direction of the saccade. We have tested this prediction in human observers via the presaccadic transfer of the tilt-aftereffect (Melcher, 2007).



The spatial distribution of receptive field changes in a model of peri-saccadic perception: predictive remapping and shifts towards the saccade target

M. Zirnsak and M. Lappe and F. H. Hamker

Vision Res  50  1328-37  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20152853

At the time of an impending saccade, receptive fields (RFs) undergo dynamic changes; that is, their spatial profile is altered. This phenomenon has been observed in several monkey visual areas. Although the link to eye movements is obvious, neither the exact pattern of these changes nor their function is fully clear. Several RF shifts have been interpreted in terms of predictive remapping mediating visual stability. In particular, even prior to saccade onset some cells become responsive to stimuli presented in their future, post-saccadic RF. In visual area V4, however, the overall effect of RF dynamics consists of a shrinkage and shift of RFs towards the saccade target. These observations have been linked to pre-saccadically enhanced processing of the future fixation. In order to better understand these seemingly different outcomes, we analyzed the RF shifts predicted by a recently proposed computational model of peri-saccadic perception (Hamker, Zirnsak, Calow, & Lappe, 2008). This model unifies peri-saccadic compression, pre-saccadic attention shifts, and peri-saccadic receptive field dynamics in a common framework of oculomotor reentry signals in extrastriate visual cortical maps. According to the simulations that we present in the current paper, a spatially selective oculomotor feedback signal leads to RF dynamics that are consistent both with the observations made in studies of predictive remapping and with the shifts towards the saccade target. Thus, the seemingly distinct experimental observations could be grounded in the same neural mechanism, which produces different RF dynamics depending on the location of the RF in visual space.
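
A toy version of the model's core operation: multiplying a Gaussian feedforward RF by a Gaussian gain field centred on the saccade target yields an effective RF that is both narrower and shifted toward the target, the V4-like pattern described above. The widths and positions below are hypothetical.

    def gaussian_product(rf_center, rf_sigma, gain_center, gain_sigma):
        """Centre and width of the product of two 1-D Gaussians, i.e.
        of the feedforward RF times the oculomotor feedback gain."""
        w1, w2 = 1.0 / rf_sigma ** 2, 1.0 / gain_sigma ** 2
        center = (w1 * rf_center + w2 * gain_center) / (w1 + w2)
        sigma = (1.0 / (w1 + w2)) ** 0.5
        return center, sigma

    # RF at 20 deg, saccade target at 10 deg (illustrative values):
    center, sigma = gaussian_product(20.0, 5.0, 10.0, 8.0)
    print(round(center, 2), round(sigma, 2))  # -> 17.19 4.24
    # The RF centre moves toward the target and the RF shrinks.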



Visual stability based on remapping of attention pointers

P. Cavanagh and A. R. Hunt and A. Afraz and M. Rolfs

Trends Cogn Sci  14  147-53  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20189870

When we move our eyes, we easily keep track of where relevant things are in the world. Recent proposals link this stability to the shifting of receptive fields of neurons in eye movement and attention control areas. Reports of 'spatiotopic' visual aftereffects have also been taken to support this shifting connectivity even at an early level, but these results have been challenged. Here, the process of updating visual location is described as predictive shifts of location 'pointers' to attended targets, analogous to predictive activation seen cross-modally. We argue that these location pointers, the core operators of spatial attention, are linked to identity information and that such a link is necessary to establish a workable visual architecture and to explain frequently reported positive spatiotopic biases.



Robustness of the retinotopic attentional trace after eye movements

J. D. Golomb and V. Z. Pulido and A. R. Albrecht and M. M. Chun and J. A. Mazer

J Vis  10  19.1-12  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20377296

With each eye movement, the image received by the visual system changes drastically. To maintain stable spatiotopic (world-centered) representations, the relevant retinotopic (eye-centered) coordinates must be continually updated. Although updating or remapping of visual scene representations can occur very rapidly, J. D. Golomb, M. M. Chun, and J. A. Mazer (2008) demonstrated that representations of sustained attention update more slowly than the remapping literature would predict; attentional benefits at previously attended retinotopic locations linger after completion of the saccade, even when this location is no longer behaviorally relevant. The present study explores the robustness of this "retinotopic attentional trace." We report significant retinotopic facilitation despite attempts to eliminate or reduce it by enhancing spatiotopic reference frames with permanent visual cues in the stimulus display and by introducing a different task where the attended location is the saccade target itself. Our results support and extend our earlier model of native retinotopically organized salience maps that must be dynamically updated to reflect the task-relevant spatiotopic location with each saccade. Consistent with the idea that attentional facilitation arises from persistent, recurrent neural activity, it takes measurable time for this facilitation to decay, leaving behind a retinotopic attentional trace after the saccade has been executed, regardless of conflicting task demands.



Spatial updating in monkey superior colliculus in the absence of the forebrain commissures: dissociation between superficial and intermediate layers

C. A. Dunn and N. J. Hall and C. L. Colby

J Neurophysiol  104  1267-85  (2010)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=20610793

In previous studies, we demonstrated that the forebrain commissures are the primary pathway for remapping from one hemifield to the other. Nonetheless, remapping across hemifields in the lateral intraparietal area (LIP) is still present in split-brain monkeys. This finding indicates that a subcortical structure must contribute to remapping. The primary goal of the current study was to characterize remapping activity in the superior colliculus (SC) in intact and split-brain monkeys. We recorded neurons in both the superficial and intermediate layers of the SC. We found that across-hemifield remapping was reduced in magnitude and delayed compared with within-hemifield remapping in the intermediate layers of the SC in split-brain monkeys. These results mirror our previous findings in area LIP. In contrast, we found no difference in the magnitude or latency for within- compared with across-hemifield remapping in the superficial layers. At the behavioral level, we compared the performance of the monkeys on two conditions of a double-step task. When the second target remained within a single hemifield, performance remained accurate. When the second target had to be updated across hemifields, the split-brain monkeys' performance was impaired. Remapping activity in the intermediate layers was correlated with the accuracy and latency of the second saccade during across-hemifield trials. Remapping in the superficial layers was correlated with the latency of the second saccade during both within- and across-hemifield trials. The differences between the layers suggest that different circuits underlie remapping in the superficial and intermediate layers of the superior colliculus.



Saccadic foveation of a moving visual target in the rhesus monkey

J. Fleuriet and S. Hugues and L. Perrinet and L. Goffart

J Neurophysiol  105  883-95  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21160007

When generating a saccade toward a moving target, the target displacement that occurs during the period spanning from its detection to the saccade end must be taken into account to accurately foveate the target and to initiate its pursuit. Previous studies have shown that these saccades are characterized by a lower peak velocity and a prolonged deceleration phase. In some cases, a second peak in eye velocity appears during the deceleration phase, presumably reflecting the late influence of a mechanism that compensates for the target displacement occurring before saccade end. The goal of this work was to further determine, in the head-restrained monkey, the dynamics of this putative compensatory mechanism. A step-ramp paradigm, in which the target motion was orthogonal to a target step occurring along the primary axes, was used to estimate, from the generated saccades, a component induced by the target step and another induced by the target motion. The resulting oblique saccades were compared with saccades to a static target with matched horizontal and vertical amplitudes. This approach made it possible to estimate the time taken for visual motion-related signals to update the programming and execution of saccades. The amplitude of the motion-related component was slightly hypometric, with an undershoot that increased with target speed. Moreover, it matched the eccentricity of the target 40-60 ms before saccade end. The lack of a significant difference in the delay between the onsets of the horizontal and vertical components for saccades directed toward a static target versus those aimed at a moving target calls into question the late influence of the compensatory mechanism. The results are discussed within the framework of the "dual drive" and "remapping" hypotheses.
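
The reported compensation has a simple arithmetic core: the motion-related component of the saccade matches the target's eccentricity sampled roughly 40-60 ms before saccade end. A sketch with illustrative numbers:

    def motion_component_deg(target_speed_deg_s, saccade_end_ms, lag_ms=50.0):
        """Eccentricity of a constant-velocity ramp target at the
        sampling time (saccade end minus the ~40-60 ms lag)."""
        return target_speed_deg_s * (saccade_end_ms - lag_ms) / 1000.0

    # 20 deg/s ramp, saccade ending 250 ms after motion onset:
    print(motion_component_deg(20.0, 250.0))  # -> 4.0 deg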



Predictive remapping of attention across eye movements

M. Rolfs and D. Jonikaitis and H. Deubel and P. Cavanagh

Nat Neurosci  14  252-6  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21186360

Many cells in retinotopic brain areas increase their activity when saccades (rapid eye movements) are about to bring stimuli into their receptive fields. Although previous work has attempted to look at the functional correlates of such predictive remapping, no study has explicitly tested for better attentional performance at the future retinal locations of attended targets. We found that, briefly before the eyes start moving, attention drawn to the targets of upcoming saccades also shifted to those retinal locations that the targets would cover once the eyes had moved, facilitating future movements. This suggests that presaccadic visual attention shifts serve to both improve presaccadic perceptual processing at the target locations and speed subsequent eye movements to their new postsaccadic locations. Predictive remapping of attention provides a sparse, efficient mechanism for keeping track of relevant parts of the scene when frequent rapid eye movements provoke retinal smear and temporal masking.



Remapping attention in multiple object tracking

P. D. L. Howe and T. Drew and Y. Pinto and T. S. Horowitz

Vision Res  51  489-95  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21236290

Which coordinate system do we use to track moving objects? In a previous study using smooth pursuit eye movements, we argued that targets are tracked in both retinal (retinotopic) and scene-centered (allocentric) coordinates (Howe, Pinto, & Horowitz, 2010). However, multiple object tracking typically also elicits saccadic eye movements, which may change how object locations are represented. Observers fixated a cross while tracking three targets out of six identical disks confined to move within an imaginary square. The fixation cross alternated between two locations, requiring observers to make repeated saccades. By moving (or not moving) the imaginary square in sync with the fixation cross, we could disrupt either (or both) coordinate systems. Surprisingly, tracking performance was much worse when the objects moved with the fixation cross, although this manipulation preserved the retinal image across saccades, thereby avoiding the visual disruptions normally associated with saccades. Instead, tracking performance was best when the allocentric coordinate system was preserved, suggesting that target locations are maintained in that coordinate system across saccades. This is consistent with a theoretical framework in which the positions of a small set of attentional pointers are predictively updated in advance of a saccade.



Visual stability and the motion aftereffect: a psychophysical study revealing spatial updating

U. Biber and U. J. Ilg

PLoS One  6  e16265  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21298104

Eye movements create an ever-changing image of the world on the retina. In particular, frequent saccades call for a compensatory mechanism to transform the changing visual information into a stable percept. To this end, the brain presumably uses internal copies of motor commands. Electrophysiological recordings of visual neurons in the primate lateral intraparietal cortex, the frontal eye fields, and the superior colliculus suggest that the receptive fields (RFs) of special neurons shift towards their post-saccadic positions before the onset of a saccade. However, the perceptual consequences of these shifts remain controversial. We wanted to test in humans whether a remapping of motion adaptation occurs in visual perception. The motion aftereffect (MAE) is the apparent movement in the opposite direction that is experienced after viewing a moving stimulus. We designed a saccade paradigm suitable for revealing pre-saccadic remapping of the MAE. Indeed, a transfer of motion adaptation from pre-saccadic to post-saccadic position could be observed when subjects prepared saccades. In the remapping condition, the strength of the MAE was comparable to the effect measured in a control condition (33±7% vs. 27±4%). In contrast, after a saccade or without saccade planning, the MAE was weak or absent when adaptation and test stimulus were located at different retinal locations, i.e., the effect was clearly retinotopic. Regarding visual cognition, our study reveals for the first time predictive remapping of the MAE but no spatiotopic transfer across saccades. Since the cortical sites involved in motion adaptation in primates are most likely the primary visual cortex and the middle temporal area (MT/V5) corresponding to human MT, our results suggest that pre-saccadic remapping extends to these areas, which have been associated with strict retinotopy and therefore with classical RF organization. The pre-saccadic transfer of visual features demonstrated here may be a crucial determinant for a stable percept despite saccades.



Context dependence of receptive field remapping in superior colliculus

J. Churan and D. Guitton and C. C. Pack

J Neurophysiol  106  1862-74  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21753030

Our perception of the positions of objects in our surroundings is surprisingly unaffected by movements of the eyes, head, and body. This suggests that the brain has a mechanism for maintaining perceptual stability, based either on the spatial relationships among visible objects or on internal copies of its own motor commands. Strong evidence for the latter mechanism comes from the remapping of visual receptive fields that occurs around the time of a saccade. Remapping occurs when a single neuron responds to visual stimuli placed presaccadically in the spatial location that will be occupied by its receptive field after the completion of a saccade. Although evidence for remapping has been found in many brain areas, relatively little is known about how it interacts with sensory context. This interaction is important for understanding perceptual stability more generally, as the brain may rely on extraretinal signals or visual signals to different degrees in different contexts. Here, we have studied the interaction between visual stimulation and remapping by recording from single neurons in the superior colliculus of the macaque monkey, using several different visual stimulus conditions. We find that remapping responses are highly sensitive to low-level visual signals, with the overall luminance of the visual background exerting a particularly powerful influence. Specifically, although remapping was fairly common in complete darkness, such responses were usually decreased or abolished in the presence of modest background illumination. Thus the brain might make use of a strategy that emphasizes visual landmarks over extraretinal signals whenever the former are available.



A lack of anticipatory remapping of retinotopic receptive fields in the middle temporal area

W. S. Ong and J. W. Bisley

J Neurosci  31  10432-6  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21775588

The middle temporal (MT) area has traditionally been thought to be a retinotopic area. However, recent functional magnetic resonance imaging and psychophysical evidence has suggested that human MT may have some spatiotopic processing. To gain an understanding of the neural mechanisms underlying this process, we recorded neurons from area MT in awake behaving animals performing a simple saccade task in which a spatially stable moving dot stimulus was presented for 500 ms in one of two locations: the presaccadic receptive field or the postsaccadic receptive field. MT neurons responded as if their receptive fields were purely retinotopic. When the stimulus was placed in the presaccadic receptive field, the response was elevated until the saccade took the stimulus out of the receptive field. When the stimulus was placed in the postsaccadic receptive field, the neuron began its response only after the end of the saccade. No evidence of presaccadic or anticipatory remapping was found. We conclude that gain fields are most likely to be responsible for the spatiotopic signal seen in area MT.
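
The gain-field account invoked in the last sentence can be sketched directly: a fixed retinotopic tuning curve whose amplitude is scaled by eye position leaves each cell's RF strictly eye-centred, yet lets a downstream population decode world-centred (spatiotopic) position. All parameters below are illustrative.

    import math

    def mt_like_response(stim_retinal_deg, eye_pos_deg,
                         rf_center=5.0, rf_sigma=3.0, gain_slope=0.02):
        """Retinotopic Gaussian tuning times a linear eye-position gain."""
        visual = math.exp(-0.5 * ((stim_retinal_deg - rf_center)
                                  / rf_sigma) ** 2)
        return visual * (1.0 + gain_slope * eye_pos_deg)

    # Identical retinal stimulation at two eye positions: the tuning is
    # unchanged, but the amplitude tags eye position, so world position
    # (retinal + eye) is recoverable from a population of such cells.
    print(round(mt_like_response(5.0, -10.0), 3))  # -> 0.8
    print(round(mt_like_response(5.0, +10.0), 3))  # -> 1.2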



Receptive field positions in area MT during slow eye movements

T. S. Hartmann and F. Bremmer and T. D. Albright and B. Krekelberg

J Neurosci  31  10437-44  (2011)


http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=AbstractPlus&list_uids=21775589

Perceptual stability requires the integration of information across eye movements. We first tested the hypothesis that motion signals are integrated by neurons whose receptive fields (RFs) do not move with the eye but stay fixed in the world. Specifically, we measured the RF properties of neurons in the middle temporal area (MT) of macaques (Macaca mulatta) during the slow phase of optokinetic nystagmus. Using a novel method to estimate RF locations for both spikes and local field potentials, we found that the location on the retina that changed spike rates or local field potentials did not change with eye position; RFs moved with the eye. Second, we tested the hypothesis that neurons link information across eye positions by remapping the retinal location of their RFs to future locations. To test this, we compared RF locations during leftward and rightward slow phases of optokinetic nystagmus. We found no evidence for remapping during slow eye movements; the RF location was not affected by eye-movement direction. Together, our results show that RFs of MT neurons and the aggregate activity reflected in local field potentials are yoked to the eye during slow eye movements. This implies that individual MT neurons do not integrate sensory information from a single position in the world across eye movements. Future research will have to determine whether such integration, and the construction of perceptual stability, takes place in the form of a distributed population code in eye-centered visual cortex or is deferred to downstream areas.
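
The frame-of-reference test behind such RF estimates can be illustrated with a small simulation: express every spike-triggering stimulus in retinal and in screen coordinates and compare the spread of the two estimates. The simulated neuron below is eye-centred by construction, so the retinal-frame estimate comes out far tighter; every number is an assumption made for illustration.

    import math, random

    random.seed(1)
    RF_RETINAL = 6.0   # true eye-centred RF centre (deg); assumed

    def rate(stim_screen, eye):
        """Response probability of an eye-centred Gaussian RF."""
        return math.exp(-0.5 * ((stim_screen - eye - RF_RETINAL) / 2.0) ** 2)

    retinal, screen = [], []
    for _ in range(5000):
        eye = random.uniform(-10.0, 10.0)       # slow-phase eye position
        stim = random.uniform(-20.0, 20.0)      # probe location on screen
        if random.random() < rate(stim, eye):   # stochastic spike draw
            retinal.append(stim - eye)
            screen.append(stim)

    def spread(xs):
        m = sum(xs) / len(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

    print("retinal-frame spread:", round(spread(retinal), 2))  # ~2 deg
    print("screen-frame spread:", round(spread(screen), 2))    # much larger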