An understanding of visually evoked smooth eye movements is required to predict the visibility and legibility of
moving displays, such as might be encountered in vehicles like aircraft and automobiles. We have studied the
response of the oculomotor system to various classes of visual stimuli, and analyzed the results separately for
horizontal and vertical version (in which the two eyes move together), and horizontal and vertical vergence (in which
they move in opposite directions). Of the four types of motion, only vertical vergence cannot be performed under
voluntary control, and certain stimuli (all having relatively long latencies) are incapable of evoking it. In another
experiment, we instructed observers to track one of two targets, and measured weak but reliable responses to the
unattended target, in which the long-latency component of the response is abolished. Our results are consistent
with a system containing two distinct processes: a fast reflexive process that responds to a restricted class
of stimuli, and a slower voluntary process capable of following anything that can be seen, but incapable of
controlling vertical vergence.
Space operations present the human visual system with a wide dynamic range of images from faint stars and starlit
shadows to unattenuated sunlight. Lunar operations near the poles will result in low sun angles, exacerbating visual
problems associated with shadowing and glare. We discuss the perceptual challenges these conditions will present to the
human explorers, and consider some possible mitigations and countermeasures. We also discuss the problems of
simulating these conditions for realistic training.
This paper considers the problem of delivering calibrated images over the web with the precision appropriate
for psychophysical experimentation. We are interested only in methods that might be employed by a remote
participant possessing nothing other than a computer terminal. Therefore, we consider only purely psychophysical
methods not requiring any measurement instruments or standards. Because of this limitation, there are certain
things we cannot determine, the most significant of which is absolute luminance. We present solutions
for three particular problems: linearization, also known as gamma correction; determination of the relative
luminances of the display primaries; and colorimetry, i.e. determining the chromaticity of the primaries.
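To illustrate the linearization step, here is a minimal sketch of one purely psychophysical approach: the observer adjusts a uniform patch until it matches a fine 50% black/white checkerboard (mean relative luminance 0.5, assuming spatial additivity), and a single match level then determines gamma. The match level 186 and the function names are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

def estimate_gamma(match_level, max_level=255):
    """Gamma from one psychophysical match: a uniform patch at gray level
    match_level was judged equal to a fine 50% black/white checkerboard,
    whose mean relative luminance is 0.5 (assuming spatial additivity).
    Solves (match_level / max_level) ** gamma = 0.5 for gamma."""
    return np.log(0.5) / np.log(match_level / max_level)

def linearization_lut(gamma, max_level=255):
    """Lookup table mapping a desired relative luminance (index / max_level)
    to the gray level that produces it on a display with the given gamma."""
    target = np.arange(max_level + 1) / max_level
    return np.round(max_level * target ** (1.0 / gamma)).astype(int)

gamma = estimate_gamma(186)        # 186 is an illustrative match level
lut = linearization_lut(gamma)     # gamma comes out near 2.2
```

Note that this yields only the display nonlinearity, not absolute luminance, consistent with the limitation stated above.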
Adapting to the visual characteristics of a specific environment may facilitate detecting novel stimuli within that environment. We monitored eye movements while subjects searched for a color target on familiar or unfamiliar color backgrounds, in order to test for these performance changes and to explore whether they reflect changes in salience from adaptation vs. changes in search strategies or perceptual learning. The target was an ellipse of variable color presented at a random location on a dense background of ellipses. In one condition, the colors of the background varied along either the LvsM or SvsLM cardinal axes. Observers adapted by viewing a rapid succession of backgrounds drawn from one color axis, and then searched for a target on a background from the same or different color axis. Searches were monitored with a Cambridge Research Systems Video Eyetracker. Targets were located more quickly on the background axis that observers were pre-exposed to, confirming that this exposure can improve search efficiency for stimuli that differ from the background. However, eye movement patterns (e.g. fixation durations and saccade magnitudes) did not clearly differ across the two backgrounds, suggesting that how the novel and familiar backgrounds were sampled remained similar. In a second condition, we compared search on a nonselective color background drawn from a circle of hues at fixed contrast. Prior exposure to this background did not facilitate search compared to an achromatic adapting field, suggesting that subjects were not simply learning the specific colors defining the background distributions. Instead, results for both conditions are consistent with a selective adaptation effect that enhances the salience of novel stimuli by partially discounting the background.
In the fall of 2003, a series of flight tests was performed in the Tullahoma, Tennessee area to assess the ability of non-instrument rated helicopter pilots to fly precision routes with the aid of a Global Positioning System (GPS) receiver. The navigation performance of pilot subjects was assessed from GPS recordings of the flight trajectory, while pilot behavior was recorded using four video cameras, two of which were attached to a goggle frame worn by the pilot. This paper describes the processing methods developed for these data, and presents some preliminary results.
KEYWORDS: Video, Data modeling, Visualization, Video compression, Spatial filters, Video processing, Spatial frequencies, Visual process modeling, Human vision and color perception, Error analysis
The growth of digital video has given rise to a need for computational methods for evaluating the visual quality of digital video. We have developed a new digital video quality metric, which we call DVQ. Here we provide a brief description of the metric, and give a preliminary report on its performance. DVQ accepts a pair of digital video sequences, and computes a measure of the magnitude of the visible difference between them. The metric is based on the Discrete Cosine Transform. It incorporates aspects of early visual processing, including light adaptation, luminance and chromatic channels, spatial and temporal filtering, spatial frequency channels, contrast masking, and probability summation. It also includes primitive dynamics of light adaptation and contrast masking. We have applied the metric to digital video sequences corrupted by various typical compression artifacts, and compared the results to quality ratings made by human observers.
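A toy sketch of the DCT-domain comparison at the heart of such a metric (this is not the DVQ implementation: the uniform CSF weights, the pooling exponent, and the omission of light adaptation, masking, and temporal filtering are all simplifying assumptions):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

def blockwise_dct(frame, C):
    """8x8 block DCT of a grayscale frame (dimensions multiples of 8)."""
    h, w = frame.shape
    blocks = frame.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    return C @ blocks @ C.T

def dct_difference_score(ref, test, csf_weights, beta=4.0):
    """Visible-difference score for one frame pair: weight DCT coefficient
    differences by (hypothetical) contrast-sensitivity weights, then pool
    with a Minkowski sum approximating probability summation."""
    C = dct_matrix()
    d = (blockwise_dct(ref, C) - blockwise_dct(test, C)) * csf_weights
    return np.sum(np.abs(d) ** beta) ** (1.0 / beta)
```

Identical frames score zero; any compression artifact raises the score in proportion to its weighted DCT-coefficient error.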
The term error diffusion has been used in the halftoning literature to describe processes in which pixels' quantization errors are spread in space to their unquantized neighbors, causing neighboring errors to be negatively correlated and relatively invisible. The general principle may be extended to the time dimension as well, which we will refer to as temporal error diffusion. In this paper we consider the use of temporal error diffusion to ameliorate the errors introduced by JPEG image compression of a stream of images.
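The principle can be sketched with plain uniform quantization standing in for JPEG's DCT-domain quantization (an assumption for brevity; the function name and step size are illustrative): each frame's quantization error is carried forward and added to the next frame before it is quantized, so successive errors tend to cancel over time.

```python
import numpy as np

def temporal_error_diffusion(frames, step=32):
    """Coarsely quantize a sequence of frames, diffusing each frame's
    quantization error into the next frame in time, so that successive
    errors are negatively correlated and average out over the sequence."""
    carry = np.zeros_like(frames[0], dtype=float)
    out = []
    for f in frames:
        adjusted = f.astype(float) + carry
        q = step * np.round(adjusted / step)
        carry = adjusted - q            # error propagated to the next frame
        out.append(np.clip(q, 0, 255).astype(np.uint8))
    return out
```

For a static input, the time-average of the quantized frames recovers the true gray level even though each individual frame is coarsely quantized.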
When models of human vision adequately measure the relative quality of candidate halftonings of an image, the problem of halftoning the image becomes equivalent to the search problem of finding a halftone that optimizes the quality metric. Because of the vast number of possible halftones, and the complexity of image quality measures, this principled approach has usually been put aside in favor of fast algorithms that seem to perform well. We find that the principled approach can lead to a range of useful halftoning algorithms, as we trade off speed for quality by varying the complexity of the quality measure and the thoroughness of the search. High quality halftones can be obtained reasonably quickly, for example, by using as a measure the vector length of the error image filtered by a contrast sensitivity function, and, as the search procedure the sequential adjustment of individual pixels to improve the quality measure. If computational resources permit, simulated annealing can find nearly optimal solutions.
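The sequential-adjustment search described above can be sketched as follows, with a small Gaussian blur standing in for the contrast sensitivity filter (the kernel parameters and function names are illustrative assumptions, and the per-pixel cost is recomputed globally rather than incrementally, so this is slow but transparent):

```python
import numpy as np

def lowpass(img, k=5, sigma=1.5):
    """Separable Gaussian blur standing in for a contrast sensitivity filter."""
    x = np.arange(k) - k // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, 'same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g, 'same'), 0, tmp)

def greedy_halftone(gray, n_passes=3):
    """Sequentially toggle pixels, keeping a flip only when it reduces the
    vector length of the CSF-filtered error image (the quality measure)."""
    ht = (gray > 0.5).astype(float)               # initial binary guess
    for _ in range(n_passes):
        for i in range(gray.shape[0]):
            for j in range(gray.shape[1]):
                cost = np.sum(lowpass(ht - gray) ** 2)
                ht[i, j] = 1.0 - ht[i, j]         # trial flip
                if np.sum(lowpass(ht - gray) ** 2) >= cost:
                    ht[i, j] = 1.0 - ht[i, j]     # revert: no improvement
    return ht
```

Because flips are kept only when they lower the measure, the filtered-error cost is non-increasing; replacing the greedy acceptance rule with a temperature-dependent probabilistic one turns this into the simulated annealing variant mentioned above.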
KEYWORDS: Sensors, Visualization, Human vision and color perception, Visual process modeling, Calibration, Image filtering, Spatial frequencies, Algorithm development, Image processing, Linear filtering
A network learning algorithm is presented that computes interpolation functions that can compensate for weakened, jittered, or missing elements of a sensor array. The algorithm corrects errors in translation invariance, so prior knowledge of the input images is not required.
An algorithm is described for learning image interpolation functions for sensor arrays
whose sensor positions are somewhat disordered. The learning is based on failures of
translation invariance, so it does not require knowledge of the images being presented to the
visual system. Previously reported implementations of the method assumed the visual system
to have precise knowledge of the translations. We demonstrate here that translation estimates
computed from the imperfectly interpolated images can have enough accuracy to allow the
learning process to converge to a correct interpolation.
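A one-dimensional, circular sketch of the idea (heavily simplified assumptions: a small periodic sensor array, band-limited test images, integer translations, and a plain gradient step; none of this code is from the paper): reconstruct with a learned weight matrix, estimate the translation by cross-correlating the two imperfect reconstructions, and nudge the weights to reduce the remaining failure of translation invariance.

```python
import numpy as np

n = 32
rng = np.random.default_rng(0)
pos = np.arange(n) + 0.25 * rng.standard_normal(n)   # jittered sensor positions

def image(coeffs, x):
    """A random band-limited periodic test image evaluated at positions x."""
    a, b = coeffs
    return sum(a[k] * np.cos(2 * np.pi * (k + 1) * x / n) +
               b[k] * np.sin(2 * np.pi * (k + 1) * x / n) for k in range(3))

def invariance_loss(W, coeffs):
    """Total failure of translation invariance over all circular shifts."""
    s0 = image(coeffs, pos)
    return sum(np.sum((np.roll(W @ s0, d) - W @ image(coeffs, pos - d)) ** 2)
               for d in range(1, n))

W = np.eye(n)                  # initial interpolation: trust each sensor as-is
for _ in range(3000):
    coeffs = (rng.standard_normal(3), rng.standard_normal(3))
    d = int(rng.integers(1, n))                   # translation, unknown to learner
    s0, s1 = image(coeffs, pos), image(coeffs, pos - d)
    r0, r1 = W @ s0, W @ s1
    # estimate the translation from the imperfect reconstructions themselves
    d_hat = int(np.argmax([np.roll(r0, k) @ r1 for k in range(n)]))
    e = np.roll(r0, d_hat) - r1                   # residual invariance failure
    W -= 0.001 * (np.outer(np.roll(e, -d_hat), s0) - np.outer(e, s1))
```

Even though the translation estimates are computed from imperfect reconstructions, they are accurate often enough for the learned weights to reduce the invariance error relative to the identity interpolation.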
A method is described for reducing the visibility of artifacts arising in the display of
quantized color images on CRT displays. The method is based on the differential spatial sensitivity
of the human visual system to chromatic and achromatic modulations. Because the
visual system has the highest spatial and temporal acuity for the luminance component of an
image, we seek a technique that will reduce luminance artifacts at the expense of introducing
high-frequency chromatic errors. In this paper we explore a method based on controlling
the correlations between the quantization errors in the individual phosphor images. The luminance
component is greatest when the phosphor errors are positively correlated, and is minimized
when the phosphor errors are negatively correlated. The greatest effect of the correlation
is obtained when the intensity quantization step sizes of the individual phosphors have equal
luminances. For the ordered dither algorithm, a version of the method can be implemented by
simply inverting the matrix of thresholds for one of the color components.
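A minimal sketch of that final observation, using a standard 4x4 Bayer threshold matrix (the choice of the green channel for the inverted matrix, and the function names, are illustrative assumptions): dithering one color component with the complemented threshold matrix makes its quantization error the complement of the others', so the errors are negatively correlated and the luminance error largely cancels.

```python
import numpy as np

BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def ordered_dither(channel, thresholds):
    """Binary ordered dither of one color component in [0, 1]."""
    h, w = channel.shape
    t = np.tile(thresholds, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (channel > t).astype(float)

def dither_rgb_anticorrelated(img):
    """Dither R and B with the Bayer matrix but G with the inverted
    (complemented) matrix, so the green quantization error is negatively
    correlated with the others."""
    inv = 15.0 / 16.0 - BAYER4          # inverted threshold matrix
    r = ordered_dither(img[..., 0], BAYER4)
    g = ordered_dither(img[..., 1], inv)
    b = ordered_dither(img[..., 2], BAYER4)
    return np.stack([r, g, b], axis=-1)
```

On a uniform mid-gray input, the red and green binary patterns come out as exact complements of each other: their errors are perfectly anticorrelated, which is the luminance-minimizing case described above.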