Conventional stereoscopic displays present images on a single focal plane. The resulting mismatch between the stimuli to the eyes' focusing response (accommodation) and to convergence causes fatigue and poor stereo performance. One solution is to distribute image intensity across a number of widely spaced image planes - a technique referred to as depth filtering. Previously, we found this elicits accurate, continuous monocular accommodation responses with image-plane separations as large as 1.1 Diopters (D, the reciprocal of distance in meters), suggesting that a small number of image planes could eliminate vergence-accommodation conflicts over a large range of simulated distances. Evidence exists, however, of systematic differences between accommodation responses to binocular and monocular stimuli when the stimulus to accommodation is degraded, or at an incorrect distance. We examined the minimum image-plane spacing required for accurate accommodation to binocular depth-filtered images. We compared accommodation and vergence responses to changes in depth specified by depth filtering, using image-plane separations of 0.6 to 1.2 D, and equivalent real stimuli. Accommodation responses to real and depth-filtered stimuli were equivalent for image-plane separations of ~0.6 to 0.9 D, but differed thereafter. We conclude that depth filtering can be used to precisely match accommodation and vergence demand in a practical stereoscopic display.
Conventional stereoscopic displays present conflicting stimuli to vergence and accommodation, causing fatigue,
discomfort, and poor stereo depth perception. One promising solution is 'depth filtering', in which continuous variations
in focal distance are simulated by distributing image intensity across multiple focal planes. The required image-plane
spacing is a critical parameter, because there are constraints on the total number that can be used. Depth-filtered images
have been shown to support continuous and reasonably accurate accommodation responses with 1.1 dioptre (D) image-plane
spacings. However, retinal contrast is increasingly attenuated with increasing image-plane separation. Thus, while
such stimuli may eliminate the vergence-accommodation conflict, they may also unacceptably degrade stereoscopic
depth perception. Here we measured stereoacuity, and the time needed for stereoscopic fusion, for real targets and depth-filtered
approximations to the same stimuli (image-plane spacings of 0.6, 0.9 and 1.2 D). Stereo fusion time was
reasonably consistent across conditions. Stereoacuity for depth-filtered stimuli was only slightly poorer than for real
targets with 0.6 D image-plane separation, but deteriorated rapidly thereafter. Our results suggest that stereoscopic depth
perception, not accommodation and vergence responses, is the limiting factor in determining acceptable image-plane
spacing for depth-filtered images. We suggest that image-plane spacing should ideally not exceed ~0.6 D.
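As a rough illustration of the practical constraint on the number of image planes, the sketch below counts and places focal planes for the three spacings considered above, using the fact that dioptres are the reciprocal of distance in metres. The 0-4 D working range (optical infinity to 25 cm) is an assumption chosen for illustration, not a parameter of the study.

```python
# Minimal sketch: how many focal planes are needed to span a dioptric
# working range at a given maximum image-plane spacing. The 0-4 D range
# (optical infinity to 25 cm) is an assumed example, not from the study.
import math

def plane_positions(near_d=4.0, far_d=0.0, max_spacing_d=0.6):
    """Focal-plane positions (in dioptres) spanning [far_d, near_d]
    with inter-plane spacing no larger than max_spacing_d."""
    n = math.ceil((near_d - far_d) / max_spacing_d) + 1
    step = (near_d - far_d) / (n - 1)
    return [far_d + i * step for i in range(n)]

for spacing in (0.6, 0.9, 1.2):
    planes = plane_positions(max_spacing_d=spacing)
    in_metres = ", ".join("inf" if p == 0 else f"{1 / p:.2f} m" for p in planes)
    print(f"<= {spacing} D spacing: {len(planes)} planes ({in_metres})")
```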
Conventional stereoscopic displays present images on a single focal plane. The resulting mismatch between the stimuli
to the eyes' focusing response (accommodation) and to convergence causes fatigue and poor stereo performance. One
promising solution is to distribute image intensity across a number of relatively widely spaced image planes - a
technique referred to as depth filtering. Previously, we found this elicits accurate, continuous monocular accommodation
responses with image-plane separations as large as 1.1 Diopters (D), suggesting that a relatively small (i.e. practical)
number of image planes is sufficient to eliminate vergence-accommodation conflicts over a large range of simulated
distances. However, accommodation responses have been found to overshoot systematically when the same stimuli are
viewed binocularly. Here, we examined the minimum image-plane spacing required for accurate accommodation to
binocular depth-filtered images. We compared accommodation and vergence responses to step changes in depth for
depth-filtered stimuli, using image-plane separations of 0.6-1.2 D, and equivalent real stimuli. Accommodation
responses to real and depth-filtered stimuli were equivalent for image-plane separations of ~0.6-0.9 D, but inaccurate
thereafter. We conclude that depth filtering can be used to precisely match accommodation and vergence demand in a
practical stereoscopic display, using a relatively small number of image planes.
Conventional stereoscopic displays present images at a fixed focal distance. Depth variations in the depicted scene
therefore result in conflicts between the stimuli to vergence and to accommodation. The resulting decoupling of
accommodation and vergence responses can cause adverse consequences, including reduced stereo performance,
difficulty fusing binocular images, and fatigue and discomfort. These problems could be eliminated if stereo displays
could present correct focus cues. A promising approach to achieving this is to present each eye with a sum of images
presented at multiple focal planes, and to approximate continuous variations in focal distance by distributing light energy
across image planes - a technique referred to as depth filtering [1]. Here we describe a novel multi-plane display in which
we can measure accommodation and vergence responses. We report an experiment in which we compare these
oculomotor responses to real stimuli and depth-filtered simulations of the same distance. Vergence responses were
generally similar across conditions. Accommodation responses to depth-filtered images were inaccurate, however,
showing an overshoot of the target, particularly in response to a small step-change in stimulus distance. This is
surprising because we have previously shown that blur-driven accommodation to the same stimuli, viewed monocularly,
is accurate and reliable. We speculate that an initial convergence-driven accommodation response, in combination with a
weaker accommodative stimulus from depth-filtered images, leads to this overshoot. Our results suggest that
stereoscopic multi-plane displays can be effective, but require smaller image-plane separations than monocular
accommodation responses suggest.
Focus cues specify inappropriate 3-D scene parameters in conventional displays because the light comes from a single surface, independent of the depth relations in the portrayed scene. This can lead to distortions in perceived depth, as well as discomfort and fatigue due to the differing demands on accommodation and vergence. Here we examine the efficacy of a stereo-display prototype designed to minimize these problems by using multiple image planes to present near-correct focus cues. Each eye’s view is the sum of several images presented at different focal distances. Image intensities are assigned based on the dioptric distance of each image plane from the portrayed object, determined along visual lines. The stimulus to accommodation is more consistent with the portrayed depth than with conventional displays, but it still differs from the stimulus in equivalent real scenes. Compared to a normal, fixed-distance display, observers showed improved stereoscopic performance in different psychophysical tasks including speed of fusing stereoscopic images, precision of depth discrimination, and accuracy of perceived depth estimates. The multiple image-planes approach provides a practical solution for some shortcomings of conventional displays.
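The intensity-assignment rule described above lends itself to a short sketch. The version below assumes linear interpolation in dioptric distance between the two focal planes that bracket the portrayed point, which is one common depth-filtering rule; the plane positions, and the assumption that exactly two planes share the intensity, are illustrative rather than a description of the prototype itself.

```python
# Minimal sketch of depth-filtered intensity weighting, assuming linear
# interpolation in dioptres between the two focal planes that bracket the
# portrayed point. Plane positions are illustrative, not the prototype's.
PLANE_DIOPTRES = [0.0, 0.6, 1.2, 1.8]  # far to near, assumed spacing

def depth_filter_weights(point_dioptres, planes=PLANE_DIOPTRES):
    """Return one intensity weight per focal plane, summing to 1.

    A point lying on a plane gets all of its intensity on that plane;
    otherwise intensity is split between the two bracketing planes in
    proportion to dioptric proximity.
    """
    weights = [0.0] * len(planes)
    d = min(max(point_dioptres, planes[0]), planes[-1])  # clamp to range
    for i in range(len(planes) - 1):
        lo, hi = planes[i], planes[i + 1]
        if lo <= d <= hi:
            t = 0.0 if hi == lo else (d - lo) / (hi - lo)
            weights[i] = 1.0 - t
            weights[i + 1] = t
            break
    return weights

# Example: a point portrayed at 0.8 D splits its intensity 2/3 : 1/3
# between the 0.6 D and 1.2 D planes.
print(depth_filter_weights(0.8))
```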
Increasingly, binocular disparity has become commonplace in telepresence systems despite the additional cost of its provision. Experiments comparing performance under monocular and binocular viewing are often cited as justification for its use. Here we question whether this is the appropriate experimental comparison and provide an important set of data comparing performance on a motor task under binocular, monocular and bi-ocular (where both eyes receive the same view) conditions. Binocular cues were found to be particularly important in the control of the transport component. In the binocular conditions, peak velocity scaling with object distance was greater than in the other conditions, and in the bi-ocular condition, where the binocular distance cues conflicted with pictorial information, no scaling was evident. For the grasp component, even in the presence of conflicting size and depth information, grip scaling remained equivalent in all conditions. For the transport component at least, binocular cues appear important, and the decrease in performance observed in behavioral studies under monocular conditions is not attributable to lack of information in one eye but rather to the lack of binocular depth cues. Therefore, in the design of telepresence systems to be used in telemanipulation tasks, the use of stereoscopic display technology seems justified.
Binocular telepresence systems afford the opportunity of increasing the inter-camera distance (ICD) beyond the normal interocular distance (IOD), which magnifies the disparity information. This improves performance in nulling and matching tasks. Here we examine whether telepresent observers can learn to use enhanced disparities to accurately perform tasks requiring the recovery of Euclidean geometry (a shape task). The design comprised three phases: pre-adaptation (ICD = 6.5 cm), adaptation (ICD = 3.25 or 13 cm) and post-adaptation (ICD = 6.5 cm). Telepresent observers were required to adjust the magnitude of a depth interval (specified by binocular disparity) so that it matched a 2D interval specified by two lights (set between 5 and 15 cm) in an otherwise blacked-out scene. In the adaptation phase, the ICD/IOD ratio was changed to 0.5 or 2 and observers adjusted the depth interval repeatedly until a performance criterion was reached. Two forms of feedback were given in the adaptation phase: direct, where another light was shown at the correct disparity; and symbolic, where a signed number indicated the magnitude and direction of the error. Observers were clearly affected by ICD/IOD changes but learned the new ratio rapidly under both feedback conditions.
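As a rough geometric sketch of why scaling ICD relative to IOD rescales the disparity specifying a given depth interval (small-angle approximation; the viewing distance and depth interval below are illustrative assumptions, not the experimental values):

```python
# Small-angle sketch: the disparity specifying a fixed depth interval scales
# linearly with camera separation, so ICD/IOD ratios of 0.5 and 2 halve and
# double it. Viewing distance and depth interval are illustrative values.
import math

def relative_disparity_deg(separation_m, distance_m, depth_interval_m):
    """Relative disparity (deg) between a point at distance_m and a point
    depth_interval_m farther away, for a given camera/eye separation."""
    rad = separation_m * (1.0 / distance_m - 1.0 / (distance_m + depth_interval_m))
    return math.degrees(rad)

IOD = 0.065                              # normal interocular distance, 6.5 cm
for icd in (0.0325, 0.065, 0.13):        # ICD/IOD ratios of 0.5, 1 and 2
    disp = relative_disparity_deg(icd, distance_m=0.6, depth_interval_m=0.10)
    print(f"ICD/IOD = {icd / IOD:g}: ~{disp:.2f} deg for a 10 cm interval at 60 cm")
```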
The experiments reported here were designed to address two aims. The first was to determine the sufficiency of head-generated motion parallax, when present in isolation, for the control of natural prehensile movements. The second was to assess the consequences of providing enhanced parallax information for prehension. Enhanced parallax was created by changing the spatial extent of the movement of a camera relative to the extent of the teleoperator's head movements. The gain ranged from 0.5 to 4. The scene was viewed for 2 s before reaches were made in open-loop conditions. Results showed clearly that information from motion parallax is sufficient to support reliable and accurate motor movements. The enhanced information led to predictable distortions in perceived size and distance, and corresponding alterations in the transport and grip components. The results suggest that the provision of parallax information is beneficial for tasks requiring the recovery of metric depth information. However, if enhanced parallax is used, which facilitates performance in a range of perceptual tasks, re-calibration of the relative motion information is necessary to prevent size/distance distortions.
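A minimal sketch of the gain manipulation described above, assuming the remote camera simply tracks the operator's lateral head displacement scaled by a constant gain; the one-dimensional simplification, names, and the 5 cm example movement are illustrative assumptions.

```python
# Minimal sketch of enhanced motion parallax: the remote camera's lateral
# position follows the operator's head displacement scaled by a gain.
# The 1-D simplification and the 5 cm head movement are illustrative.
def camera_displacement(head_displacement_m, gain=1.0):
    """Gain > 1 enhances parallax; gain < 1 attenuates it."""
    return gain * head_displacement_m

for gain in (0.5, 1.0, 2.0, 4.0):
    cam = camera_displacement(0.05, gain)
    print(f"gain {gain}: 5 cm head movement -> {cam * 100:.1f} cm camera movement")
```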
The control of inter-camera distance (ICD) can be used to change the range of binocular disparities available from a visual scene viewed remotely. Binocular disparity is considered pre-eminent in the control of reaching behavior. One reason for this is that, once suitably scaled, it can specify metrical depth relationships within a scene. Such information is necessary in order to plan the transport and grasp phases of a reaching movement. However, whether an observer can take advantage of enhanced disparities to control reaching is unknown. Here we examine the effects of manipulating ICD on reaching movements, with ICDs ranging from 6.5 cm to 26 cm. Typically sized, real-world objects were placed in a scene and reaching performance was assessed. An experimental sequence consisted of three blocks. The first and last blocks used a normal ICD/IOD of 6.5 cm, whereas the middle block used an increased ICD. Larger than normal ICDs were found to disrupt reaching performance, with slower peak velocities and smaller grip apertures being observed. This was more pronounced for unfamiliar objects. Little evidence for learning was found.
Acquisition of the skill of medical image inspection could be due to changes in visual search processes, 'low-level' sensory learning, and higher-level 'conceptual learning'. Here, we report two studies that investigate the extent to which learning in medical image inspection involves low-level learning. Early in the visual processing pathway, cells are selective for direction of luminance contrast. We exploit this in the present studies by using transfer across direction of contrast as a 'marker' to indicate the level of processing at which learning occurs. In both studies, twelve observers trained for four days at detecting features in x-ray images (experiment one: discs in the Nijmegen phantom; experiment two: micro-calcification clusters in digitized mammograms). Half the observers examined negative luminance contrast versions of the images and the remainder examined positive contrast versions. On the fifth day, observers swapped to inspect their respective opposite-contrast images. In both experiments, learning occurred across sessions. In experiment one, learning did not transfer across direction of luminance contrast, while in experiment two there was only partial transfer. These findings are consistent with the contention that some of the learning was localized early in the visual processing pathway. The implications of these results for current medical image inspection training schedules are discussed.