Di Luca, M.; Domini, F.; Caudek, C. (2004). Non-linear combination of stereo and motion. Journal of Vision, 4(8), article 466. ISSN 1534-7362. DOI: 10.1167/4.8.466

Non-linear combination of stereo and motion

CAUDEK, CORRADO
2004

Abstract

The most recent models of depth cue combination assume that the information provided by each cue is processed in isolation. Depth estimates are subsequently combined through a weighted average, where the weights are inversely proportional to the variance of each estimate (Modified Weak Fusion model; Landy et al., 1995, Vision Research, 35-3). These approaches ignore the covariance that exists among 2D depth cues in real-world situations. Specifically, in the case of a rigid transformation, disparity and velocity signals are linearly related to each other such that, if the stimulus is small enough, the ratio between velocities and disparities must be constant at any instant of time. In this study we investigated whether this relationship is utilized in visual processing of depth information. In two experiments (motion parallax and vertical rotation), observers viewed a 3D structure defined by a set of randomly distributed dots in a spherical volume. We asked observers to adjust the depth of a probe dot located at the center of this structure until it was perceived to be co-planar with two comparison dots. The ratio between the velocity and disparity values of the probe dot was kept constant during each adjustment but was varied across five experimental conditions. In only one condition did this ratio coincide with the ratio of velocities and disparities of the dots in the structure. If motion and stereo signals are independently combined, velocity and stereo settings of the probe dot should fall on a straight line. On the other hand, if the visual system is sensitive to the co-variation of stereo and motion signals, we expect observers' adjustments to follow a specific non-linear pattern (Di Luca, Domini, Caudek, 2003, Perception, 32-Supplement). Our results are clearly compatible with this second prediction.
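As a rough illustration of the inverse-variance weighted average assumed by the Modified Weak Fusion model, the sketch below combines two hypothetical single-cue depth estimates. The cue values and variances are invented for the example; this is not the authors' implementation or data.

```python
# Sketch of Modified Weak Fusion style cue combination: each cue's
# weight is inversely proportional to the variance of its estimate.
# All numbers here are hypothetical, for illustration only.

def combine_cues(estimates, variances):
    """Weighted average with weights inversely proportional to variance."""
    raw_weights = [1.0 / v for v in variances]
    total = sum(raw_weights)
    weights = [w / total for w in raw_weights]
    combined = sum(w * d for w, d in zip(weights, estimates))
    # Variance of the combined estimate (assuming independent cues)
    combined_var = 1.0 / total
    return combined, combined_var

# Example: stereo suggests 10 cm of depth (variance 4.0),
# motion suggests 14 cm (variance 1.0).
depth, var = combine_cues([10.0, 14.0], [4.0, 1.0])
# The less variable motion estimate dominates: depth = 13.2, var = 0.8
```

Note that the independence assumption built into this scheme is exactly what the abstract questions: under a rigid transformation, disparity and velocity signals co-vary rather than providing isolated estimates.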
Files in this record:
No files are associated with this record.

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/646597