Depth cues do not specify a unique Affine or Euclidean shape representation / M. Di Luca; F. Domini; C. Caudek. - In: JOURNAL OF VISION. - ISSN 1534-7362. - Electronic. - 6 (2006), article 340. [10.1167/6.6.340]
Depth cues do not specify a unique Affine or Euclidean shape representation
CAUDEK, CORRADO
2006
Abstract
To test whether perceived shape from shading, texture, and motion is affine, we asked participants to compare the curvature at the tip of two surfaces of revolution with a quadratic profile. The first surface was defined by shading or motion, and the second was defined by texture information. The match was obtained by keeping the texture surface constant and varying the illumination direction for the shading surface and the angular rotation for the motion surface. If the 3D shapes perceived from shading, motion, or texture are related to the simulated surface by an affine stretching, then our procedure should produce identical values of perceived curvature, depth, and slant for all other local patches of the three surfaces as well. Our empirical results, however, show that this is not the case. This implies that the 3D shapes recovered from shading, texture, and motion are not related to the simulated 3D surface by an affine transformation. These results are compatible with the hypothesis that the local analysis of image signals specifies different 3D properties: shading specifies only local curvature; texture specifies local slant and curvature; motion specifies local curvature, slant, and depth. Slant and depth from shading, and depth from texture, can only be computed through spatial integration, which necessarily introduces noise into the recovery process. We therefore expected the perceived values of slant and depth from shading, and of depth from texture, to be smaller and less reliable than those specified by motion information. The empirical results confirm this hypothesis.
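The affine-stretching hypothesis can be made concrete with a small worked sketch; the following is illustrative only, not taken from the original record, and the symbols (alpha, beta, gamma, delta, c) are assumed names for the stretch and shear parameters. An affine (relief) transformation of the simulated depth map z(x, y) has the form

\[
  \hat{z}(x,y) \;=\; \alpha\, z(x,y) + \beta x + \gamma y + \delta, \qquad \alpha > 0 .
\]

For a surface of revolution with quadratic profile, $z(x,y) = c\,(x^{2} + y^{2})$, a pure stretch ($\beta = \gamma = \delta = 0$) gives $\hat{z}(x,y) = \alpha c\,(x^{2} + y^{2})$, so the curvature at the tip scales as $2\alpha c$. Matching tip curvature across two cue-defined surfaces would therefore fix $\alpha$, and with it the perceived curvature, slant, and depth at every other local patch, which is the prediction that the reported results contradict.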