
Postgraduate thesis: The perception of object motion during self-motion

Title: The perception of object motion during self-motion
Authors: Niehorster, Diederick Christian
Issue Date: 2013
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Niehorster, D. C. (2013). The perception of object motion during self-motion. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. Retrieved from http://dx.doi.org/10.5353/th_b5177318
Abstract: When we stand still and do not move our eyes and head, the motion of an object in the world, or the absence thereof, is directly given by the motion or quiescence of the retinal image. Self-motion through the world, however, complicates this retinal image. During self-motion, the whole retinal image undergoes coherent global motion, called optic flow. Self-motion therefore causes the retinal motion of objects moving in the world to be confounded by a motion component due to self-motion. How then do we perceive the motion of an object in the world when we ourselves are also moving? Although non-visual information about self-motion, such as that provided by efference copies of motor commands and vestibular stimulation, might play a role in this ability, it has recently been shown that the brain possesses a purely visual mechanism that underlies scene-relative object motion perception during self-motion. In the flow parsing hypothesis developed by Rushton and Warren (2005; Warren & Rushton, 2007; 2009b), the brain uses its sensitivity to optic flow to detect and globally remove retinal motion due to self-motion and so recover the scene-relative motion of objects. Research into this perceptual ability has so far been qualitative.

In this thesis, I therefore develop a retinal motion nulling paradigm to measure the gain with which the flow parsing mechanism uses the optic flow to remove the self-motion component from an object's retinal motion. I use this paradigm to investigate how accurate scene-relative object motion perception during self-motion can be when based on visual information alone, whether this flow parsing process depends on a percept of the direction of self-motion, and the tuning of flow parsing, i.e., how it is modulated by changes in various stimulus aspects.

The results reveal that although adding monocular or binocular depth information to the display to precisely specify the moving object's 3D position in the scene improved the accuracy of flow parsing, the flow parsing gain never reached the level required by the scene geometry. Furthermore, the flow parsing gain was lower at higher eccentricities from the focus of expansion in the flow field and was strongly modulated by changes in the motion angle between the self-motion and object motion components in the retinal motion of the moving object, the speeds of these components, and the density of the flow field. Lastly, flow parsing was not affected by illusory changes in the perceived direction of self-motion. In conclusion, visual information alone is not sufficient for accurate perception of scene-relative object motion during self-motion. Furthermore, flow parsing takes the 3D position of the moving object in the scene into account and is not a uniform global subtraction process. The observed tuning characteristics differ from those of local perceived motion interactions, providing evidence that flow parsing is a separate process from these local motion interactions. Finally, flow parsing does not depend on a prior percept of self-motion direction and instead directly uses the input retinal motion to construct percepts of scene-relative object motion during self-motion.
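The gain measurement described in the abstract can be made concrete with a short worked equation. This is a minimal sketch with illustrative symbol names, not the thesis's own notation: write the moving object's retinal motion as the sum of a scene-relative object motion component and a self-motion component, and let flow parsing recover an estimate of scene-relative motion by subtracting a scaled copy of the self-motion component:

\[
\dot{r} = \dot{o} + \dot{s}, \qquad \hat{o} = \dot{r} - g\,\dot{s}
\]

Under this reading, a flow parsing gain of \(g = 1\) corresponds to the complete removal required by the scene geometry, so the abstract's finding that the gain never reached that level corresponds to \(g < 1\). In a retinal motion nulling paradigm, \(g\) would be estimated from the retinal motion at which the object is perceived as stationary in the scene (\(\hat{o} = 0\)), giving \(g = \dot{r}_{\mathrm{null}} / \dot{s}\).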
Degree: Doctor of Philosophy
Subject: Motion perception (Vision)
Dept/Program: Psychology
Persistent Identifier: http://hdl.handle.net/10722/196466
HKU Library Item ID: b5177318

DC Field: Value
dc.contributor.author: Niehorster, Diederick Christian
dc.date.accessioned: 2014-04-11T23:14:27Z
dc.date.available: 2014-04-11T23:14:27Z
dc.date.issued: 2013
dc.identifier.citation: Niehorster, D. C. (2013). The perception of object motion during self-motion. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. Retrieved from http://dx.doi.org/10.5353/th_b5177318
dc.identifier.uri: http://hdl.handle.net/10722/196466
dc.description.abstract: (see Abstract above)
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.subject.lcsh: Motion perception (Vision)
dc.title: The perception of object motion during self-motion
dc.type: PG_Thesis
dc.identifier.hkul: b5177318
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Psychology
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.5353/th_b5177318
dc.identifier.mmsid: 991036761849703414
