Article: Scanpath modeling and classification with Hidden Markov Models

Title: Scanpath modeling and classification with Hidden Markov Models
Authors: Coutrot, A; Hsiao, JHW; Chan, AB
Keywords: Classification; Eye movements; Hidden Markov models; Machine-learning; Scanpath; Toolbox
Issue Date: 2018
Publisher: Springer Verlag, co-published with Psychonomic Society. The Journal's web site is located at http://brm.psychonomic-journals.org/
Citation: Behavior Research Methods, 2018, v. 50 n. 1, p. 362-379
Abstract: How people look at visual information reveals fundamental information about them: their interests and their states of mind. Previous studies showed that a scanpath, i.e., the sequence of eye movements made by an observer exploring a visual stimulus, can be used to infer observer-related (e.g., the task at hand) and stimulus-related (e.g., image semantic category) information. However, eye movements are complex signals, and many of these studies rely on limited gaze descriptors and bespoke datasets. Here, we provide a turnkey method for scanpath modeling and classification. This method relies on variational hidden Markov models (HMMs) and discriminant analysis (DA). HMMs encapsulate the dynamic and individualistic dimensions of gaze behavior, allowing DA to capture systematic patterns diagnostic of a given class of observers and/or stimuli. We test our approach on two very different datasets. First, we use fixations recorded while viewing 800 static natural scene images, and infer an observer-related characteristic: the task at hand. We achieve an average correct classification rate of 55.9% (chance = 33%). We show that correct classification rates positively correlate with the number of salient regions present in the stimuli. Second, we use eye positions recorded while viewing 15 conversational videos, and infer a stimulus-related characteristic: the presence or absence of the original soundtrack. We achieve an average correct classification rate of 81.2% (chance = 50%). HMMs make it possible to integrate bottom-up, top-down, and oculomotor influences into a single model of gaze behavior. This synergistic approach between behavior and machine learning will open new avenues for simple quantification of gazing behavior. We release SMAC with HMM, a Matlab toolbox freely available to the community under an open-source license agreement.
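The approach summarized above fits HMMs to scanpaths and uses the fitted models to separate classes of observers or stimuli. Below is a minimal sketch of the likelihood-based variant of this idea, assuming fixations have already been discretized into region-of-interest (ROI) indices: train one HMM per class, then label a new scanpath with the class whose model scores it highest. Note this is an illustration, not the paper's actual pipeline (which fits variational Gaussian-emission HMMs to raw fixation coordinates and feeds the HMM parameters to discriminant analysis); the two-state models, group names, and all numbers are made up.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    obs: ROI index per fixation; pi: initial state probs, shape (K,);
    A: state transition matrix (K, K); B: emission matrix (K, n_rois)."""
    alpha = pi * B[:, obs[0]]          # joint prob of state and first fixation
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()               # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

# Hypothetical 2-state HMMs for two observer groups viewing 3 ROIs.
pi = np.array([0.6, 0.4])
A_a = np.array([[0.9, 0.1], [0.2, 0.8]])  # "sticky" gaze: long dwells per ROI
A_b = np.array([[0.5, 0.5], [0.5, 0.5]])  # frequent switching between states
B = np.array([[0.8, 0.1, 0.1],            # state 0 mostly emits ROI 0
              [0.1, 0.1, 0.8]])           # state 1 mostly emits ROI 2

scanpath = [0, 0, 0, 1, 2, 2, 2]          # ROI indices of successive fixations
scores = {g: forward_loglik(scanpath, pi, A, B)
          for g, A in {"group_a": A_a, "group_b": A_b}.items()}
label = max(scores, key=scores.get)       # this dwell-heavy scanpath fits group_a
```

In practice the per-class transition and emission parameters would be estimated from training scanpaths (e.g., by Baum-Welch or, as in the paper, variational inference) rather than written down by hand.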
Persistent Identifier: http://hdl.handle.net/10722/244722
ISSN: 1554-351X
2021 Impact Factor: 5.953
2020 SCImago Journal Rankings: 3.042
ISI Accession Number ID: WOS:000424922400024

 

DC Field / Value
dc.contributor.author: Coutrot, A
dc.contributor.author: Hsiao, JHW
dc.contributor.author: Chan, AB
dc.date.accessioned: 2017-09-18T01:57:50Z
dc.date.available: 2017-09-18T01:57:50Z
dc.date.issued: 2018
dc.identifier.citation: Behavior Research Methods, 2018, v. 50 n. 1, p. 362-379
dc.identifier.issn: 1554-351X
dc.identifier.uri: http://hdl.handle.net/10722/244722
dc.language: eng
dc.publisher: Springer Verlag, co-published with Psychonomic Society. The Journal's web site is located at http://brm.psychonomic-journals.org/
dc.relation.ispartof: Behavior Research Methods
dc.rights: The final publication is available at Springer via http://dx.doi.org/10.3758/s13428-017-0876-8
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: Classification
dc.subject: Eye movements
dc.subject: Hidden Markov models
dc.subject: Machine-learning
dc.subject: Scanpath
dc.subject: Toolbox
dc.title: Scanpath modeling and classification with Hidden Markov Models
dc.type: Article
dc.identifier.email: Hsiao, JHW: jhsiao@hku.hk
dc.identifier.authority: Hsiao, JHW=rp00632
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.3758/s13428-017-0876-8
dc.identifier.scopus: eid_2-s2.0-85017464983
dc.identifier.hkuros: 276069
dc.identifier.volume: 50
dc.identifier.issue: 1
dc.identifier.spage: 362
dc.identifier.epage: 379
dc.identifier.isi: WOS:000424922400024
dc.publisher.place: United States
dc.identifier.issnl: 1554-351X
