EMICS

CHI'21 workshop on "Eye Movements as an Interface to Cognitive State"


Thanks everyone for joining the EMICS workshop and making it a success!

Congratulations to Agata Cybulska, Izabela Krejtz, Krzysztof Krejtz, and Andrew T. Duchowski, whose paper "Evaluating Cognitive Effort With Pupillary Activity When Reading: Evidence from Eye Movements of Primary School Children" won the best paper award, which comes with a Quest headset sponsored by Facebook Reality Labs!

Here you can find all speakers’ talks and spotlight presentations. Accepted papers are linked in the schedule.

Big thanks to all our reviewers for timely and high-quality reviews:
Andrew Duchowski, Aria Wang, Arzu Çöltekin, Calden Wloka, Candace Peacock, Daniel Herrera, Emma Stewart, Immo Schuetz, James Hillis, Joshua Newn, Kara Emery, Krzysztof Krejtz, Matthew Tong, Monica Castelhano, Namrata Srivastava, Phil Guan, Scott Murdison, Seha Kim, Xucong Zhang, Zoya Bylinskii

About

Eye movement recording has been used extensively in HCI and offers the possibility of understanding how users perceive and process information. Hardware developments have made eye recording ubiquitously accessible, allowing eye movements to enter common use as a control modality, and recent A.I. developments provide powerful computational means for making predictions about the user. However, the connection between eye movements and cognitive state has been largely under-exploited in HCI: despite the rich literature in psychology, a deeper understanding of how this connection can be used in practice is still required.

This virtual EMICS workshop will provide an opportunity to discuss possible application scenarios and HCI interfaces that infer users' mental state from eye movements. It will bring together researchers across disciplines with the goal of expanding shared knowledge, discussing innovative research directions and methods, and fostering future collaborations around the use of eye movements as an interface to cognitive state.

Accessibility

Message from the CHI 2021 workshop chairs:

Mar 2 (update): Dear potential delegates, please note that the workshop organisers are in discussion with the overall Workshops Chairs (who are in turn discussing with the Accessibility Chairs and the General Chairs) for ACM CHI 2021. The current set of technologies listed should not be a barrier to your participation.

Please apply to attend and we will discuss your accessibility needs with you. As chairs of this particular workshop, we are committed to being inclusive, so we will work with our delegates to ensure participation for all: CHI 2021 is for all. This might mean we drop our plans to use any particular technology if we cannot find a suitable way to make the experience inclusive for all delegates.

Meeting Agenda

This virtual EMICS workshop will take place on Friday May 14, 2021, with two live sessions at two different times so that participants can conveniently join at least one. To facilitate discussion, we will kick off each session with a panel of experts, each giving a 5-minute Q&A related to their pre-recorded talks on various applications of eye movements as an interface to cognitive state.

YouTube playlist of all pre-recorded talks

Session 1

(Friday May 14, 2021 | 6:00-8:00 PDT / 9:00-11:00 EDT / 15:00-17:00 CEST / 22:00-00:00 JST)

Session 2

(Friday May 14, 2021 | 14:00-16:00 PDT / 17:00-19:00 EDT / 23:00-01:00 CEST / 06:00-08:00 JST)

Join us in:

  • Slack group
  • Zotero reference group

Context

Eye tracking data has long been used in academic settings to infer perceptual and cognitive processes during activities such as reading, driving, and medical image examination (Barnes 2008, Hayhoe 2017, Liversedge & Findlay 2000). However, as eye tracking systems become ubiquitous across different interfaces (e.g. desktop, mobile and head-mounted displays), new opportunities arise for applying this knowledge to human-computer interaction. This workshop will explore applications made possible by connecting eye movements, cognitive processing, and cognitive state (Duchowski 2002, Fridman 2018, Hergeth 2016, Poole & Ball 2006).

The cognitive state of a user can reveal not only whether they stay on task, but also what type of thinking (explorative, expert, creative) is currently engaged (Eger 2007). With the advent of A.I. in this field, eye movement recordings can be used to decode, guide, and encourage different types of cognitive processing (Appel 2018, Bylinskii 2017, Duchowski 2018, Henderson 2013, Wang 2019).
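To make the idea concrete, below is a minimal, illustrative Python sketch of the kind of pipeline such work builds on: detecting fixations with a simplified dispersion-based (I-DT-style) heuristic and computing a few summary features that a downstream model could relate to cognitive state. The sample format, thresholds, and feature names are assumptions for illustration, not any particular published method.

```python
import numpy as np

# Assumed sample format: (timestamp_s, x_px, y_px, pupil_mm) per gaze sample.
def extract_features(samples, dispersion_px=35.0, min_fix_dur_s=0.1):
    """Simplified dispersion-based (I-DT-style) fixation detection plus pupil
    statistics; a real pipeline would also handle blinks, noise, and calibration."""
    ts = np.array([s[0] for s in samples])
    xy = np.array([(s[1], s[2]) for s in samples])
    pupil = np.array([s[3] for s in samples])

    fixation_durs, start = [], 0
    for end in range(1, len(samples)):
        window = xy[start:end + 1]
        # Dispersion = horizontal extent + vertical extent of the window.
        if np.ptp(window[:, 0]) + np.ptp(window[:, 1]) > dispersion_px:
            dur = ts[end - 1] - ts[start]
            if dur >= min_fix_dur_s:           # keep plausible fixations only
                fixation_durs.append(dur)
            start = end                        # gaze moved on: open a new window

    total_time = ts[-1] - ts[0]
    return {
        "mean_fix_dur_s": float(np.mean(fixation_durs)) if fixation_durs else 0.0,
        "fixations_per_s": len(fixation_durs) / total_time if total_time else 0.0,
        "mean_pupil_mm": float(pupil.mean()),  # coarse correlate of load
    }
```

Features like these could then be fed to any standard classifier; the open questions discussed at the workshop concern which cognitive states such features can reliably and generalizably predict.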

At the same time, although the use of eye tracking in HCI has looked highly promising for many years, progress has been slow. In their 2003 review of eye tracking in HCI and usability research, Jacob and Karn state, "We see promising research work, but we have not yet seen wide use of these approaches in practice or in the marketplace," and this remains as true today as it was in 2003.

Following our EMICS virtual SIG meeting at CHI '20, the proposed CHI '21 virtual workshop will serve as a space in which researchers from different disciplines (HCI, psychology, A.I., cognitive neuroscience) can interact and strengthen this budding field.

Eye movements can be, and have been, used as an input modality on mobile and head-mounted displays, for instance for text entry or search (Papoutsaki 2017, Sharif 2012). However, the main focus of this workshop is on how eye movement patterns can be used to evaluate user intentions and task difficulty by providing a window into the user's cognitive processing and state. The use of eye movements to decode cognitive states could be extended to adaptive interfaces that use, for instance, eye fixations as feedback to guide attention (Holland 2012, Torralba 2006, Matzen 2017), affect processing (Kiefer 2017, Rayner 2009), and provide aids for learning and memory (Vitak 2012, Bylinskii 2015, Hannula & Ranganath 2009). The development of such tools could have far-reaching implications for our society and launch an innovative new approach to human-computer interaction.
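As a hedged illustration of the adaptive-interface idea above, the sketch below watches a stream of gaze samples and triggers a (stubbed) visual cue when the user dwells too long outside a target region. The ROI layout, dwell threshold, and highlight() stub are invented for this example and stand in for whatever attention-guiding cue a real interface would use.

```python
# Hypothetical region-of-interest (ROI) layout, as pixel rectangles.
ROIS = {"chart": (0, 0, 400, 300), "legend": (400, 0, 600, 300)}

def roi_at(x, y):
    """Return the name of the ROI containing gaze point (x, y), if any."""
    for name, (x0, y0, x1, y1) in ROIS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def highlight(roi_name):
    """Stub for the interface's attention-guiding cue (e.g. an outline flash)."""
    print(f"[UI] highlighting '{roi_name}'")

def guide_attention(gaze_stream, target="legend", dwell_limit_s=3.0):
    """Cue the target ROI when the user dwells elsewhere for too long.

    gaze_stream yields (timestamp_s, x_px, y_px) tuples, e.g. from an
    eye tracker's streaming output."""
    current, dwell_start = None, 0.0
    for t, x, y in gaze_stream:
        roi = roi_at(x, y)
        if roi != current:                     # gaze entered a new region
            current, dwell_start = roi, t
        elif roi not in (None, target) and t - dwell_start > dwell_limit_s:
            highlight(target)
            dwell_start = t                    # avoid re-cueing every sample
```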

Relevance: With the increasing accessibility of eye tracking devices, the growing popularity of mobile applications and head-mounted interfaces, and the development of powerful A.I. algorithms, inferring cognitive states from eye movement recordings has become feasible across many scenarios. These advancements open up the possibility of integrating users' cognitive processing into the design of interactive systems. Many studies in psychology have examined the correspondence between eye movements and cognitive processing; however, this connection has not yet been fully explored or implemented in HCI. It remains unclear how to create meaningful interactions that make full use of the information revealed by eye movements, and a fundamental challenge lies in the generalizability of eye movement patterns across tasks, users, and contexts. With the growing interest in inferring users' cognitive states and processing, we see an increased demand for bringing together the Cognitive Science and HCI communities to share knowledge and explore the full potential of this intersection of research interests and its applications.

Discussion Questions:

  • Which cognitive states can be most reliably and robustly inferred from eye movements in practice?
  • Which applications of eye movements as an interface to cognitive state (EMICS) have already proven successful?
  • What challenges and limitations of current hardware, algorithms, etc. need to be addressed to facilitate future applications of EMICS?
  • How can we best align the interests between academia and industry to solve these challenges?

About the Panelists

Xi Wang is a postdoctoral researcher in the Advanced Interactive Technologies group at ETH Zurich. Her research is broadly concerned with human perception and its applications to computer graphics and human-computer interactions.

Monica Castelhano is an Associate Professor in Psychology at Queen’s University. Dr. Castelhano studies memory and attentional processes in complex visual environments in real and virtual settings using techniques such as VR, eye tracking and electroencephalogram (EEG).

Enkelejda Kasneci is a Professor of Computer Science at the University of Tübingen, and her main research interests are applied machine learning, eye-tracking technology and applications, and driver assistance systems.

Gregory Zelinsky is a Professor of Cognitive Science at Stony Brook University. His research attempts to integrate cognitive, computational, and neuroimaging techniques to better understand a broad range of visual cognitive behaviors, including visual search, object representation, working memory, and scene perception.

Oleg Komogortsev is a Professor at Texas State University. He conducts research in eye tracking with a focus on cyber security (biometrics), health assessment, human-computer interaction, usability, and bioengineering.

Bonita Sharif is an Assistant Professor in the Department of Computer Science and Engineering at University of Nebraska-Lincoln, Lincoln, Nebraska USA. Her research interests are in eye-tracking related to software engineering, emotional awareness, software traceability, and software visualization to support maintenance of large systems.

Andrew Duchowski is a professor of Computer Science at Clemson University. He received his baccalaureate (1990) from Simon Fraser University, Burnaby, Canada, and doctorate (1997) from Texas A&M University, College Station, TX, both in Computer Science. His research and teaching interests include visual attention and perception, eye tracking, computer vision, and computer graphics.

Laura Matzen is a Principal Member of the Technical Staff in the Cognitive Science and Systems department at Sandia National Laboratories. Her primary research interests lie in using cognitive neuroscience methods to understand how humans process and remember information while performing complex reasoning tasks.

Alexandra Papoutsaki is an Assistant Professor of Computer Science at Pomona College. Her main research focuses on eye tracking and gaze sharing in the context of remote collaboration. She received her PhD from Brown University where she developed a new approach for webcam eye tracking in the browser.

Tilman Dingler is a Research Fellow and Associated Lecturer in the School of Computing and Information Systems at The University of Melbourne. His research focuses on systems that sense, model, and adapt to users’ cognitive states to enable interfaces to support their users’ information processing capabilities.

Organizers

Xi Wang is a postdoctoral researcher in the Advanced Interactive Technologies group at ETH Zurich. She graduated with distinction from TU Berlin and has an M.Eng. degree from Shanghai Jiao Tong University and an M.Sc. degree in computer science from TU Berlin. Her research is broadly concerned with human perception and its applications to computer graphics and human-computer interactions. She has worked on projects studying vergence of eye movements in 3D space, measuring where humans look on real three-dimensional stimuli, and using a mental imagery paradigm to retrieve the mental imagery from eye movements while looking at nothing.

Zoya Bylinskii is a Research Scientist at Adobe Inc. She received a PhD in Electrical Engineering and Computer Science from MIT in September 2018 and an Hon. B.Sc. in Computer Science and Statistics from the University of Toronto in 2012. Zoya is a 2016 Adobe Research Fellow, a 2014-2016 NSERC Postgraduate Scholar, a 2013 Julie Payette Research Scholar, and a 2011 Anita Borg Scholar. Zoya works at the interface of human vision, computer vision, and human-computer interaction: building computational models of people’s memory and attention, and applying the findings to graphic designs and data visualizations.

Monica Castelhano is an Associate Professor in Psychology at Queen's University. Dr. Castelhano studies memory and attentional processes in complex visual environments in real and virtual settings using techniques such as VR, eye tracking and electroencephalogram (EEG). Dr. Castelhano has received numerous awards, including the Early Researcher Award from OSF, as well as grants from national and international funding agencies. She is currently an Associate Editor at the Quarterly Journal of Experimental Psychology.

James Hillis is a Research Scientist at Facebook Reality Labs. He completed a Vision Science PhD in sensory cue combination at UC Berkeley in 2002 and post-doctoral research in color vision at the University of Pennsylvania before joining the faculty at the University of Glasgow in 2006. At the University of Glasgow he headed a lab studying socio-economic decision making and worked on an ESRC funded project on the cognitive neuroscience of social decision making. In 2013, he became a research scientist in the optical and display system division of 3M and in 2016 he joined Oculus Research which then became part of Facebook Reality Labs. James’ research focuses on the development of computational models of interactions between humans and their social and physical environments.

Andrew Duchowski is a professor of Computer Science at Clemson University. He received his baccalaureate (1990) from Simon Fraser University, Burnaby, Canada, and doctorate (1997) from Texas A&M University, College Station, TX, both in Computer Science. His research and teaching interests include visual attention and perception, eye tracking, computer vision, and computer graphics. He is a noted research leader in the field of eye tracking, having produced a corpus of papers and a monograph related to eye tracking research, and has delivered courses and seminars on the subject at international conferences. He maintains Clemson’s eye tracking laboratory, and teaches a regular course on eye tracking methodology attracting students from a variety of disciplines across campus.

Call for papers

The EMICS workshop provides a venue for exploring how the connection between eye movements and cognitive processing can be put to use in applications. We invite papers on topics related to the use of eye movement measures (fixations, saccades, blinks, pupil diameter) for the analysis or prediction of cognitive processing. This could include, but is not limited to, a viewer's cognitive load or affective state, level of expertise, individual traits, task type, and task difficulty, as well as the relationship of eye movements to user interfaces broadly construed (driving, reading, problem solving, navigating, data analysis, coding, creating content, etc.).

We invite submissions of a maximum of 4 pages, excluding references. The length of the submission should be commensurate with the contribution. Submissions will be sent out for review to two experts representing the Cognitive Science and HCI communities, who will rate the quality of the submission as well as its fit to the workshop topic. Submissions should follow the ACM Conference Proceedings format. Information about where and when to submit papers will be posted on https://emics-2021.github.io/EMICS/. At least one author of each accepted paper must attend EMICS, and all participants must register for both the workshop and at least one day of the conference.

Accepted papers will be presented as spotlight presentations at the virtual EMICS workshop. Based on the reviewers' recommendations, selected papers will receive best paper awards funded by the workshop sponsors. Nine invited speakers from both Cognitive Psychology and HCI will present their work, featuring current developments in the relevant fields.

Important dates:

  • Submission deadline: February 18, 2021 (extended from February 11, 2021)
  • Notification of acceptance: March 3, 2021 (extended from February 28, 2021)

How to submit

  • Online Submission System: CMT
  • Submission Format: ACM Master Article Submission Templates (single column; no more than 4 pages, excluding references). Submissions are anonymous and should not include any author names, affiliations, or contact information in the PDF.

If you have any questions, feel free to reach out to us at emicschi2021@gmail.com.

How to join

Registration for CHI is now open; more details can be found at https://chi2021.acm.org/information/4702.html. Workshop registration is $30, and all workshop attendees are required to register for at least one day of the conference.

The early registration deadline is March 16; please be aware that registration fees increase significantly after the deadline. Please register through this page and email us at emicschi2021@gmail.com for the workshop registration code. Looking forward to seeing you all at the workshop!

Please feel free to add relevant works on EMICS to the Zotero reference group.

E. Abdulin, O. Komogortsev. User eye fatigue detection via eye movement behavior. CHI EA (2015).

J. Jacobs, X. Wang, M. Alexa. Keep it simple: Depth-based dynamic adjustment of rendering for head-mounted displays decreases visual comfort. TAP (2019).

N. Abid, J. Maletic, B. Sharif. Using developer eye movements to externalize the mental model used in code summarization tasks. ETRA (2019).

T. Appel, C. Scharinger, P. Gerjets, E. Kasneci. Cross-subject workload classification using pupil-related measures. ETRA (2018).

D. Ballard, M. Hayhoe. Modelling the role of task in the control of gaze. Visual Cognition (2009).

S. Brennan, X. Chen, C. A Dickinson, M. Neider, G. Zelinsky. Coordinating cognition: The costs and benefits of shared gaze during collaborative search. Cognition (2008).

Z. Bylinskii, M. Borkin, N. Kim, H. Pfister, A. Oliva. Eye fixation metrics for large scale evaluation and comparison of information visualizations. Eye Tracking and Visualization (2017).

Z. Bylinskii, N. Kim, P. O’Donovan, S. Alsheikh, S. Madan, H. Pfister, F. Durand, B. Russell, A. Hertzmann. Learning visual importance for graphic designs and data visualizations. UIST (2017).

M. Castelhano, M. Mack, J. Henderson. Viewing task influences eye movement control during active scene perception. Journal of Vision (2009).

N. Castner, E. Kasneci, T. Kübler, K. Scheiter, J. Richter, T. Eder, F. Hüttig, C. Keutel. Scanpath comparison in medical image reading skills of dental students: distinguishing stages of expertise development. ETRA (2018).

A. Duchowski, K. Krejtz, I. Krejtz, C. Biele, A. Niedzielska, P. Kiefer, M. Raubal, I. Giannopoulos. The index of pupillary activity: measuring cognitive load vis-à-vis task difficulty with pupil oscillation. CHI (2018).

C. Holland, O. Komogortsev, D. Tamir. Identifying usability issues via algorithmic detection of excessive visual search. CHI (2012).

R. Gergely, A. Duchowski, M. Pohl. Designing online tests for a virtual learning environment - evaluation of visual behavior between tasks. International Conference on Human Behavior in Design (2014).

M. Hayhoe. Advances in relating eye movements and cognition. Infancy (2004).

M. Hayhoe. Vision and action. Annual Review of Vision Science (2017).

E. Kasneci, G. Kasneci, T. Kübler, W. Rosenstiel. Online recognition of fixations, saccades, and smooth pursuits for automated analysis of traffic hazard perception. Artificial Neural Networks (2015).

P. Kiefer, I. Giannopoulos, M. Raubal, A. Duchowski. Eye tracking for spatial research: Cognition, computation, challenges. Spatial Cognition & Computation (2017).

D. Kit, L. Katz, B. Sullivan, K. Snyder, D. Ballard, M. Hayhoe. Eye movements, visual search and scene memory, in an immersive virtual environment. PLoS One (2015).

G. Kütt, K. Lee, E. Hardacre, A. Papoutsaki. Eye-write: Gaze sharing for collaborative writing. CHI (2019).

M. Land, M. Hayhoe. In what ways do eye movements contribute to everyday activities? Vision Research (2001).

D. Lohr, S.-H. Berndt, O. Komogortsev. An implementation of eye movement-driven biometrics in virtual reality. ETRA (2018).

L. Matzen, M. Haass, K. Divis, M. Stites. Patterns of attention: How data visualizations are read. International Conference on Augmented Cognition (2017).

A. Papoutsaki, J. Laskey, J. Huang. Searchgazer: Webcam eye tracking for remote studies of web search. CHIIR (2017).

K. Rayner, M. Castelhano, J. Yang. Eye movements when looking at unusual/weird scenes: Are there cultural differences? Journal of Experimental Psychology (2009).

B. Sharif, M. Falcone, J. Maletic. An eye-tracking study on the role of scan time in finding source code defects. ETRA (2012).

A. Torralba, A. Oliva, M. Castelhano, J. Henderson. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychological Review (2006).

O. Mercier, Y. Sulai, K. Mackenzie, M. Zannoli, J. Hillis, D. Nowrouzezahrai, D. Lanman. Fast gaze-contingent optimal decompositions for multifocal displays. TOG (2017).

S. Vitak, J. Ingram, A. Duchowski, S. Ellis, A. Gramopadhye. Gaze-augmented think-aloud as an aid to learning. CHI (2012).

K. Yun, Y. Peng, D. Samaras, G. Zelinsky, T. Berg. Exploring the role of gaze behavior and object detection in scene understanding. Frontiers in Psychology (2013).

X. Wang, A. Ley, S. Koch, D. Lindlbauer, J. Hays, K. Holmqvist, M. Alexa. The mental image revealed by gaze tracking. CHI (2019).

G. Zelinsky, Y. Peng, D. Samaras. Eye can read your mind: Decoding gaze fixations to reveal categorical search targets. Journal of Vision (2013).

Partners