Human-Computer Interaction
HCI, MTS, and PsyErgo @ CHI 2022
The HCI Group presented four papers at CHI 2022 in New Orleans, USA.
AIL AT WORK
The project AIL AT WORK celebrated its official kick-off with the think tank Digitale Arbeitsgesellschaft of the Federal Ministry of Labour and Social Affairs.
HCI Group @ IEEE VR 2022
The HCI Group presented four contributions at IEEE VR 2022.
Anne-Gwenn Bosser on Computational Narratives for Compelling Interactive Experiences
Anne-Gwenn Bosser will give a talk at the Computer Science colloquium on the topic of 'Computational Narratives for Compelling Interactive Experiences'. Join the talk on 9 March 2022 at 10:00 (s.t.) in the Zuse-Hörsaal, Informatikgebäude, Am Hubland.
Winter EXPO 2021/22
The Winter EXPO 2021/22 for MCS/HCI was a great success! We thank all the contributors and guests for participating.

Open Positions

Student Workers for the Hyperbolic Space Lab
We have an open position for a motivated student worker in the Hyperbolic Space Lab for the WueDive Project. This project explores interaction with the basic properties of curved spaces in VR applications.
Student Workers for the HiAvA Project
We have open positions for motivated student workers in the HiAvA Project. This project explores the possibilities of XR and AI in Social Virtual Reality.
Student Workers for the PriMa Avatars Project
We are looking for student workers to help in the Privacy Matters Project!
Student Worker for the HCI side of the VIA-VR Project
We have an open position for a motivated student who is willing to contribute to the VIA-VR Project.
Student Worker for the ViTraS Project Wanted!
We have an open position for a motivated student worker in the ViTraS Project. This project explores the possibilities of VR- and AR-based therapy for body perception disorders.
Open Research and PhD Position (TV-L E13, 100%)
The HCI Group has an open position for a research assistant (and PhD candidate) in the general area of interactive systems and related research projects, e.g., VR, AR, avatars, or multimodal interfaces.
Student Worker for the ILAST project
We have an open position for a motivated student worker in the ILAST VR project!
Student Workers for the CoTeach Project
We are looking for student workers to help develop and investigate fully immersive learning environments.
Student Worker for the XRoads Project
We have an open position for a motivated student who is willing to contribute to the XRoads Project.


Recent Publications

E. Ganal, M. Heimbrock, P. Schaper, B. Lugrin, Don’t Touch This! - Investigating the Potential of Visualizing Touched Surfaces on the Consideration of Behavior Change, In N. Baghaei, J. Vassileva, R. Ali, K. Oyibo (Eds.), Persuasive Technology 2022. Springer International Publishing, 2022.
[BibSonomy]
@inproceedings{ganal2022touch,
  author    = {E. Ganal and M. Heimbrock and P. Schaper and B. Lugrin},
  year      = {2022},
  booktitle = {Persuasive Technology 2022},
  editor    = {N. Baghaei and J. Vassileva and R. Ali and K. Oyibo},
  publisher = {Springer International Publishing},
  title     = {Don’t Touch This! - Investigating the Potential of Visualizing Touched Surfaces on the Consideration of Behavior Change}
}
Philipp Krop, Samantha Straka, Melanie Ullrich, Maximilian Ertl, Marc Erich Latoschik, IT-Supported Request Management for Clinical Radiology: Contextual Design and Remote Prototype Testing, In CHI Conference on Human Factors in Computing Systems Extended Abstracts (45), pp. 1-8. New York, NY, USA: Association for Computing Machinery, 2022.
[Download] [BibSonomy] [Doi]
@inproceedings{krop2022a,
  author    = {Philipp Krop and Samantha Straka and Melanie Ullrich and Maximilian Ertl and Marc Erich Latoschik},
  number    = {45},
  url       = {https://dl.acm.org/doi/10.1145/3491101.3503571},
  year      = {2022},
  booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  series    = {CHI EA '22},
  pages     = {1--8},
  title     = {IT-Supported Request Management for Clinical Radiology: Contextual Design and Remote Prototype Testing}
}
Abstract: Management of radiology requests in larger clinical contexts is characterized by a complex and distributed workflow. In our partner hospital, representing many similar clinics, these processes often still rely on exchanging physical papers and forms, making patient or case data challenging to access. This often leads to phone calls with long waiting queues, which are time-inefficient and result in frequent interrupts. We report on a user-centered design approach based on Rapid Contextual Design with an additional focus group to optimize and iteratively develop a new workflow. Participants found our prototypes fast and intuitive, the design clean and consistent, relevant information easy to access, and the request process fast and easy. Due to the COVID pandemic, we switched to remote prototype testing, which yielded equally good feedback and increased the participation rate. In the end, we propose best practices for remote prototype testing in hospitals with complex and distributed workflows.
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik, A Case Study on the Rapid Development of Natural and Synergistic Multimodal Interfaces for XR Use-Cases, In CHI Conference on Human Factors in Computing Systems Extended Abstracts. New York, NY, USA: Association for Computing Machinery, 2022.
[Download] [BibSonomy] [Doi]
@inproceedings{10.1145/3491101.3503552,
  author    = {Chris Zimmerer and Martin Fischbach and Marc Erich Latoschik},
  url       = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-chi-case-study-mmi-zimmerer.pdf},
  year      = {2022},
  booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  series    = {CHI EA '22},
  title     = {A Case Study on the Rapid Development of Natural and Synergistic Multimodal Interfaces for XR Use-Cases}
}
Abstract: Multimodal Interfaces (MMIs) supporting the synergistic use of natural modalities like speech and gesture have been conceived as promising for spatial or 3D interactions, e.g., in Virtual, Augmented, and Mixed Reality (XR for short). Yet, the currently prevailing user interfaces are unimodal. Commercially available software platforms like the Unity or Unreal game engines simplify the complexity of developing XR applications through appropriate tool support. They provide ready-to-use device integration, e.g., for 3D controllers or motion tracking, and corresponding interaction techniques such as menus, (3D) point-and-click, or even simple symbolic gestures to rapidly develop unimodal interfaces. Comparable tool support is still missing for multimodal solutions in this and similar areas. We believe that this hinders user-centered research based on rapid prototyping of MMIs, the identification and formulation of practical design guidelines, the development of killer applications highlighting the power of MMIs, and ultimately a widespread adoption of MMIs. This article investigates potential reasons for the ongoing uncommonness of MMIs. Our case study illustrates and analyzes lessons learned during the development and application of a toolchain that supports rapid development of natural and synergistic MMIs for XR use-cases. We analyze the toolchain in terms of developer usability, development time, and MMI customization. This analysis is based on the knowledge gained in years of research and academic education. Specifically, it reflects on the development of appropriate MMI tools and their application in various demo use-cases, in user-centered research, and in the lab work of a mandatory MMI course of an HCI master’s program. The derived insights highlight successful choices made as well as potential areas for improvement.
Chris Zimmerer, Philipp Krop, Martin Fischbach, Marc Erich Latoschik, Reducing the Cognitive Load of Playing a Digital Tabletop Game with a Multimodal Interface, In CHI Conference on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery, 2022.
[Download] [BibSonomy] [Doi]
@inproceedings{10.1145/3491102.3502062,
  author    = {Chris Zimmerer and Philipp Krop and Martin Fischbach and Marc Erich Latoschik},
  url       = {https://dl.acm.org/doi/10.1145/3491102.3502062},
  year      = {2022},
  booktitle = {CHI Conference on Human Factors in Computing Systems},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  series    = {CHI '22},
  title     = {Reducing the Cognitive Load of Playing a Digital Tabletop Game with a Multimodal Interface}
}
Abstract: Multimodal Interfaces (MMIs) combining speech and spatial input have the potential to elicit minimal cognitive load. Low cognitive load increases effectiveness as well as user satisfaction and is regarded as an important aspect of intuitive use. While this potential has been extensively theorized in the research community, experiments that provide supporting observations based on functional interfaces are still scarce. In particular, there is a lack of studies comparing the commonly used Unimodal Interfaces (UMIs) with theoretically superior synergistic MMI alternatives. Yet, these studies are an essential prerequisite for generalizing results, developing practice-oriented guidelines, and ultimately exploiting the potential of MMIs in a broader range of applications. This work contributes a novel observation towards the resolution of this shortcoming in the context of the following combination of applied interaction techniques, tasks, application domain, and technology: We present a comprehensive evaluation of a synergistic speech & touch MMI and a touch-only menu-based UMI (interaction techniques) for selection and system control tasks in a digital tabletop game (application domain) on an interactive surface (technology). Cognitive load, user experience, and intuitive use are evaluated, with the former being assessed by means of the dual-task paradigm. Our experiment shows that the implemented MMI causes significantly less cognitive load and is perceived significantly more usable and intuitive than the UMI. Based on our results, we derive recommendations for the interface design of digital tabletop games on interactive surfaces. Further, we argue that our results and design recommendations are suitable to be generalized to other application domains on interactive surfaces for selection and system control tasks.
Yann Glémarec, Jean-Luc Lugrin, Anne-Gwenn Bosser, Cedric Buche, Marc Erich Latoschik, Controlling the STAGE: A High-Level Control System for Virtual Audiences In Virtual Reality, In Frontiers in Virtual Reality – Virtual Reality and Human Behaviour. 2022.
[Download] [BibSonomy]
@article{glemarec2022controlling,
  author  = {Yann Glémarec and Jean-Luc Lugrin and Anne-Gwenn Bosser and Cedric Buche and Marc Erich Latoschik},
  journal = {Frontiers in Virtual Reality – Virtual Reality and Human Behaviour},
  url     = {https://www.frontiersin.org/articles/10.3389/frvir.2022.876433/abstract},
  year    = {2022},
  title   = {Controlling the STAGE: A High-Level Control System for Virtual Audiences In Virtual Reality}
}
Abstract: This article presents a novel method for controlling a virtual audience system (VAS) in Virtual Reality (VR) applications, called STAGE, which was originally designed for supervised public speaking training in university seminars dedicated to the preparation and delivery of scientific talks. We are interested in creating pedagogical narratives: narratives encompass affective phenomena, and rather than organizing events changing the course of a training scenario, pedagogical plans using our system focus on organizing the affects it arouses for the trainees. Efficiently controlling a virtual audience towards a specific training objective while evaluating the speaker's performance presents a challenge for a seminar instructor: controlling the virtual audience, evaluating the speaker's performance, and adjusting the scenario so it quickly reacts to the user's behaviors and interactions imposes a high level of cognitive and physical demands. It is indeed a critical limitation of a number of existing systems that they rely on a Wizard of Oz approach, where the tutor drives the audience in reaction to the user's performance. We address this problem by integrating a high-level control component for tutors with a VAS, which allows using predefined audience behavior rules, defining custom ones, as well as intervening during run-time for finer control of the unfolding of the pedagogical plan. At its core, this component offers a tool to program, select, modify and monitor interactive training narratives using a high-level representation.
STAGE offers the following features: i) a high-level API to program pedagogical narratives focusing on a specific public speaking situation and training objectives, ii) an interactive visualization interface, iii) computation and visualization of user metrics, iv) a semi-autonomous virtual audience composed of virtual spectators with automatic reactions to the speaker and surrounding spectators while following the pedagogical plan, and v) the possibility for the instructor to embody a virtual spectator to ask questions or guide the speaker from within the Virtual Environment. We present the design and implementation of the tutoring system and its integration in STAGE, and discuss its reception by end-users.
Christian Schell, Fabian Sieper, Marc E. Latoschik, Who is Alyx? (Dataset). 2022.
[Download] [BibSonomy] [Doi]
@dataset{who_is_alyx_2022,
  author = {Christian Schell and Fabian Sieper and Marc E. Latoschik},
  url    = {https://github.com/cschell/who-is-alyx},
  year   = {2022},
  title  = {Who is Alyx?}
}
Abstract: This dataset contains over 57 hours of motion and eye-tracking data from 37 players of the virtual reality game "Half-Life: Alyx". Each player played the game on two separate days for about 45 minutes using an HTC Vive Pro.
Rebecca Hein, Jeanine Steinbock, Maria Eisenmann, Carolin Wienrich, Marc Erich Latoschik, Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen, In MedienPädagogik Zeitschrift für Theorie und Praxis der Medienbildung, Vol. 47, pp. 246-266. 2022.
[Download] [BibSonomy] [Doi]
@article{hein2021medienpaed,
  author  = {Rebecca Hein and Jeanine Steinbock and Maria Eisenmann and Carolin Wienrich and Marc Erich Latoschik},
  journal = {MedienPädagogik Zeitschrift für Theorie und Praxis der Medienbildung},
  url     = {https://www.researchgate.net/publication/359903684_Virtual_Reality_im_modernen_Englischunterricht_und_das_Potenzial_fur_Inter-_und_Transkulturelles_Lernen/references},
  year    = {2022},
  pages   = {246--266},
  title   = {Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen}
}
Sophia C. Steinhaeusser, Birgit Lugrin, Effects of Colored LEDs in Robotic Storytelling on Storytelling Experience and Robot Perception, In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI2022). IEEE Press, 2022.
[BibSonomy]
@inproceedings{steinhaeusser2022effects,
  author    = {Sophia C. Steinhaeusser and Birgit Lugrin},
  year      = {2022},
  booktitle = {Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI2022)},
  publisher = {IEEE Press},
  title     = {Effects of Colored LEDs in Robotic Storytelling on Storytelling Experience and Robot Perception}
}