DAY 1 (11.04.)
Welcome: 12:30 – 13:00
Session 1: 13:00 – 14:30
Session chair: Johannes Schöning
Usability of Gamified Knowledge Learning in VR and Desktop-3D.
Sebastian Oberdörfer, David Heidrich, Marc Erich Latoschik
Affine Transformations (ATs) often escape an intuitive approach due to their high complexity. Therefore, we developed GEtiT, which directly encodes ATs in its game mechanics and scales the knowledge’s level of abstraction. This results in an intuitive application and audiovisual presentation of ATs and hence supports knowledge learning. We also developed a specific Virtual Reality (VR) version to explore the effects of immersive VR on learning outcomes. This paper presents our approach of directly encoding abstract knowledge in game mechanics, the conceptual design of GEtiT, and its technical implementation. Both versions are compared with regard to their usability in a user study. The results show that both GEtiT versions induce a high degree of flow and elicit good intuitive use. They validate the effectiveness of the design and the resulting knowledge application requirements. Participants favored GEtiT VR, suggesting a potentially higher learning quality when using VR.
A Design Space for Gaze Interaction on Head-Mounted Displays.
Teresa Hirzle, Jan Gugenheimer, Florian Geiselhart, Andreas Bulling, Enrico Rukzio
Augmented and virtual reality (AR/VR) has entered the mass market and, with it, eye tracking will soon become a core technology for next-generation head-mounted displays (HMDs). In contrast to existing gaze interfaces, the 3D nature of AR and VR requires estimating a user’s gaze in 3D. While first applications, such as foveated rendering, hint at the compelling potential of combining HMDs and gaze, a systematic analysis is missing. To fill this gap, we present the first design space for gaze interaction on HMDs. Our design space covers human depth perception and technical requirements in two dimensions, aiming to identify challenges and opportunities for interaction design. As such, it provides a comprehensive overview and serves as an important guideline for researchers and practitioners working on gaze interaction on HMDs. We further demonstrate how our design space is used in practice by presenting two interactive applications: EyeHealth and XRay-Vision.
HappyPermi: Presenting Critical Data Flows in Mobile Application to Raise User Security Awareness [LBW].
Mehrdad Bahrini, Nina Wenig, Marcel Meissner, Karsten Sohr, Rainer Malaka
Malicious Android applications can obtain users’ private data and silently send it to a server. Android permissions are currently not sufficient to ensure the security of users’ sensitive information. A sufficient permission model must account for the target of the outgoing data flow. Moreover, permission dialogues often contain relevant information, but most users do not understand the implications, or the visualization fails to guide their attention to it. It is important to empower users by providing applications that show them who can access their private data and who might send this data to the outside. To raise user awareness regarding Android permissions, we developed HappyPermi, an application that visualizes which user information is accessible through the granted permissions. Our evaluation (n=20) shows that most users are not aware of the sensitive data that their installed applications have access to. Our results also indicate how users feel about access to their sensitive data once they are aware of its outgoing destinations.
Coffee Break (14:30 – 15:00)
Session 2: 15:00 – 16:30
Session chair: Florian Alt
The Role of Physical Props in VR Climbing Environments.
Peter Schulz, Dmitry Alexandrovsky, Felix Putze, Rainer Malaka and Johannes Schöning
Dealing with fear of falling is a challenge in sport climbing. Virtual reality (VR) research suggests that physical and reality-based interaction increases presence in VR. In this paper, we present a study that investigates the influence of physical props on presence, stress, and anxiety in a VR climbing environment involving whole-body movement. To help climbers overcome fear of falling, we compared three conditions: climbing in reality at 10 m height, physical climbing in VR (with props attached to the climbing wall), and virtual climbing in VR using game controllers. Subjective reports and biosignals show that climbing with props in VR increases anxiety and the sense of realism for sport climbing in VR. This suggests that VR combined with physical props is an effective simulation setup for inducing the sense of height.
Assessing the Accuracy of Pointing Techniques with Curved Trajectories and Orientation Indication for Virtual Reality Locomotion.
Markus Funk, Florian Müller, Marco Fendrich, Megan Shene, Moritz Kolvenbach, Niclas Dobbertin, Sebastian Günther, Max Mühlhäuser
Room-scale Virtual Reality (VR) systems have arrived in users’ homes, where tracked environments are set up in limited physical spaces. As most Virtual Environments (VEs) are larger than the tracked physical space, locomotion techniques are used to navigate in VEs. In recent VR games, point&teleport is the most popular locomotion technique. However, it only lets users select the position of the teleportation, not the orientation they are facing after the teleport. As a result, users have to manually correct their orientation after teleporting and can get entangled in the cable of the headset. In this paper, we introduce and evaluate three point&teleport techniques that enable users to specify the target orientation while teleporting. The results show that, although the three techniques with orientation indication increase the average teleportation time, they reduce the need to correct the orientation after teleportation.
Around the (Virtual) World – Infinite Walking in Virtual Reality Using Electrical Muscle Stimulation.
Jonas Auda, Max Pascher, Stefan Schneegass
Does It Feel Real? Using Tangibles with Different Fidelities to Build and Explore Scenes in Virtual Reality.
Thomas Muender, Anke V. Reinschluessel, Sean Drewes, Dirk Wenig, Tanja Döring, Rainer Malaka
Professionals in domains like film, theater, or architecture often rely on physical models to visualize spaces. With virtual reality (VR) new tools are available providing immersive experiences with correct perceptions of depth and scale. However, these lack the tangibility of physical models. Using tangible objects in VR can close this gap but creates the challenges of producing suitable objects and interacting with them with only the virtual objects visible. This work addresses these challenges by evaluating tangibles with three haptic fidelities: equal disc-shaped tangibles for all virtual objects, Lego-built tangibles, and 3D-printed tangibles resembling the virtual shapes. We present results from a comparative study on immersion, performance, and intuitive interaction and interviews with domain experts. The results show that 3D-printed objects perform best, but Lego offers a good trade-off between fast creation of tangibles and sufficient fidelity. The experts rate our approach as useful and would use all three versions.
Coffee Break (16:30 – 17:00)
Session 3: 17:00 – 18:30
Session chair: Marc Erich Latoschik
Enhancing Texture Perception in Virtual Reality Using 3D-Printed Hair Structures.
Donald Degraen, André Zenner, Antonio Krüger
Experiencing materials in virtual reality (VR) is enhanced by combining visual and haptic feedback. While VR easily allows changes to visual appearances, modifying haptic impressions remains challenging. Existing passive haptic techniques require access to a large set of tangible proxies. To reduce the number of physical representations, we look towards fabrication to create more versatile counterparts. In a user study, 3D-printed hairs with length varying in steps of 2.5 mm were used to influence the feeling of roughness and hardness. By overlaying fabricated hair with visual textures, the resolution of the user’s haptic perception increased. As changing haptic sensations are able to elicit perceptual switches, our approach can extend a limited set of textures to a much broader set of material impressions. Our results give insights into the effectiveness of 3D-printed hair for enhancing texture perception in VR.
Multi-Modal Approaches for Post-Editing Machine Translation.
Nico Herbig, Santanu Pal, Josef van Genabith, Antonio Krüger
Current advances in machine translation increase the need for translators to switch from traditional translation to post-editing (PE) of machine-translated text, a process that saves time and improves quality. This affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals. The results of our elicitation study with professional translators indicate that a combination of pen, touch, and speech could well support common PE tasks and received high subjective ratings from our participants. Therefore, we argue that future translation environment research should focus more strongly on these modalities in addition to mouse- and keyboard-based approaches. In contrast, eye tracking and gesture modalities seem less important. An additional interview on interface design revealed that most translators would also see value in automatically receiving additional resources when a high cognitive load is detected during PE.
Using Time and Space Efficiently in Driverless Cars: Findings of a Co-Design Study.
Gunnar Stevens, Paul Bossauer, Stephanie Vonholdt, Christina Pakusch
NaviBike: Comparing Unimodal Navigation Cues for Child Cyclists.
Andrii Matviienko, Swamy Ananthanarayan, Abdallah El Ali, Wilko Heuten, and Susanne Boll
Navigation systems for cyclists are commonly screen-based devices mounted on the handlebar which show map information. Typically, adult cyclists have to explicitly look down for directions. This can be distracting and challenging for children, given their developmental differences in motor and perceptual-motor abilities compared with adults. To address this issue, we designed different unimodal cues and explored their suitability for child cyclists through two experiments. In the first experiment, we developed an indoor bicycle simulator and compared auditory, light, and vibrotactile navigation cues. In the second experiment, we investigated these navigation cues in-situ in an outdoor practice test track using a mid-size tricycle. To simulate road distractions, children were given an additional auditory task in both experiments. We found that auditory navigational cues were the most understandable and the least prone to navigation errors. However, light and vibrotactile cues might be useful for educating younger child cyclists.
Social Event: 19:30 (OLs Brauhaus: Stau 25-27, 26122 Oldenburg)
DAY 2 (12.04.)
Session 4: 09:00 – 10:30
Session chair: Stefan Schneegaß
./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects.
Martin Schmitz, Martin Stitz, Florian Müller, Markus Funk, Max Mühlhäuser
Hover, touch, and force are promising input modalities that get increasingly integrated into screens and everyday objects. However, these interactions are often limited to flat surfaces and the integration of suitable sensors is time-consuming and costly. To alleviate these limitations, we contribute Trilaterate: A fabrication pipeline to 3D print custom objects that detect the 3D position of a finger hovering, touching, or forcing them by combining multiple capacitance measurements via capacitive trilateration. Trilaterate places and routes actively-shielded sensors inside the object and operates on consumer-level 3D printers. We present technical evaluations and example applications that validate and demonstrate the wide applicability of Trilaterate.
Like a Second Skin: Understanding How Epidermal Devices Affect Human Tactile Perception.
Aditya Shekhar Nittala, Klaus Kruttwig, Jaeyeon Lee, Roland Bennewitz, Eduard Arzt, Jürgen Steimle
The emerging class of epidermal devices opens up new opportunities for skin-based sensing, computing, and interaction. Future design of these devices requires an understanding of how skin-worn devices affect the natural tactile perception. In this study, we approach this research challenge by proposing a novel classification system for epidermal devices based on flexural rigidity and by testing advanced adhesive materials, including tattoo paper and thin films of poly(dimethylsiloxane) (PDMS). We report on the results of three psychophysical experiments that investigated the effect of epidermal devices of different rigidity on passive and active tactile perception. We analyzed human tactile sensitivity thresholds, two-point discrimination thresholds, and roughness discrimination abilities on three different body locations (fingertip, hand, forearm). Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Surprisingly, thin epidermal devices based on PDMS with a hundred times the rigidity of commonly used tattoo paper resulted in comparable levels of tactile acuity. The material offers the benefit of increased robustness against wear and the option to re-use the device. Based on our findings, we derive design recommendations for epidermal devices that combine tactile perception with device robustness.
Grasping Microgestures: Eliciting Single-hand Microgestures for Handheld Objects.
Adwait Sharma, Joan Sol Roo, and Jürgen Steimle
Single-hand microgestures have been recognized for their potential to support direct and subtle interactions. While pioneering work has investigated sensing techniques and presented first sets of intuitive gestures, we still lack a systematic understanding of the complex relationship between microgestures and various types of grasps. This paper presents results from a user elicitation study of microgestures that are performed while the user is holding an object. We present an analysis of over 2,400 microgestures performed by 20 participants, using six different types of grasp and a total of 12 representative handheld objects of varied geometries and sizes. We expand the existing elicitation method by proposing statistical clustering of the elicited gestures. We contribute detailed results on how grasps and object geometries affect single-hand microgestures, preferred locations, and fingers used. We also present consolidated gesture sets for different grasps and object sizes. From our findings, we derive recommendations for the design of microgestures compatible with a large variety of handheld objects.
ScaleDial: A Novel Tangible Device for Teaching Musical Scales & Triads [DEMO].
Konstantin Klamka, Jannik Wojnar, Raimund Dachselt
The teaching of harmonic foundations in music is a common learning objective in many education systems. However, music theory is often considered a non-interactive subject that requires great effort to understand. With this work, we contribute a novel tangible device, called ScaleDial, that makes use of the relations between geometry and music theory to provide interactive, graspable, and playful learning experiences. To this end, we introduce an innovative tangible cylinder and demonstrate how harmonic relationships can be explored through a physical set of digital manipulatives that can be arranged and stacked on top of an interactive chromatic circle. Based on the tangible interaction and further rich visual and auditory output capabilities, ScaleDial enables a better understanding of scales, pitch constellations, triads, and intervals. Further, we describe the technical realization of our advanced prototype and show how we fabricated the magnetic, capacitive, and mechanical sensing.
Coffee Break (10:30 – 11:00)
Session 5: 11:00 – 12:30
Session chair: Wilko Heuten
Guerilla Warfare and the Use of New (and Some Old) Technology: Lessons from FARC-EP’s Armed Struggle in Colombia.
Débora de Castro Leal, Max Krueger, Kaoru Misaki, David Randall, Volker Wulf
Studying armed political struggles from a CSCW perspective can throw the complex interactions between culture, technology, materiality, and political conflict into sharp relief. Such studies highlight interrelations that otherwise remain under-remarked upon, despite their severe consequences. The present paper provides an account of the armed struggle between one of the Colombian guerrilla groups, FARC-EP, and the Colombian army. We document how radio-based communication became a crucial but ambiguous infrastructure of war. The sudden introduction of localization technologies by the Colombian army presented a lethal threat to the guerrilla group. Our interviewees report a severe learning process to diminish this new risk, relying on a combination of informed beliefs and significant technical understanding. We end with a discussion of the role of HCI in considerations of ICT use in armed conflicts and introduce the concept of counter-appropriation as a process of adapting one’s practices to others’ appropriation of technology in conflict.
Understanding and Mitigating Worker Biases in the Crowdsourced Collection of Subjective Judgments.
Christoph Hube, Besnik Fetahu, Ujwal Gadiraju
Crowdsourced data acquired from tasks that comprise a subjective component (e.g. opinion detection, sentiment analysis) is potentially affected by the inherent bias of crowd workers who contribute to the tasks. This can lead to biased and noisy ground-truth data, propagating the undesirable bias and noise when used in turn to train machine learning models or evaluate systems. In this work, we aim to understand the influence of workers’ own opinions on their performance in the subjective task of bias detection. We analyze the influence of workers’ opinions on their annotations corresponding to different topics. Our findings reveal that workers with strong opinions tend to produce biased annotations. We show that such bias can be mitigated to improve the overall quality of the data collected. Even experienced crowd workers fail to distance themselves from their own opinions to provide unbiased annotations.
Vistribute: Distributing Interactive Visualizations in Dynamic Multi-Device Setups.
Tom Horak, Andreas Mathisen, Clemens N. Klokmose, Raimund Dachselt, Niklas Elmqvist
We present Vistribute, a framework for the automatic distribution of visualizations and UI components across multiple heterogeneous devices. Our framework consists of three parts: (i) a design space considering properties and relationships of interactive visualizations, devices, and user preferences in multi-display environments; (ii) specific heuristics incorporating these dimensions for guiding the distribution for a given interface and device ensemble; and (iii) a web-based implementation instantiating these heuristics to automatically generate a distribution as well as providing interaction mechanisms for user-defined adaptations. In contrast to existing UI distribution systems, we are able to infer all required information by analyzing the visualizations and devices without relying on additional input provided by users or programmers. In a qualitative study, we let experts create their own distributions and rate both other manual distributions and our automatic ones. We found that all distributions provided comparable quality, hence validating our framework.
Let Me Explain: Impact of Personal and Impersonal Explanations on Trust in Recommender Systems.
Johannes Kunkel, Tim Donkers, Lisa Michael, Catalin-Mihai Barbu, Jürgen Ziegler
Trust in a Recommender System (RS) is crucial for its overall success. However, it remains underexplored whether users trust personal recommendation sources (i.e. other humans) more than impersonal sources (i.e. conventional RS), and, if they do, whether the perceived quality of the explanations provided accounts for the difference. We conducted an empirical study in which we compared these two sources of recommendations and explanations. Human advisors were asked to explain movies they recommended in short texts, while the RS created explanations based on item similarity. Our experiment comprised two rounds of recommending. Over both rounds, the quality of explanations provided by users was rated higher than the quality of the system’s explanations. Moreover, explanation quality significantly influenced perceived recommendation quality as well as trust in the recommendation source. Consequently, we suggest that RS should provide richer explanations in order to increase their perceived recommendation quality and trustworthiness.