Seven papers by CSE researchers presented at CHI 2023

Thirty University of Michigan researchers authored or co-authored papers spanning surveillance, virtual reality, algorithmic stigma, assistive technology, and sensing systems.

Researchers at U-M CSE have had seven papers accepted for presentation at the 2023 Conference on Human Factors in Computing Systems (CHI), the top conference in the field of human-computer interaction. Thirty students and faculty contributed to the projects, which span several key application areas, including assistive technology, interactive sensing, privacy and surveillance, and education.

Learn more about the papers below:

Conceptualizing Algorithmic Stigmatization

Nazanin Andalibi, Cassidy Pyle, Kristen Barta, Lu Xian, Abigail Z Jacobs, Mark S. Ackerman

Abstract: Algorithmic systems have infiltrated many aspects of our society, from the mundane to the high-stakes, and can lead to algorithmic harms, notably representational and allocative harms. In this paper, we consider what stigma theory illuminates about mechanisms leading to algorithmic harms in algorithmic assemblages. We apply the four stigma elements (i.e., labeling, stereotyping, separation, status loss/discrimination) outlined in sociological stigma theories to algorithmic assemblages in two contexts: (1) “risk prediction” algorithms in higher education, and (2) suicidal expression and ideation detection on social media. We contribute the novel theoretical conceptualization of algorithmic stigmatization as a sociotechnical mechanism that leads to a unique kind of algorithmic harm: algorithmic stigma. Theorizing algorithmic stigmatization aids in identifying theoretically driven points of intervention to mitigate and/or repair algorithmic stigma. While prior theorizations reveal how stigma governs socially and spatially, this work illustrates how stigma governs sociotechnically.

Hacking, Switching, Combining: Understanding and Supporting DIY Assistive Technology Design by Blind People

Jaylin Herskovitz, Andi Xu, Rahaf Alharbi, Anhong Guo

Abstract: Existing assistive technologies (AT) often fail to support the unique needs of blind and visually impaired (BVI) people. Thus, BVI people have become domain experts in customizing and ‘hacking’ AT, creatively suiting their needs. We aim to understand this behavior in depth, and how BVI people envision creating future DIY personalized AT. We conducted a multi-part qualitative study with 12 blind participants: an interview on unique uses of AT, a two-week diary study to log use cases, and a scenario-based design session to imagine creating future technologies. We found that participants work to design new AT both implicitly through creative use cases, and explicitly through regular ideation and development. Participants envisioned creating a variety of new technologies, and we summarize expected benefits and concerns of using a DIY technology approach. From our results, we present design considerations for future DIY technology systems to support existing customization and ‘hacking’ behaviors.

Less is Not More: Improving Findability and Actionability of Privacy Controls for Online Behavioral Advertising

Jane Im, Ruiyi Wang, Weikun Lyu, Nick Cook, Hana Habib, Lorrie Faith Cranor, Nikola Banovic, Florian Schaub

Abstract: Tech companies that rely on ads for business argue that users have control over their data via ad privacy settings. However, these ad settings are often hidden. This work aims to inform the design of findable ad controls and study their impact on users’ behavior and sentiment. We iteratively designed ad control interfaces that varied in the setting’s (1) entry point (within ads, at the feed’s top) and (2) level of actionability, with high actionability directly surfacing links to specific advertisement settings, and low actionability pointing to general settings pages (which is reminiscent of companies’ current approach to ad controls). We built a Chrome extension that augments Facebook with our experimental ad control interfaces and conducted a between-subjects online experiment with 110 participants. Results showed that entry points within ads or at the feed’s top, and high actionability interfaces, both increased Facebook ad settings’ findability and discoverability, as well as participants’ perceived usability of them. High actionability also reduced users’ effort in finding ad settings. Participants perceived high and low actionability as equally usable, which shows it is possible to design more actionable ad controls without overwhelming users. We conclude by emphasizing the importance of regulation to provide specific and research-informed requirements to companies on how to design usable ad controls.
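
The study's core manipulation is a 2x2 between-subjects design crossing entry point with actionability. As a hedged illustration of how such conditions might be enumerated and balanced across participants, consider the sketch below; the condition names are paraphrased from the abstract, and the assignment scheme and all identifiers are assumptions, not the authors' protocol.

```python
# Illustrative sketch of a 2x2 between-subjects assignment (entry point x
# actionability). Condition names paraphrase the abstract; the
# balanced-assignment scheme and all identifiers are assumptions.
import itertools
import random

ENTRY_POINTS = ["within_ad", "feed_top"]
ACTIONABILITY = ["high", "low"]
CONDITIONS = list(itertools.product(ENTRY_POINTS, ACTIONABILITY))

def assign_conditions(participant_ids: list[str], seed: int = 0) -> dict:
    """Shuffle participants, then cycle through conditions for rough balance."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}
```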

ReadingQuizMaker: A Human-NLP Collaborative System that Supports Instructors to Design High-Quality Reading Quiz Questions

Xinyi Lu, Simin Fan, Jessica Houghton, Lu Wang, Xu Wang

Best Paper Honorable Mention

Abstract: Although reading assignments are prevalent, methods to encourage students to actively read are limited. We propose ReadingQuizMaker, a system that supports instructors in conveniently designing high-quality questions to help students comprehend readings. ReadingQuizMaker adapts to instructors’ natural workflows of creating questions, while providing NLP-based process-oriented support. ReadingQuizMaker enables instructors to decide when and which NLP models to use, select the input to the models, and edit the outcomes. In an evaluation study, instructors found the resulting questions to be comparable to their previously designed quizzes. Instructors praised ReadingQuizMaker for its ease of use, and considered the NLP suggestions to be satisfying and helpful. We compared ReadingQuizMaker with a control condition where instructors were given automatically generated questions to edit. Instructors showed a strong preference for the human-AI teaming approach provided by ReadingQuizMaker. Our findings suggest the importance of giving users control and showing an immediate preview of AI outcomes when providing AI support.
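
A minimal sketch of the human-AI teaming loop described above, in which the instructor decides when the model runs, selects its input, and edits the output before accepting it. All identifiers here (QuizQuestion, generate_question, author_question) are hypothetical stand-ins, not ReadingQuizMaker's actual API or models.

```python
# Hypothetical instructor-in-the-loop question workflow; not ReadingQuizMaker's
# real interface. The instructor controls when the model runs, what text it
# sees, and the final wording of the question.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class QuizQuestion:
    source_passage: str
    text: str
    model_suggested: bool

def generate_question(passage: str) -> str:
    """Placeholder for an NLP question-generation model."""
    return f"What is the main point of the passage beginning '{passage[:40]}...'?"

def author_question(
    passage: str,
    use_model: bool,
    edit: Optional[Callable[[str], str]] = None,
) -> QuizQuestion:
    """Instructor selects the input, decides whether to invoke the model,
    and edits the suggested question before accepting it."""
    draft = generate_question(passage) if use_model else ""
    final = edit(draft) if edit else draft
    return QuizQuestion(passage, final, model_suggested=use_model)
```

Keeping the model call optional and the edit step explicit mirrors the finding that instructors prefer controlling when AI runs and previewing its output, rather than receiving fully automated questions.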

SAWSense: Using Surface Acoustic Waves for Surface-bound Event Recognition

Yasha Iravantchi, Yi Zhao, Kenrick Kin, Alanson P. Sample

Best Paper

Abstract: Enabling computing systems to understand user interactions with everyday surfaces and objects can drive a wide range of applications. However, existing vibration-based sensors (e.g., accelerometers) lack the sensitivity to detect light touch gestures or the bandwidth to recognize activity containing high-frequency components. Conversely, microphones are highly susceptible to environmental noise, degrading performance. Each time an object impacts a surface, Surface Acoustic Waves (SAWs) are generated that propagate along the air-to-surface boundary. This work repurposes a Voice PickUp Unit (VPU) to capture SAWs on surfaces (including smooth surfaces, odd geometries, and fabrics) over long distances and in noisy environments. Our custom-designed signal acquisition, processing, and machine learning pipeline demonstrates utility in both interactive and activity recognition applications, such as classifying trackpad-style gestures on a desk and recognizing 16 cooking-related activities, all with >97% accuracy. Ultimately, SAWs offer a unique signal that can enable robust recognition of user touch and on-surface events.
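
Since the abstract describes a signal acquisition, processing, and machine learning pipeline, a minimal sketch of that general recipe may help: window the SAW signal, extract spectrogram features, and fit an off-the-shelf classifier. The sampling rate, feature choice, and model below are assumptions for illustration, not the authors' custom pipeline.

```python
# Illustrative sketch only: the paper's pipeline is custom-built around a
# Voice PickUp Unit; this shows the general shape of a spectrogram-feature
# plus classifier approach using standard Python libraries.
import numpy as np
from scipy import signal
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE = 48_000  # assumed sampling rate, not from the paper

def spectrogram_features(window: np.ndarray) -> np.ndarray:
    """Summarize one fixed-length signal window as a flat feature vector."""
    _, _, spec = signal.spectrogram(window, fs=SAMPLE_RATE, nperseg=1024)
    log_spec = np.log(spec + 1e-9)   # compress dynamic range
    return log_spec.mean(axis=1)     # average log power per frequency bin

def train_event_classifier(windows: list[np.ndarray], labels: list[str]):
    """Fit a classifier mapping SAW signal windows to event labels."""
    X = np.stack([spectrogram_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```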

Shifting from Surveillance-as-Safety to Safety-through-Noticing: A Photovoice Study with Eastside Detroit Residents

Alex Jiahong Lu, Shruti Sannon, Cameron Moy, Savana Brewer, Jaye Green, Kisha N Jackson, Daivon Reeder, Camaria Wafer, Mark S. Ackerman, Tawanna R Dillahunt

Abstract: Safety has been used to justify the expansion of today’s large-scale surveillance infrastructures in American cities. Our work offers empirical and theoretical groundings on why and how the safety-surveillance conflation that reproduces harm toward communities of color must be denaturalized. In a photovoice study conducted in collaboration with a Detroit community organization and a university team, we invited eleven Black middle-aged and senior Detroiters to use photography to capture their lived experiences of navigating personal and community safety. Their photographic narratives unveil acts of “everyday noticing” in negotiating and maintaining their intricate and interdependent relations with humans, non-human animals, plants, spaces, and material things, through which a multiplicity of meanings and senses of safety are produced and achieved. Everyday noticing, as simultaneously a survival skill and a more-than-human care act, is situated in residents’ lived materialities, while also serving as a site for critiquing the reductive and exclusionary vision embedded in large-scale surveillance infrastructures. By proposing an epistemological shift from surveillance-as-safety to safety-through-noticing, we invite future HCI work to attend to the fluid and relational forms of safety that emerge from local entanglements and sensibilities.

VRGit: A Version Control System for Collaborative Content Creation in Virtual Reality

Lei Zhang, Ashutosh Agrawal, Steve Oney, Anhong Guo

Abstract: Immersive authoring tools allow users to intuitively create and manipulate 3D scenes while immersed in Virtual Reality (VR). Collaboratively designing these scenes is a creative process that involves numerous edits, explorations of design alternatives, and frequent communication with collaborators. Version Control Systems (VCSs) help users achieve this by keeping track of the version history and creating a shared hub for communication. However, most VCSs are unsuitable for managing the version history of VR content because their underlying line differencing mechanism is designed for text and lacks the semantic information of 3D content, and because the widely adopted commit model is designed for asynchronous collaboration rather than real-time awareness and communication in VR. We introduce VRGit, a new collaborative VCS that visualizes version history as a directed graph composed of 3D miniatures, and enables users to easily navigate versions, create branches, as well as preview and reuse versions directly in VR. Beyond individual uses, VRGit also facilitates synchronous collaboration in VR by providing awareness of users’ activities and version history through portals and shared history visualizations. In a lab study with 14 participants (seven groups), we demonstrate that VRGit enables users to easily manage version history both individually and collaboratively in VR.
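
To make the directed-graph version model concrete, below is a minimal sketch of a branching version graph over scene snapshots. The class names and dict-based scene representation are illustrative assumptions; VRGit itself additionally provides 3D miniatures, portals, and real-time multi-user awareness.

```python
# Minimal sketch of the commit-graph idea behind VRGit: versions of a 3D
# scene form a directed graph, and committing from a non-leaf node creates
# a branch. Names (SceneVersion, VersionGraph) are illustrative.
from dataclasses import dataclass, field

@dataclass
class SceneVersion:
    version_id: int
    scene: dict                        # placeholder for serialized scene state
    parent: "SceneVersion | None" = None
    children: list["SceneVersion"] = field(default_factory=list)

class VersionGraph:
    def __init__(self, initial_scene: dict):
        self.root = SceneVersion(0, initial_scene)
        self.head = self.root
        self._next_id = 1

    def commit(self, scene: dict) -> SceneVersion:
        """Record a new version as a child of the current head."""
        node = SceneVersion(self._next_id, scene, parent=self.head)
        self.head.children.append(node)
        self.head = node
        self._next_id += 1
        return node

    def checkout(self, node: SceneVersion) -> dict:
        """Navigate to any prior version (e.g., via its miniature in VR);
        committing afterwards starts a new branch."""
        self.head = node
        return node.scene
```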