
Safety is a frequently discussed and increasingly important topic in Human-Computer Interaction (HCI), and one historically well-represented at CSCW. However, CSCW is a diverse community that engages multiple fields and perspectives; as a result, the community employs multiple, sometimes contradictory, definitions of technologically mediated “safety,” both at the conference and in the field overall. While a given concept of safety may have clear meaning within a computing subfield, the overall importance of safety to HCI, and the need to communicate clearly across subfields and to other disciplines, underscores the importance of exploring how differing approaches to defining and operationalizing safety compare and might build on or communicate with one another. Moreover, by comparing, contrasting, and synthesizing CSCW’s multiple approaches to safety, we can better understand and account for safety as a whole. A core goal of this workshop is to make progress towards a collective understanding of safety. Without clarity on such an important concept, efforts to design interventions that mitigate harms for impacted users and communities will be wasted, and we will squander the potential for effective academic collaboration. In this workshop, we take a first step towards initiating a conversation between different HCI perspectives on safety and mapping out the multitude of goals, aspirations, and working definitions of what academic research on and about safety looks like.

Multiple approaches to safety exist at CSCW, focused on various physical, psychological, and sociotechnical dimensions of safety as it intersects with computing technologies, and on approaches to enacting it. For example, HCI work on digital safety examines online behaviors that may result in a wide range of interpersonal harms, including forms of tech-facilitated violence (e.g., cyberbullying, gender-based violence, racism and misogynoir), exposure to dangerous content (e.g., harmful social media challenges, content encouraging eating disorders), and how the design of platforms and the logics baked into them create conditions of unsafety that disproportionately affect historically marginalized communities, including transgender and disabled communities. This literature also explores ways to foster “safer” digital experiences through community-centered practices and design interventions grounded in the recognition of interdependence and care. Some of this work aligns with how tech companies’ “Trust & Safety” organizations operate, covering topics such as child safety, misinformation, illegal content, violence and extremism, and more. In parallel, there is ample work in HCI on topics related to safety that may not self-describe as safety research, such as mental health crises and self-injurious behaviors.

Another key thread of HCI research focuses on the perceptual side of safety, such as the sensemaking and folk theorization that actually drive user decision making. This emerging body of literature examines safety from an affective point of view, offering important insights into the user side of safety questions. Nevertheless, this approach often fails to engage with the technical realities of how digital technologies are developed, which are shaped by a complex interplay between users, platforms, and governments/regulatory bodies. For example, while folk theorization and community-based approaches provide crucial context on user experiences and needs, they sometimes capture broad user anger, irritation, and frustration rather than contextual experiences of harm when interacting with novel and/or unsafe-seeming systems. This may look like users and non-users calling for platforms to “do better” and implement safer experiences, demands that may be out of touch with the practical limits of how platforms function, or with how optimizing for one safety concern may create other problems when disciplining technologies to developer-prescribed safety and content moderation policies.

Finally, an entirely different set of definitions and approaches exists within HCI related to machine learning (ML), responsible AI (artificial intelligence) development, and human-AI interaction. ML systems are increasingly embedded in our digital ecosystems (e.g., social media platforms, recommendation systems, apps, and stand-alone text and image generation tools), necessitating novel research on what safety means when these systems mediate interpersonal and social interactions. Against the backdrop of regulatory calls for “safe AI,” HCI research in this space focuses on identifying and reducing the potentially harmful social impacts of AI technologies on direct users and broader communities, including outlining unsafe design patterns and showing how differently situated users disproportionately experience harm and safety in AI technologies.

Much HCI work in this space also focuses on studying developer and practitioner culture and on improving practices for responsibly developing “safe” AI systems. This includes research that draws on the longstanding field of safety engineering, exploring how it can systematize the identification and mitigation of social and ethical risks. While this body of work enriches our understanding of sociotechnical approaches to establishing a safety culture and examining harms from AI systems, research on human-AI interaction would benefit from deeper engagement with equity-oriented HCI approaches, including engaging historically marginalized communities to understand how technology mediates experiences of safety. Such engagement is critical in light of the growing conversation on AI safety, as it emphasizes and illustrates that unsafe experiences and events emerge from complex interactions between a technology and the surrounding social structures shaped by axes of inequality.

Additionally, HCI contains a wealth of knowledge in areas that relate to and affect digital safety. For example, the human-robot interaction (HRI) subfield and work on assistive and medical technology provide insight into physical safety issues, while ergonomics and human factors research provides insight into organizational and procedural safety. While the primary focus of this workshop is digital safety, we welcome participation from other areas of expertise within CSCW.