Examining how technology design shapes who benefits, who is harmed, and who gets to decide.
Technologies that mediate how people see information, represent themselves, and participate in social and civic life are never neutral. Design choices about what data to collect, how to present it, who controls access, and whose needs drive development all carry consequences — for individuals, and for the groups and communities they belong to. Across several projects, my collaborators and I have examined these consequences empirically and developed frameworks to help researchers and designers engage with them more deliberately.
Image search results for occupational terms systematically underrepresent women and reproduce gender stereotypes — returning predominantly images of men for most professional roles, and surfacing Barbie as the first female result for "CEO." Working with Matthew Kay and Cynthia Matuszek, we documented this pattern at scale, contributing an early empirical foundation for what has since become a large body of work on algorithmic fairness and representation. The paper has been widely cited in both HCI and machine learning fairness research and received extensive press coverage.
Smart home cameras are marketed primarily for security and peace of mind, but how people actually use them is more complicated. In a study of everyday camera use, Neilly Tan and collaborators found that the same devices used to monitor pets and check on packages were also used to passively observe neighbors — often without those neighbors' knowledge or consent. A companion speculative design study explored how people imagine and negotiate the ethical boundaries of camera use in the home. Together, this work surfaces how consumer technologies designed around individual use cases can produce collective harms, and raises questions about what design obligations follow from that.
Wearable tracking technologies have spread rapidly in collegiate athletics, where they are typically deployed by institutions — coaches, athletic departments, universities — rather than chosen by athletes. In a study of this practice, Sam Kolovson and collaborators found that data collected from athletes' bodies primarily serves institutional interests, with athletes having little control over how it is used or who sees it. A follow-up speculative design study examined how athletes, coaches, and administrators imagine preferable futures for sports tracking, revealing divergent visions that reflect underlying power asymmetries. This work contributes to broader conversations about consent and data governance when tracking is a condition of participation. It also connects with my research on personal informatics.
Many digital platforms employ design mechanisms — autoplay, infinite scroll, frictionless defaults — that maximize engagement at the expense of users' ability to make intentional choices about their time and attention. Working with Kai Lukoff, Alexis Hiniker, and collaborators, we examined how YouTube's design shapes users' sense of agency over their own viewing, finding that common platform features undermine the sense of agency users say they want. A follow-up project developed and evaluated SwitchTube, a proof-of-concept system introducing "adaptable commitment interfaces" that let users set their own constraints in advance — shifting the locus of control back toward the user.
The HCI community has long worked to help people collect and understand their health data — but the same data that supports self-management can also be weaponized. Following the 2022 overturn of Roe v. Wade, Georgia Kenderova and collaborators examined how people navigated the newly heightened risks of period-tracking and other health technology use. Most participants did not significantly change their tracking practices despite privacy concerns — a pattern explained by a complex risk calculus in which abstract legal threats competed with concrete benefits, switching costs, and the paradoxical role of tracking as both risk and protection. This work argues that designing for sensitive health data requires attending not just to privacy features, but to the legal and political contexts in which those features operate.
Questions of power and agency extend beyond users to the workers who build and sustain technology systems. In the aftermath of the 2022–23 mass tech layoffs, Sam So and collaborators conducted a longitudinal study with 29 laid-off tech workers, finding widespread alienation and a lack of fulfillment — and a pattern of conflicted attachment to tech work that the authors characterize as "cruel optimism": an affective bind in which workers remained oriented toward an industry that no longer offered what had drawn them to it, yet felt unable to imagine or access alternatives. This work connects individual workers' experiences to broader questions about labor, resistance, and the sociotechnical imaginaries that shape how tech work is understood and valued.
HCI researchers increasingly work with communities whose members face structural disadvantage, and this creates recurring methodological and ethical tensions: between inclusion and burden, between participation and extraction, between designing for a community and designing with one. Calvin Liang, Julie Kientz, and I synthesized findings across a body of such work to articulate four persistent tensions — and to argue that embracing rather than resolving these tensions is often the more honest and productive stance. This framework has been taken up by researchers working across a range of marginalized populations and design contexts.