
Katherine Chen
My work began in cinema, grounded in the aesthetics of auteur theory and the French New Wave. I was driven by the idea of film as a personal essay and a witness to truth, a belief I practiced while directing documentaries and narrative films that went on to receive international recognition. This path led me to a new question: what happens when the world on screen becomes a sensing, responsive, and computationally malleable environment? Today, I am a PhD researcher at Arizona State University, working at the intersection of computer science, human-computer interaction (HCI), and generative media art.
My core research investigates a central tension: as real-time sensing technologies (such as Kinect, MediaPipe, and Arduino) merge with generative synthesis, algorithmic systems often obscure, rather than support, human authorship. My work addresses this tension directly. I build human-centered, real-time interactive systems with a focus on embodied interaction, ambient sensing, and virtual production. The goal is to design frameworks in which these complex systems remain legible to an audience and intentionally shapeable by an artist.
I use a Research-Through-Design (RtD) methodology, where art-making itself is the form of inquiry. Through a sequence of interactive installations and audiovisual performances, I study how different sensors reframe gesture semantics, how ambient data can act as a "legible co-author," and how biofeedback can become a first-person medium for expression. I build these systems using a diverse technical stack, including TouchDesigner, Unity, Python, C++, GLSL, and hardware such as Arduino and Raspberry Pi.
