AI Research & Policy
I am a first-year PhD student in Electrical Engineering at Stanford University, where I am fortunate to be advised by James Zou and Mykel Kochenderfer, and am supported by the Knight-Hennessy Scholarship. I am concurrently a Research Scientist at Google DeepMind on the Agentic AI Privacy and Security team, where I work on improving Gemini's robustness to prompt injection attacks and study data memorization in large language models.
I am excited by the transformative potential of AI and care about work that helps us (i) actualize AI's benefits to humanity while (ii) mitigating the risks that could prevent us from realizing those benefits. To that end, my research interests span capabilities research (currently, self-evolving and autonomously organizing multi-agent systems), cognitive security (the risk of AI compromising and adversarially shaping human decisions), AI security (adversarial robustness), and AI safety. The same tension motivates my AI policy interests: I previously spent time at the Ada Lovelace Institute working on AI governance research, and I am currently interested in the risks that AI persuasion poses to democratic security. I am fortunate to have advisors who encourage this mix of interests.
I completed my MS in Computer Science at Stanford University, where I was fortunate to work with Stephen Boyd, Mykel Kochenderfer, and Emmanuel Candès during an internship at BlackRock AI Labs. As a Marshall Scholar, I received my MSc in Machine Learning from University College London, advised by Brooks Paige, and my MPhil in Public Policy from the University of Cambridge, advised by Tanya Filer. I completed my BS in Symbolic Systems at Stanford.
I also enjoy teaching. I had the opportunity to TA my favorite course, EE364A (Convex Optimization), under Stephen Boyd, and was honored to receive the Stanford Centennial Teaching Assistant Award for my contributions.