Aaron J. Li
Berkeley, CA, 94720
I am a first-year CS PhD student at UC Berkeley advised by Prof. Bin Yu and Prof. Ion Stoica. I'm affiliated with the Sky Computing Lab and BAIR. I completed my Master's degree in Computational Science and Engineering at Harvard University, where I was fortunate to be advised by Prof. Hima Lakkaraju. Prior to that, I earned my Bachelor's degree from UC Berkeley, double majoring in Computer Science and Psychology.
My research centers on LLM evaluation, alignment, and safety, with an emphasis on direct practical impact and domain-specific applications. I'm also interested in foundational questions about how LLMs work, including mechanistic interpretability and reasoning.
Here are several overarching goals I hope my research will achieve:
(1) Propose novel frameworks for evaluating LLM capabilities that move beyond traditional benchmark-style tasks. These frameworks should better reflect real-world use cases, have tangible practical impact, and provide insights that inform future efforts in model development and alignment.
(2) Develop LLM-powered agents and systems that are both robust and adaptable, capable of recognizing and responding to domain shifts.
(3) Systematically characterize and unify diverse human-defined LLM behaviors across different levels of granularity. Many of these behaviors overlap or are closely related, suggesting the possibility of a tree-like hierarchy. Ideally, these behaviors should be studied within a hierarchical framework that enables unified observation, evaluation, and interpretation, rather than being treated as disconnected properties.
I'm always open to collaborations and happy to discuss all kinds of research ideas. The best way to reach me is by email.