Haichuan Wang

I am a second-year CS PhD student at Harvard University, where I am fortunate to be advised by Professor Milind Tambe. My research lies at the intersection of generative AI, game theory, and reinforcement learning. Specifically, my interests are:

  1. Applying tools from machine intelligence—namely, probabilistic models—to understand human behavior.
  2. Applying tools traditionally used to study human intelligence—namely, game theory—to understand machine behavior.

In slightly fancier terms, the unifying theme of my research is to deconstruct the binary between humans and machines. I first became interested in this theme while studying media aesthetics in college, and I am glad it has eventually found its way back to me.


(1) Modeling Human Behavior with Generative Models

Traditional game theory assumes that agents act rationally or follow bounded rationality models. However, such assumptions greatly simplify the high-dimensional and complex nature of human decision-making, leading to limited predictive power in practice. In many settings, human strategies are naturally represented as distributions. With the advent of modern probabilistic models such as diffusion models, we can instead learn fully data-driven representations of human behavior. A natural follow-up question is: what can we build on top of this new paradigm of behavioral modeling?

Exemplary Projects

  • Learning human behavior under data scarcity and observational bias (IAAI 2025)
    Generative AI Against Poaching: Latent Composite Flow Matching for Wildlife Conservation

  • Playing against learned adversary strategies via robust optimization (UAI 2025)
    Robust Optimization with Diffusion Models for Green Security

  • Transferring strategic knowledge across similar players (NeurIPS 2025)
    Composite Flow Matching for RL with Shifted-Dynamics Data


(2) Understanding AI Systems through Game Theory

Post-trained AI systems are goal-oriented, and in this sense can be understood as rational decision-makers. This perspective unlocks tools from game theory for understanding complex AI systems. I hope this perspective can offer new insights into questions that are central to the design and alignment of next-generation AI systems.

Exemplary Project

  • Alignment (In submission)
    The reward model in alignment is essentially an incentive mechanism. What is the optimal such mechanism?

Previously, I graduated summa cum laude from the University of Chicago with a triple major in Computer Science (Honors), Economics (Honors), and Mathematics. During my undergraduate years, I was fortunate to work with Professor Haifeng Xu on game theory, and with Professor Anthony Zhang and Dr. Gordon Liao on decentralized finance.

Last updated: Jan 2026

CV

You can access my CV via the following link: CV