Research

Research Goal

Understanding the computational and statistical mechanisms required to design efficient AI agents that interact with their environment and adaptively improve their long-term performance.

It is useful to expand on what I expect from this agent; the following serves as the guiding map for much of my research:

Desiderata: This social agent should be able to control its stream of experience by learning how the outside and inside worlds work, while focusing on the aspects that are most relevant to its decision making. It should sample-efficiently solve problems that (1) have large state and action spaces, (2) require decisions to be made at varying temporal granularities, and (3) require risk-awareness.

My research team, the Adaptive Agents (Adage) Lab, often approaches this goal through the lens of reinforcement learning (RL). We use a diverse set of research methodologies, ranging from theoretical and mathematical analysis to empirical studies to tackling novel and challenging applications.

Here I provide a brief summary of some of my research projects, divided into two broad categories, Theoretical Push and Application Pull, with pointers to some relevant publications. This is not a comprehensive list. See my research statement, Rethinking Reinforcement Learning (2024), for a more detailed explanation, and Publications for an almost complete list of papers.

Theoretical Push

Application Pull