Graduate Student Positions in Reinforcement Learning at PolyMtl and Mila (Deadline: Dec 1, 2024)

I am excited to announce 1-2 graduate student positions for the 2025-2026 academic year in the Department of Computer and Software Engineering at Polytechnique Montréal and at Mila – Quebec Artificial Intelligence Institute.

These positions will focus on the algorithmic and theoretical aspects of Reinforcement Learning. Admitted students will join my Adaptive Agents (Adage) Lab and become members of both Mila and PolyMtl.

To start your application, please fill out Mila's Supervision Request form. The deadline is December 1, 2024.

Why Join Us?

Fundamental Research: My research goal is to understand the computational and statistical mechanisms required to design efficient RL agents that interact with their environment and adaptively improve their long-term performance.
For more information, explore:

Adage (Adaptive Agents Lab): The Adage Lab is a diverse, multicultural, and welcoming team of talented and hard-working individuals committed to advancing the foundations of AI. As an example of our diversity, the ~25 students who have been members of my team come from 13 different countries of origin. For more information about my teaching philosophy and views on EDI, see:

Top-Tier Research Environment: Be part of Mila, a global leader in AI research, with over 150 faculty members and 1,100 student-researchers. Mila offers a bustling environment with weekly talks, reading groups, and events.

Montreal: Study in one of Canada's most vibrant cities, known for its multicultural community, year-round festivals, and dynamic art scene. Montreal also offers affordable living and a welcoming environment for students.

Next Steps

I look forward to welcoming motivated students passionate about advancing the foundations of reinforcement learning!

P.S.: You should know that there are many great faculty members affiliated with Mila, working on various aspects of AI/ML. Make sure to check them out! For a list of professors worldwide with active RL research, with a focus on North America, take a look at this spreadsheet (maintained by Philip Thomas).

Value Function in Frequency Domain and the Characteristic Value Iteration Algorithm

Paper: Short version; Extended version
Poster

Informal Summary

Brief: We can develop a distributional RL framework through the use of characteristic functions. This is an alternative to representing the uncertainty of returns with probability distribution functions.

Longer: A conventional RL agent maintains the expected value of returns, i.e., the long-term reward. But sometimes we would like to know more than the expected value of returns, for example, when we want to deal with risk.
How can we represent something more than the expected return? One approach is to represent the probability distribution function of the return. This has already been explored.
I show that one can represent the uncertainty of the return using its characteristic function, which is the Fourier transform of its probability distribution. I call this the Characteristic Value Function (CVF). The CVF satisfies a Bellman equation, which is multiplicative instead of additive. Its Bellman operator is a contraction, so the CVF can be computed by an iterative method similar to Value Iteration, which I call Characteristic Value Iteration; a short derivation of the multiplicative structure follows below.
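To see where the multiplicative structure comes from, here is a short derivation in generic notation; the symbols below (the return G^pi, the CVF V-tilde, the reward R, the next state X') are my own choices and may not match the paper's.

```latex
% Return from state x under policy \pi satisfies G^\pi(x) = R + \gamma G^\pi(X'),
% where R is the immediate reward and X' is the next state.
% Define the Characteristic Value Function (CVF) as the characteristic
% function of the return: \tilde{V}^\pi(\omega; x) = E[exp(i\omega G^\pi(x))].
\begin{align*}
\tilde{V}^\pi(\omega; x)
  &= \mathbb{E}\!\left[ e^{i\omega G^\pi(x)} \right]
   = \mathbb{E}\!\left[ e^{i\omega \left( R + \gamma G^\pi(X') \right)} \right] \\
  &= \mathbb{E}\!\left[ e^{i\omega R}\,
       \mathbb{E}\!\left[ e^{i(\gamma\omega) G^\pi(X')} \,\middle|\, R, X' \right] \right]
   = \mathbb{E}\!\left[ e^{i\omega R}\, \tilde{V}^\pi(\gamma\omega; X') \right].
\end{align*}
```

The reward enters as a multiplicative phase factor rather than an additive term, and the recursion evaluates the next-state CVF at the rescaled frequency gamma*omega; the step that pulls out the conditional expectation uses the Markov property.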

Abstract

This paper considers the problem of estimating the distribution of returns in reinforcement learning (i.e., distributional RL problem). It presents a new representational framework to maintain the uncertainty of returns and provides mathematical tools to compute it.
We show that instead of representing the probability distribution function of returns, one can represent their characteristic function, the Fourier transform of their distribution. We call the new representation the Characteristic Value Function (CVF), which can be interpreted as the frequency-domain representation of the probability distribution of returns.
We show that the CVF satisfies a Bellman-like equation, and its corresponding Bellman operator is a contraction with respect to certain metrics. The contraction property allows us to devise an iterative procedure to compute the CVF, which we call Characteristic Value Iteration (CVI). We analyze CVI and its approximate variant and show how approximation errors affect the quality of the computed CVF.
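To make CVI concrete, here is a minimal tabular sketch for a fixed policy. This is not the paper's reference implementation: the toy MDP, the frequency grid, the interpolation used to evaluate the CVF at gamma*omega, and all names are my own illustrative choices.

```python
import numpy as np

# Sketch of Characteristic Value Iteration (CVI) for a fixed policy on a
# finite MDP with deterministic state rewards. The frequency grid and the
# interpolation below are illustrative; the paper's approximate CVI and its
# error analysis are more careful about these choices.

n_states = 3
gamma = 0.9
P = np.array([[0.8, 0.2, 0.0],      # P[x, x'] = transition probabilities under the policy
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.7]])
r = np.array([1.0, 0.0, -1.0])      # reward r(x) received in state x

omegas = np.linspace(-5.0, 5.0, 201)  # frequency grid on which the CVF is tracked

# CVF estimate: V[k, x] ~= E[exp(i * omegas[k] * return-from-x)].
# Initialize with the characteristic function of the zero return (identically 1).
V = np.ones((len(omegas), n_states), dtype=complex)

def interp_cvf(V, w):
    """Evaluate the tabulated CVF at frequencies w by linear interpolation."""
    out = np.empty((len(w), V.shape[1]), dtype=complex)
    for x in range(V.shape[1]):
        out[:, x] = np.interp(w, omegas, V[:, x].real) \
                    + 1j * np.interp(w, omegas, V[:, x].imag)
    return out

for _ in range(200):
    # Multiplicative Bellman update:
    # V_{k+1}(omega; x) = exp(i*omega*r(x)) * sum_{x'} P(x'|x) V_k(gamma*omega; x')
    next_state_cvf = interp_cvf(V, gamma * omegas) @ P.T
    V = np.exp(1j * np.outer(omegas, r)) * next_state_cvf

# Sanity check: the derivative of a characteristic function at omega = 0 equals
# i * E[return], so a central difference at 0 should approximately recover the
# expected return computed by solving the ordinary Bellman equation.
k0 = len(omegas) // 2                      # omegas[k0] == 0
step = omegas[1] - omegas[0]
print((V[k0 + 1] - V[k0 - 1]).imag / (2 * step))
print(np.linalg.solve(np.eye(n_states) - gamma * P, r))
```

Because gamma * omega always stays inside the grid, the interpolation never extrapolates; the error it does introduce is exactly the kind of approximation that an analysis of approximate CVI has to account for, and the contraction property is what guarantees the fixed-point iteration converges.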