Amir-massoud Farahmand

 

Machine Learning Researcher
Mitsubishi Electric Research Laboratories (MERL)


Background

PhD in Computer Science, University of Alberta (CS)
(Working with Csaba Szepesvári and Martin Jägersand), 2011

NSERC Postdoctoral Fellow, McGill University (SCS)
(Working with Doina Precup), 2011-2014

NSERC Postdoctoral Fellow, Carnegie Mellon University (RI)
(Working with J. Andrew Bagnell), 2014

Research Goal

Very Short – Two perspectives:

  1. Use data not only to predict, but also to control [ML Perspective].

  2. Design adaptive situated agents [AI Perspective].


Longer:

In the 21st century, we live in a world where data is abundant. We would like to take advantage of this opportunity to make more accurate, data-driven decisions in many areas of life, such as industry, healthcare, business, and government. Even though many machine learning and data mining researchers have developed tools to benefit from “big data”, their methods have so far mostly addressed the task of prediction.

My goal, however, is to use data to control, that is, to take actions in an uncertain world with complex dynamics in order to achieve a long-term goal, such as maximizing the relief of a patient with a chronic disease, managing natural resources sustainably, or increasing the comfort of a building’s occupants.

Admittedly, we are not there yet. Theoretical foundations must be laid and technologies must be developed. But I believe that data-driven decision making defines a new era in human civilization, and my research moves us toward that era. For more details about data-driven control and decision making, and about my contributions, refer to my Research Statement or take a look at my Publications. If you have any questions, please feel free to contact me.

News

  1. Learning-based Modular Indirect Adaptive Control for a Class of Nonlinear Systems has been accepted at the American Control Conference (ACC) 2016. Joint work with Mouhacine Benosman and Meng Xia. The camera-ready version is coming soon; meanwhile, take a look at the arXiv version. Summary: A stable robust controller that adapts using GP-UCB.

  2. Truncated Approximate Dynamic Programming with Task-Dependent Terminal Value has been accepted and presented at AAAI 2016 (PDF). Joint work with Daniel Nikovski, Yuji Igarashi, and Hiroki Konaka. Summary: Plan over a much shorter horizon by learning the terminal value from solving similar problems.

  3. Drew and I had a paper on Learning Positive Functions in a Hilbert Space at the NIPS workshop on Optimization for Machine Learning, 2015 (PDF). If you wish, take a look at the poster. Summary: Ensure the estimate is positive by formulating it as a Sum-of-Squares in an RKHS.

  4. Classification-based Approximate Policy Iteration was published in the IEEE Transactions on Automatic Control, November 2015 (preprint; IEEE version). An extended version, which includes experiments and more discussion, is also available. Joint work with Doina, André, and Mohammad.

  5. My reviewing service was recognized with the International Conference on Machine Learning (ICML) 2015 Reviewer Award (52 out of 786). Thank you!

  6. I joined the Data Analytics group of Mitsubishi Electric Research Laboratories (MERL) as a member of research staff (December 2014).

  7. Approximate MaxEnt Inverse Optimal Control and its Application for Mental Simulation of Human Interactions has been accepted at the AAAI Conference on Artificial Intelligence (AAAI), 2015. Joint work with De-An, Kris, and Drew. An extended version with proofs is here. A shorter extended-abstract version was presented at Reinforcement Learning and Decision Making (RLDM) 2015, in case you prefer a short summary.

Persian blog: ضدخاطرات (Anti-memoirs)

Academic blog: ThesiLog (inactive!)

Academic and Non-academic Tweets

Research Interests

  1. Machine Learning and Statistics: statistical learning theory, nonparametric algorithms, regularization, manifold learning, non-i.i.d. processes, online learning

  2. Reinforcement Learning, Sequential Decision Making, and Control: high-dimensional problems, regularized nonparametric algorithms, inverse optimal control

  3. Robotics: uncalibrated visual servoing, learning from demonstration, behaviour-based architecture for robot control

  4. Industrial Applications: hybrid vehicle energy management

  5. Evolutionary Computation: cooperative co-evolution, interaction of evolution and learning

  6. Large-scale Optimization