Quentin Bertrand
Since July 1st, 2024, I have been a junior tenured researcher ("chargé de recherche") at Inria Lyon and Université Jean Monnet, in the Malice team, located in the Laboratoire Hubert Curien. I teach at ENS Lyon. I currently work on optimization and generative models.
From November 2021 to June 2024, I was a post-doctoral researcher at Mila, working with Gauthier Gidel and Simon Lacoste-Julien. Prior to that, I did my Ph.D. at Inria Paris-Saclay (in the Parietal team) under the supervision of Joseph Salmon and Alexandre Gramfort, working on the optimization and statistical aspects of high-dimensional brain signal reconstruction.
In particular:
We just released a friendly blog post on normalizing flows and conditional flow matching techniques!
Our recent work on self-consuming generative models and their biases was covered by The New York Times!
We released a Python package for large-scale optimization of sparse problems (4k downloads/month).
Here is a short resume and my list of publications.
Contact
Email: quentin [dot] bertrand AT inria [dot] fr
News
05-02-2025 Our paper Q-learners Can Provably Collude in the Iterated Prisoner's Dilemma was just accepted to ICML; see you in Vancouver!
04-08-2025 We just gave a 12-hour tutorial on deep generative models at the Senegalese Computer Science Society and AI Hub Sénégal Summer School. The material can be found here. Thanks to Inria and the French Embassy for making this possible: we had amazing interactions in Dakar!
03-20-2025 Our Inria-Mila associated team was just created. See you soon in Montréal!
02-01-2025 Our friendly blog post on normalizing flows and conditional flow matching techniques was accepted at the ICLR 2025 Blog Post Track.
01-15-2025 I just gave a talk on how to retrain on synthetic data at the Mathematics Image and Applications conference! Here are the slides.
12-09-2024 I was at NeurIPS to present our paper Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences.
11-26-2024 We just released a blog post on normalizing flows and conditional flow matching techniques!
09-29-2024 I was delighted to be a keynote speaker at the ECCV workshop "The Dark Side of Generative AIs and Beyond". Here are the slides.
09-26-2024 Our paper showing that Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences was just accepted to NeurIPS! See you in Vancouver!
08-26-2024 Our recent work on self-consuming generative models and their biases was featured in The New York Times!
07-01-2024 I just started as an Inria researcher in the Malice team.
04-22-2024 The recording of the talk On the Stability of Iterative Retraining of Generative Models on their own Data at the Montreal Machine Learning Seminar can be found here.
01-16-2024 Our paper On the Stability of Iterative Retraining of Generative Models on their own Data was accepted to ICLR 2024 with a spotlight; see you in Vienna!
12-18-2023 We just released our paper proving that Q-learners can learn to collude in the iterated prisoner's dilemma!
01-07-2023 On July 1st, 2024, I will join Inria as a Research Scientist!
Previous News
05-12-2023 I will present our paper On the Limitations of Elo: Real-World Games are Transitive, not Additive at the Berkeley Multi-Agent Reinforcement Learning Seminar.
Our paper Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning has been accepted at ICML 2023; see you in Hawaii!
Our paper On the Limitations of Elo: Real-World Games are Transitive, not Additive has been accepted to AISTATS 2023; see you in Spain!
I just presented our paper Synergies between Disentanglement and Sparsity: a Multi-task Learning Perspective at the Canadian Mathematical Society Winter Workshop.
I just presented our two papers Beyond L1: Faster and Better Sparse Models with skglm and The Curse of Unrolling: Rate of Differentiating Through Optimization at NeurIPS 2022.
I was awarded the top reviewer award at NeurIPS 2022!
Our papers Beyond L1: Faster and Better Sparse Models with skglm and The Curse of Unrolling: Rate of Differentiating Through Optimization have been accepted to NeurIPS 2022!