MVA and MscAI masters, and CentraleSupelec curriculum, January-March 2026
News : (reload the page for the latest news...)
- First course: 9h45 in théâtre Rousseau e.070, Bouygues building, Thursday 22nd of January
- 2026 schedule updated
- Registration for the 2026 course: click here! The previous course (January-March 2025) can be found there; the 2026 course will follow roughly the same outline, though not exactly.
To attend the 2026 course (which starts in January 2026), please register here first.
NB: Auditors (auditeurs libres) are welcome; just subscribe as well (no need to ask me by email).
To access only the recordings of previous courses (without attending the course), register here instead.
Requirements : having already followed a course about neural networks (this is an advanced deep learning course).
Typical mathematical notions used: differential calculus, Bayesian statistics, analysis, information theory.
Teaching team :
Most lectures: Guillaume Charpiat
Practical sessions / project supervision: Félix Houdouin, Théo Rudkiewicz, Jules Soria and Luca Teodorescu (incl. materials by numerous previous great teachers: Victor Berger, Alessandro Bucci, Styliani Douka, Loris Felardos, Rémy Hosseinkhan, Wenzhuo Liu, Matthieu Nastorg, Francesco Pezzicoli, Cyriaque Rousselot and Antoine Szatkownik)
Course validation : you will have the choice between two options :
either by practicals:
5 practicals (notebooks), presented in class, to do at home in groups of 1-3 people, and to hand in within 2 weeks
1 final exam (on paper, in class, alone)
or by project:
a project combining several concepts from the course, to do in groups of 1-3 people, with a short report to write + a defense
1 final exam (on paper, in class, alone)
Schedule : Note that the schedule is irregular and that locations vary.
Sessions take place on Thursdays, 9h45 - 13h (a 3-hour course + a 15-minute break), at various places at CentraleSupelec:
Session 1 : Thursday January 22nd (théâtre Rousseau, Bouygues building -- known as room e.070)
Session 2 : Interpretability: visualization and analysis (a minimal illustrative example of gradient-based visualization is sketched after this schedule) → [2023] Course notes (pdf) (handwritten with drawings) and lesson summary (html) → [2023] Video recording: part 1 [500MB], part 2 [700MB]
Session 3 : Architectures → [2023] Course notes (pdf) part 1 + part 2 (handwritten with drawings) → [2025] lesson summary → [2023] Video recording: part 1 [500MB] (theory: prior, initialization, ...) and part 2 [500MB] (architecture zoo, attention, graph-NN)
Session 4 : Issues with datasets (biases, privacy...) → [2023] Course notes (pdf) (handwritten with drawings) and lesson summary (html) → [2023] Video recording: part 3 [700MB], part 4 [400MB]
Session 5 : Small data and frugal AI: weak supervision, transfer, compression and incorporation of priors → [2023] Course notes (pdf) (handwritten with drawings) and lesson summary → [2023] Video recording: part 1 [500MB] and part 2 [500MB]
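To give a concrete flavour of the interpretability session (Session 2 above), here is a minimal, purely illustrative sketch of gradient-based saliency, one classical visualization technique; the tiny model, the random input and all shapes are placeholder assumptions, not course material:

```python
# Minimal sketch of gradient-based saliency (illustrative only).
import torch
import torch.nn as nn

# Tiny placeholder classifier (an assumption for this example, not a course model).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

x = torch.randn(1, 3, 32, 32, requires_grad=True)  # placeholder "image"
score = model(x)[0].max()   # score of the most activated class
score.backward()            # gradient of that score w.r.t. the input pixels

# Per-pixel importance: largest absolute gradient across the 3 color channels.
saliency = x.grad.abs().max(dim=1)[0]   # shape (1, 32, 32)
print(saliency.shape)
```

This is only a toy illustration of the kind of visualization the session title refers to; see the linked course notes and recordings for the actual content.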
Guaranteeing the absence of privacy leakage of generators trained on sensitive data, using an Extreme Value Theory framework on distances to closest neighbors, with Cyril Furtlehner (see arXiv:2510.24233); a rough sketch of the nearest-neighbor-distance idea is given after this list.
Links between explainability, frugal AI, robustness and formal proofs of neural networks: looking for statistically-meaningful concepts and enhancing them. In collaboration with PEPR IA SAIF.
and lots of other very interesting topics, on demand (deep learning to speed up fluid mechanics simulations, for dynamical systems, for physics; learning causality, ...)
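As a purely hypothetical illustration of the first topic above (not the method of the cited paper), one can look at how close each generated sample lies to its nearest training point and model the lower tail of these distances with a peaks-over-threshold fit from Extreme Value Theory; the data, threshold and decision rule below are all illustrative assumptions:

```python
# Hedged sketch: nearest-neighbor distances + an EVT (peaks-over-threshold) tail fit
# to flag generated samples suspiciously close to the training set. Illustrative only.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.stats import genpareto

rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 16))       # placeholder "sensitive" training data
generated = rng.normal(size=(1000, 16))   # placeholder samples from a generator

# Distance of each generated sample to its closest training point.
knn = NearestNeighbors(n_neighbors=1).fit(train)
dist, _ = knn.kneighbors(generated)
dist = dist.ravel()

# Peaks-over-threshold on the *lower* tail: flip small distances into exceedances
# below a low threshold u, and fit a generalized Pareto distribution to them.
u = np.quantile(dist, 0.05)               # illustrative threshold choice
excess = u - dist[dist < u]
shape, _, scale = genpareto.fit(excess, floc=0.0)

# Estimated probability of a generated sample falling within a critical radius eps
# of a training point; a large value would suggest memorization / privacy leakage.
eps = 0.1 * u                             # illustrative critical radius
p_eps = (dist < u).mean() * genpareto.sf(u - eps, shape, loc=0.0, scale=scale)
print(f"estimated P(nearest-neighbor distance < {eps:.3f}) ~ {p_eps:.2e}")
```

The actual framework of the project is of course more involved; this only illustrates the two ingredients named in the topic, namely distances to closest neighbors and an extreme-value model of their tail.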