| Chapter | Title | Notes | Videos | Exercises |
|---|---|---|---|---|
| 1 | Deep learning vs. classical ML and optimization | html | 1 | Hyperparameter and training basics; environment and practical tips and tricks |
| 2 | Interpretability | html | 1, 2, 3, 4, 5 | Visualization with Grad-CAM: notebook and test images |
| 3 | Architectures | html | 1, 2 | Graph-NN: instructions, code, and baseline |
| 4 | Small data: weak supervision, transfer learning, and incorporation of priors | html | 1, 2 | Transfer learning: Jupyter notebook |
| 5 | Deep learning and physics (invariances, priors, and physical properties), with Michele Alessandro Bucci (Safran) | html + refs | 1, 2 | Learning dynamical systems |
| 6 | Generative models and uncertainty quantification, by Maxence Ernoult (IBM Research) | - | 1, 2 | Generative models |
| 7 | Modeling tasks and losses, generalization and guarantees, and AutoML / Auto-DeepLearning, by Adrian El Baz (MILA) | html 1 + 2 | 1, 2 | |
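The chapter 2 exercise centers on Grad-CAM visualization. As a rough orientation for that exercise, here is a minimal NumPy sketch of the core Grad-CAM computation (the function name `grad_cam` and the random toy tensors are illustrative, not the course's notebook code): channel weights come from global-average-pooling the gradients of the class score, and the heatmap is the ReLU of the weighted sum of activation maps.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from a conv layer's activations and the
    gradients of the class score w.r.t. those activations.

    activations, gradients: shape (C, H, W).
    Returns an (H, W) map scaled to [0, 1].
    """
    # Channel importance weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                      # (C,)
    # Weighted combination of activation maps, then ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Scale to [0, 1] for display as a heatmap overlay.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy tensors standing in for a real network's forward activations
# and backpropagated gradients (in practice, captured via hooks).
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)
```

In a real run the heatmap would be upsampled to the input image's resolution and overlaid on the test images provided with the notebook.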