Machine Learning II course:

Information Theory & Reinforcement Learning



General information:
*** Bring your laptop! ***



Part I : Reinforcement Learning

Partial lecture notes for this part are available here.

Chapter 1 : Introduction, Bandits, and Combination of Experts for time series prediction
Chapter 2 : Learning dynamics (Bellman equation, Dynamic Programming, Monte Carlo, Temporal Difference TD(0), Q-learning, Sarsa; a minimal Q-learning sketch follows this chapter list)
Chapter 3 : Learning dynamics II (Eligibility traces, TD(lambda), generalization and function approximation, illustrated with an Atari player)
Chapter 4 : Learning dynamics III (policy gradient), Monte Carlo Tree Search (minimax trees, alpha-beta pruning, Upper Confidence Tree, applied to Go with CrazyStone/MoGo/AlphaGo)
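As a preview of the tabular methods in Chapters 2 and 3, here is a minimal Q-learning sketch in Python. The environment interface (reset()/step()), the state and action counts, and the hyperparameters are illustrative assumptions for this sketch, not part of the course material.

import random

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    # Tabular Q-learning with an epsilon-greedy behaviour policy.
    # env is assumed (for this sketch) to expose reset() -> state and
    # step(action) -> (next_state, reward, done).
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s_next, r, done = env.step(a)
            # Off-policy update: bootstrap on the greedy value of s_next.
            target = r + (0.0 if done else gamma * max(Q[s_next]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s_next
    return Q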

Part II : Information Theory

Chapter 5 : Entropy (a short code sketch follows this chapter list)
Chapter 6 : Compression/Prediction/Generation equivalence
Chapter 7 : Kolmogorov complexity
Chapter 8 : Fisher information
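To give a concrete feel for Chapter 5, a short Python sketch of the Shannon entropy of a discrete distribution; the example distributions are arbitrary, not taken from the course.

import math

def entropy(p, base=2):
    # Shannon entropy H(p) = -sum_i p_i * log(p_i), in bits by default.
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

print(entropy([0.5, 0.5]))  # a fair coin: 1.0 bit
print(entropy([0.9, 0.1]))  # a biased coin: about 0.47 bits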

Part III : Reinforcement Learning using Information Theory, and other advanced topics

Chapter 9 : Reinforcement learning based on information theory (e.g., Phi-MDP, KL-UCB, AIXI), and robotics
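As a small taste of Chapter 9, a minimal sketch of the KL-UCB index for a Bernoulli bandit arm; the log(t) exploration budget (without the optional log(log(t)) refinement) and the bisection tolerance are simplifying assumptions.

import math

def kl_bernoulli(p, q, eps=1e-12):
    # KL divergence between Bernoulli(p) and Bernoulli(q).
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, tol=1e-6):
    # Largest q >= mean such that pulls * kl(mean, q) <= log(t),
    # found by bisection (kl(mean, q) is increasing in q for q >= mean).
    budget = math.log(max(t, 2)) / pulls
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# An arm with empirical mean 0.5 after 10 pulls at round 100: about 0.89.
print(kl_ucb_index(0.5, 10, 100))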



PS: we are looking for students to work on a range of topics, from machine learning to computer vision!

