Machine Learning II course:

Information Theory & Reinforcement Learning



General information:
*** Bring your laptop! ***



Part I : Reinforcement Learning

Chapter 1 : Introduction, Bandits, and Combination of Experts for time series prediction
Chapter 2 : Learning dynamics (Bellman equation, Dynamic Programming, Monte Carlo, Temporal Difference TD(0), Q-learning, Sarsa); a tabular Q-learning sketch follows this list
Chapter 3 : Learning dynamics II (Eligibility traces, TD(lambda), generalization and function approximation, with an Atari player as an example)
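
As a taste of Chapter 2, here is a minimal tabular Q-learning sketch in Python. It assumes a toy discrete environment exposing reset() and step(a) returning (next_state, reward, done); that interface and all parameter values are illustrative assumptions, not course material.

import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    # Tabular Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()                      # assumed interface: returns an initial state index
        done = False
        while not done:
            # epsilon-greedy behaviour policy
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)    # assumed interface
            # off-policy TD(0) target: bootstrap on the greedy value of the next state
            target = r if done else r + gamma * np.max(Q[s_next])
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q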

Part II : Information Theory

Chapter 4 : Entropy (a small worked example follows this list)
Chapter 5 : Compression/Prediction/Generation equivalence
Chapter 6 : Kolmogorov complexity
Chapter 7 : Fisher information
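
As a small worked example for Chapter 4, the Shannon entropy H(p) = -sum_i p_i log2 p_i of a discrete distribution can be computed directly; the example distributions below are arbitrary illustrations.

import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy in bits: H(p) = -sum_i p_i * log2(p_i), with the convention 0 * log 0 = 0
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                   # normalise, in case p is given as counts
    nz = p > eps                      # drop zero-probability outcomes
    return float(-np.sum(p[nz] * np.log2(p[nz])))

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))    # ~0.47 bits: a biased coin is less uncertain
print(entropy([0.25] * 4))    # 2.0 bits: a uniform four-outcome variable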

Part III : Reinforcement Learning using Information Theory, and other advanced topics

Chapter 8 : Monte Carlo Tree Search (minimax trees, alpha-beta pruning, Upper Confidence Trees (UCT), applied to Go with CrazyStone/MoGo/AlphaGo) + Phi-MDP
Chapter 9 : Reinforcement learning based on information theory (e.g., KL-UCB, AIXI), and robotics (a UCB1 bandit sketch follows below)
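
Upper-confidence-bound methods recur throughout the course (bandits in Chapter 1, Upper Confidence Trees in Chapter 8, KL-UCB in Chapter 9). Here is a minimal UCB1 sketch for a Bernoulli bandit; the arm probabilities and horizon are made up for illustration.

import numpy as np

def ucb1(arm_probs, horizon=10000, seed=0):
    # UCB1: at each step pull the arm maximising empirical_mean_i + sqrt(2 * ln(t) / n_i)
    rng = np.random.default_rng(seed)
    k = len(arm_probs)
    counts = np.zeros(k)              # n_i: number of pulls of arm i
    sums = np.zeros(k)                # cumulative reward of arm i
    for i in range(k):                # pull each arm once to initialise
        sums[i] += rng.random() < arm_probs[i]
        counts[i] += 1
    for t in range(k, horizon):
        bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
        i = int(np.argmax(sums / counts + bonus))
        sums[i] += rng.random() < arm_probs[i]   # Bernoulli reward
        counts[i] += 1
    return counts, sums / counts

counts, means = ucb1([0.2, 0.5, 0.7])
print(counts)   # the 0.7 arm should receive the large majority of the pulls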



PS: we are looking for students to work on various topics, from machine learning to computer vision!
