Title: Life: play and win in 20 trillion moves

Author: Stuart Russell, Computer Science Division, University of California, Berkeley

Abstract

Given the impossibility of perfectly rational behavior, the concept of bounded optimality provides a more satisfactory formal definition of intelligence; yet we still lack ideas for designing systems that can achieve reasonable decision quality over long time scales. The talk begins with a classical idea - hierarchical planning with high-level actions of extended duration - and resolves the longstanding open problem of "downward refinement," providing the first algorithms capable of proving that a high-level plan is correct and optimal without considering its concrete implementations.
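The idea of reasoning about a high-level plan without expanding its implementations can be illustrated with a small sketch. In this hypothetical example (the names `HLA`, `verify_plan`, and the toy domain are illustrative assumptions, not the talk's actual algorithms), each high-level action carries an optimistic and a pessimistic set-valued description of the states it can reach; a plan is provably correct if the goal is reachable under the pessimistic bound, and provably infeasible if it is unreachable even under the optimistic bound:

```python
# Sketch of angelic-style plan verification (illustrative only):
# each high-level action (HLA) has an optimistic description (an upper
# bound on reachable states) and a pessimistic one (states guaranteed
# reachable). Composing these along a plan lets us certify or refute
# the plan without looking at any concrete implementation.
from dataclasses import dataclass
from typing import Callable, FrozenSet

State = str  # toy state space: abstract location labels


@dataclass
class HLA:
    name: str
    optimistic: Callable[[FrozenSet[State]], FrozenSet[State]]
    pessimistic: Callable[[FrozenSet[State]], FrozenSet[State]]


def reachable(plan, start, bound):
    """Compose the set-valued descriptions of each HLA along the plan."""
    states = start
    for hla in plan:
        states = getattr(hla, bound)(states)
    return states


def verify_plan(plan, start, goal):
    """Return 'correct', 'infeasible', or 'unknown' (needs refinement)."""
    if reachable(plan, start, "pessimistic") & goal:
        return "correct"      # a guaranteed-reachable state meets the goal
    if not (reachable(plan, start, "optimistic") & goal):
        return "infeasible"   # even the best case misses the goal
    return "unknown"          # bounds disagree: refine further


# Toy navigation domain: A -> B -> G.
go_B = HLA("go_B",
           optimistic=lambda s: s | frozenset({"B"}),
           pessimistic=lambda s: frozenset({"B"}) if "A" in s else frozenset())
go_G = HLA("go_G",
           optimistic=lambda s: s | frozenset({"G"}),
           pessimistic=lambda s: frozenset({"G"}) if "B" in s else frozenset())

print(verify_plan([go_B, go_G], frozenset({"A"}), frozenset({"G"})))  # -> correct
```

When the answer is "unknown", a planner would refine the high-level plan into lower-level steps and re-check; the point of the result described above is that the "correct" and "infeasible" answers are already conclusive at the high level.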

The classical setting of hierarchical planning is then generalized to that of hierarchical reinforcement learning. I describe a concurrent partial-programming language, ALisp, that may be used to specify constraints on behavior, leaving unspecified those choices that the agent must learn to make on its own. ALisp comes with reinforcement learning algorithms that, in the limit, find the optimal completion of any given partial program. Initial scaling experiments are promising.
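The partial-programming idea can be sketched as follows. This is illustrative Python, not actual ALisp syntax; the `choose` primitive, the toy environment, and the Monte Carlo learning rule are all assumptions made for the example. The programmer fixes the control structure but marks one decision as an explicit choice point, and a learner fills it in by learning Q-values indexed by (choice point, environment state):

```python
# Sketch of a partial program with a learned choice point (illustrative,
# not ALisp). The program's loop structure is fixed by the programmer;
# the *direction* chosen at each step is left unspecified and is learned
# from reward by a simple Monte Carlo backup over Q-values.
import random
from collections import defaultdict

Q = defaultdict(float)  # Q[(choice_point_label, state, option)]


def choose(label, state, options, epsilon):
    """Choice point: epsilon-greedy over the learned Q-values."""
    if random.random() < epsilon:
        return random.choice(options)
    return max(options, key=lambda o: Q[(label, state, o)])


def run_episode(epsilon, alpha=0.5, gamma=0.9):
    """Partial program: walk on the integer line from 0; reward 1 for
    reaching position 2 within 10 steps, 0 otherwise."""
    pos, trajectory = 0, []
    for _ in range(10):
        a = choose("step", pos, ["left", "right"], epsilon)
        trajectory.append((pos, a))
        pos += 1 if a == "right" else -1
        if pos == 2:
            break
    ret = 1.0 if pos == 2 else 0.0      # terminal reward only
    for s, a in reversed(trajectory):   # Monte Carlo backup along the trace
        Q[("step", s, a)] += alpha * (ret - Q[("step", s, a)])
        ret *= gamma


random.seed(0)
for _ in range(500):           # learn the completion by random exploration
    run_episode(epsilon=1.0)

# The learned (greedy) completion heads toward the rewarding position:
print(choose("step", 1, ["left", "right"], epsilon=0.0))  # -> right
```

The point of the construction is the separation of concerns: the partial program constrains behavior, while the learning algorithm is responsible only for the choices left open, which is what makes convergence to the optimal completion of the program a well-posed target.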

Finally, I will briefly explore the implications of this work for research on bounded rationality, metareasoning, and artificial intelligence.

[Joint work with Ron Parr, David Andre, Bhaskara Marthi, Andy Zimdars, David Latham, Carlos Guestrin, Jason Wolfe]