On November 19, 2019, the DeepMind team released a
preprint introducing MuZero.
== Derivation from AlphaZero ==

MuZero (MZ) combines the high-performance planning of the AlphaZero (AZ) algorithm with approaches to model-free reinforcement learning. The combination allows for more efficient training in classical planning regimes, such as Go, while also handling domains with much more complex inputs at each stage, such as visual video games. MuZero was derived directly from AZ code, sharing its rules for setting hyperparameters. Differences between the two approaches include:
• AZ's planning process uses a simulator that knows the rules of the game and must be explicitly programmed; a neural network then predicts the policy and value of a future position. Perfect knowledge of the game rules is used to model state transitions in the search tree, the actions available at each node, and the termination of a branch of the tree. MZ has no access to the rules and instead learns a model of them with neural networks.
• AZ has a single model for the game (from board state to predictions); MZ has separate models for representation of the current state (from board state into an internal embedding), dynamics of states (how actions change representations of board states), and prediction of the policy and value of a future position (given a state's representation); a sketch of this decomposition follows the list.
• MZ's hidden model may be complex, and it may turn out that it can host computation; exploring the details of the hidden model in a trained instance of MZ is a topic for future work.
• MZ does not expect a two-player, winner-takes-all game. It works with standard reinforcement-learning scenarios, including single-agent environments with continuous intermediate rewards, possibly of arbitrary magnitude and with time discounting. AZ was designed for two-player games that could be won, drawn, or lost.
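The three-function decomposition can be illustrated with a minimal Python sketch in which NumPy linear maps stand in for the trained networks. The function names (represent, dynamics, predict), the dimensions, and the greedy unroll below are assumptions made for illustration, not DeepMind's implementation:

# A minimal sketch of MuZero's three-network decomposition. NumPy linear
# maps stand in for the trained networks; every name and dimension here
# (represent, dynamics, predict, EMBED_DIM, ...) is illustrative, not
# taken from DeepMind's code.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, EMBED_DIM, NUM_ACTIONS = 64, 32, 4

# Placeholder weights; in MuZero all three functions are learned jointly.
W_h = rng.standard_normal((EMBED_DIM, OBS_DIM)) * 0.1
W_g = rng.standard_normal((EMBED_DIM, EMBED_DIM + NUM_ACTIONS)) * 0.1
w_r = rng.standard_normal(EMBED_DIM + NUM_ACTIONS) * 0.1
W_p = rng.standard_normal((NUM_ACTIONS, EMBED_DIM)) * 0.1
w_v = rng.standard_normal(EMBED_DIM) * 0.1

def represent(observation):
    # Representation: map a raw observation to the internal embedding.
    return np.tanh(W_h @ observation)

def dynamics(state, action):
    # Dynamics: predict the next embedding and the immediate reward.
    # No game rules are consulted; the transition itself is learned.
    x = np.concatenate([state, np.eye(NUM_ACTIONS)[action]])
    return np.tanh(W_g @ x), w_r @ x

def predict(state):
    # Prediction: policy distribution and value for an internal state.
    e = np.exp(W_p @ state - (W_p @ state).max())
    return e / e.sum(), w_v @ state

# Planning unrolls entirely inside the learned model: embed the real
# observation once, then step the dynamics network for candidate actions.
state = represent(rng.standard_normal(OBS_DIM))
for step in range(3):
    policy, value = predict(state)
    action = int(policy.argmax())
    state, reward = dynamics(state, action)
    print(f"step {step}: action={action} reward={reward:+.3f} value={value:+.3f}")

Note that the unroll never calls a game simulator: after the single represent call on the real observation, every subsequent state is an internal embedding produced by the learned dynamics function, which is the key contrast with AZ's rule-based search.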
== Comparison with R2D2 ==

The previous state-of-the-art technique for learning to play the suite of Atari games was R2D2, the Recurrent Replay Distributed DQN. MuZero surpassed both R2D2's mean and median performance across the suite of games, though it did not do better in every game.

== Training and results ==