During play, TD-Gammon examines on each turn all possible legal moves and all their possible responses (
lookahead search), feeds each resulting board position into its
evaluation function, and chooses the move that leads to the board position that got the highest score. In this respect, TD-Gammon is no different than almost any other computer board-game program. TD-Gammon's innovation was in how it learned its evaluation function. TD-Gammon's learning algorithm consists of updating the weights in its neural net after each turn to reduce the difference between its evaluation of previous turns' board positions and its evaluation of the present turn's board position—hence "
temporal-difference learning". The score of any board position is a set of four numbers reflecting the program's estimate of the likelihood of each possible game result: White wins normally, Black wins normally, White wins a gammon, Black wins a gammon. For the final board position of the game, the algorithm compares with the actual result of the game rather than its own evaluation of the board position. The core of TD-gammon is a neural network with 3 layers. • The input layer has two types of neurons. • One type codes for the board position. They are non-negative integers ranging from 0 to 15, indicating the number of White or Black checkers at each board location. There are 99 input neurons for each, totaling 198 neurons. • Another type codes for hand-crafted features previously used in
Neurogammon. These features encoded standard concepts used by human experts, such as "advanced anchor," "blockade strength," "home board strength" and the probability of a "blot" (single checker) being hit. • The hidden layer contains hidden neurons. Later versions had more of these. • The output layer contains 4 neurons, representing the network's estimate of the probability ("equity") that the current board would lead to. The 4 neurons code for: White normal win, White gammon win, Black normal win, Black gammon win. Backgammon win is so rare that Tesauro opted to not represent it. After each turn, the learning algorithm updates each weight in the neural net according to the following rule: : w_{t+1} - w_t = \alpha(Y_{t+1} - Y_t)\sum_{k=1}^{t}\lambda^{t-k} \nabla_w Y_k where: : It was found that picking small \lambda offered performance roughly equally good, and large \lambda degraded performance. Because of this, after 1992, TD-Gammon was trained with \lambda = 0, degenerating into standard TD-learning. This saved compute by a factor of 2. == Development history ==