Since GGP AI must be designed to play multiple games, its design cannot rely on algorithms created specifically for certain games. Instead, the AI must be designed using algorithms whose methods can be applied to a wide range of games. Recent GGP systems such as Regular Boardgames (RBG) and Ludii have explored alternative rule representations to improve reasoning efficiency and to support a broader variety of games. The AI must also operate as an ongoing process, adapting to the current game state rather than depending on the output of previous states. For this reason, open-loop techniques are often most effective. The Ludii software was used to investigate how the ancient Roman board game
Ludus Coriovalli may have been played: rulesets from many similar games were used as references to play out plausible rules for the game. It was determined that it was most likely a two-player blocking game; previously, the oldest known examples of such games dated to medieval times. A popular method for developing GGP AI is the
Monte Carlo tree search (MCTS) algorithm, often used together with the UCT method (Upper Confidence bounds applied to Trees). Variations of MCTS have been proposed to better play certain games, as well as to make the algorithm compatible with video game playing. Another tree-search variant that has been used is the
Directed Breadth-first Search (DBS), in which a child node of the current state is created for each available action; the algorithm then visits the children in order of decreasing average reward until either the game ends or it runs out of time. In each tree-search method, the AI simulates potential actions and ranks each path based on its average reward, in terms of points earned. Under these assumptions, game-playing AI can be created by quantifying the player input, the game outcomes, and how the various rules apply, and then using algorithms to compute the most favorable path.

==See also==