The petaFLOPS barrier was first broken by the RIKEN MDGRAPE-3 supercomputer in 2006, and then on 16 September 2007 by the distributed computing Folding@home project.
The first petascale supercomputer, IBM's Roadrunner, entered operation in 2008 with a sustained performance of 1.026 petaFLOPS.
Jaguar became the next computer to break the petaFLOPS milestone, later in 2008, and reached a performance of 1.759 petaFLOPS after a 2009 update.

In 2020, Fugaku became the fastest supercomputer in the world, reaching 415 petaFLOPS in June 2020; it later achieved an Rmax of 442 petaFLOPS in November of the same year. In 2022, exascale computing (10^18 FLOPS of computational power) overtook petascale computing in terms of power with the development of Frontier, which surpassed Fugaku with an Rmax of 1.102 exaFLOPS in June 2022.
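As a rough illustration of these scales, the Rmax figures quoted above can be compared with a few lines of Python. This is only a sketch: the runtime estimates assume each machine perfectly sustains its Rmax, which real workloads never do, and the 10^21-operation workload is an arbitrary example size.

```python
# Rmax values quoted above, in FLOPS
fugaku_rmax = 442e15      # 442 petaFLOPS (November 2020)
frontier_rmax = 1.102e18  # 1.102 exaFLOPS (June 2022)

# Frontier's lead over Fugaku
speedup = frontier_rmax / fugaku_rmax
print(f"Frontier / Fugaku: {speedup:.2f}x")  # → 2.49x

# Time to execute 10^21 floating-point operations,
# assuming perfectly sustained Rmax (an idealization)
work = 1e21
print(f"Fugaku:   {work / fugaku_rmax:.0f} s")    # → 2262 s
print(f"Frontier: {work / frontier_rmax:.0f} s")  # → 907 s
```

The comparison shows why the peta-to-exa transition matters: a factor of roughly 2.5 in Rmax translates directly into proportionally shorter time-to-solution for compute-bound workloads.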
== Artificial intelligence ==

Modern artificial intelligence (AI) systems require large amounts of computational power to train model parameters. OpenAI employed 25,000 Nvidia A100 GPUs to train GPT-4, using a total of 133 septillion floating-point operations.

== See also ==