NSF grants for computing, data, and scientific visualization resources are allocated to researchers who investigate the Earth system through simulation. The current HPC environment includes two
petascale supercomputers, data analysis and visualization
servers, an operational weather forecasting system, an experimental supercomputing architecture platform, a centralized
file system, a data storage resource, and an archive of historical research data. All computing and support systems required for scientific
workflows are attached to the shared, high-speed central file system, which improves scientific productivity and reduces costs by enabling researchers to analyze and visualize data files in place at the NWSC.

=== Supercomputer: Yellowstone ===
In 2012, the Yellowstone supercomputer was installed at the NWSC as its inaugural HPC resource. Yellowstone is an IBM iDataPlex cluster consisting of 72,576
Intel Sandy Bridge EP
processor cores in 4,536 16-core
nodes, each with 32
gigabytes of memory. All nodes are interconnected with a full
fat tree Mellanox FDR InfiniBand network. Yellowstone has a peak performance of 1.504
petaflops and has demonstrated a computational capability of 1.2576 petaflops as measured by the High-Performance
LINPACK (HPL) benchmark. It debuted as the world's 13th fastest computer in the November 2012 ranking by the
TOP500 organization. Also in November 2012, Yellowstone debuted as the world's 58th most energy-efficient supercomputer, operating at 875.34 megaflops per watt as ranked by the Green500 organization.
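As a rough consistency check (an illustration, not a figure from the source), the peak rating follows from the core count, the 2.6-GHz clock of the Xeon E5-2670 processors (noted in the Pronghorn description below), and the Sandy Bridge microarchitecture's 8 double-precision floating-point operations per core per cycle:

<math>72{,}576 \times 2.6\times10^{9} \times 8 \approx 1.51\times10^{15}\ \text{flops/s},</math>

essentially the quoted 1.504-petaflops peak; the small gap is consistent with a few nodes being set aside for system services rather than batch computation. Dividing the measured HPL result by the Green500 efficiency likewise gives the approximate power draw under load:

<math>\frac{1.2576\times10^{15}\ \text{flops/s}}{875.34\times10^{6}\ \text{flops/s/W}} \approx 1.44\ \text{MW}.</math>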

=== Supercomputer: Cheyenne ===
Becoming operational in 2017, the 5.34-petaflops Cheyenne supercomputer provides more than three times the computational capacity of Yellowstone. Cheyenne is an SGI ICE XA system with 4,032 dual-socket compute nodes containing 145,152 18-core, 2.3-GHz Intel Xeon E5-2697v4 (Broadwell) processor cores and 315 terabytes of total memory. The nodes are interconnected by a Mellanox EDR InfiniBand network with a 9-D enhanced hypercube topology that performs with a latency of only 0.5 microseconds. Cheyenne runs the SUSE Linux Enterprise Server 12 SP1 operating system. Like Yellowstone, Cheyenne's design and configuration provide balanced I/O and exceptional computational capacity for the data-intensive needs of its user community. Cheyenne debuted as the world's 20th most powerful computer in the November 2016 TOP500 ranking. It was scheduled to go offline on December 31, 2023.
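The same kind of hedged arithmetic reproduces Cheyenne's rating: each Broadwell core can execute up to 16 double-precision floating-point operations per cycle (two 256-bit fused multiply-add units), so

<math>145{,}152 \times 2.3\times10^{9} \times 16 \approx 5.34\times10^{15}\ \text{flops/s},</math>

matching the quoted 5.34 petaflops; likewise, 4,032 nodes × 2 sockets × 18 cores = 145,152 cores.

The hypercube topology can also be stated precisely: in a plain binary hypercube, two vertices are linked when their identifiers differ in exactly one bit, so a 9-D hypercube has 2^9 = 512 vertices, each with 9 direct neighbors; in a system like Cheyenne each vertex is a switch serving many nodes, and the "enhanced" variant adds further links. The following minimal Python sketch, illustrative rather than any NCAR software, enumerates those neighbors:

<syntaxhighlight lang="python">
def hypercube_neighbors(node_id: int, dims: int = 9) -> list[int]:
    """IDs directly linked to node_id in a dims-dimensional binary hypercube."""
    # Flipping any single bit of the identifier yields a neighbor.
    return [node_id ^ (1 << bit) for bit in range(dims)]

print(hypercube_neighbors(0))   # [1, 2, 4, 8, 16, 32, 64, 128, 256]
# Any two of the 512 vertices are at most 9 single-bit flips (hops) apart.
</syntaxhighlight>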

=== Supercomputer: Derecho ===
A new supercomputer was announced on January 27, 2021, capable of 20 quadrillion calculations per second and said to be 3.5 times faster than the system then in operation at the facility. Originally scheduled to be operational in early 2022, it officially launched on July 7, 2023. Derecho is a 19.87-petaflops HPE Cray system with 2,488 homogeneous (CPU-only) compute nodes and 82 heterogeneous (GPU) compute nodes containing a total of 328 NVIDIA A100 GPUs, as well as 692 terabytes of total memory.

=== Data analysis and visualization clusters: Geyser and Caldera ===
The Geyser and Caldera clusters are specialized data analysis and visualization resources within the data-centric Yellowstone environment. The Geyser data analysis server is a 640-core cluster of 16 nodes, each with 1
terabyte of memory. With its large per-node memory, Geyser is designed to facilitate large-scale data analysis and
post-processing tasks, including 3D visualization, with applications that do not support
distributed-memory parallelism. The Caldera computational cluster has 256 cores in 16 nodes, each with 64 gigabytes of memory and two
Graphics Processing Units (GPUs) for use as either computational processors or graphics accelerators. Caldera's two
NVIDIA Tesla GPUs per node support parallel processing, visualization activities, and development and testing of
general-purpose GPU (GPGPU) code.
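As a hedged illustration of what "general-purpose GPU code" means in practice (CuPy is assumed here purely for brevity; the source names no software stack, and CUDA C would be equally typical), an array computation can be offloaded to a GPU like this:

<syntaxhighlight lang="python">
# Minimal GPGPU sketch; requires an NVIDIA GPU and the CuPy package.
import cupy as cp

x = cp.random.random(10_000_000)   # array allocated in GPU memory
y = cp.sqrt(x) + cp.sin(x)         # element-wise kernels execute on the GPU
print(float(y.mean()))             # only the scalar result returns to the host
</syntaxhighlight>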

=== Operational forecasting system for Antarctic weather: Erebus ===
The center also houses a separate, smaller IBM iDataPlex cluster named Erebus to support the operational forecasts of the
NSF Office of Polar Programs'
Antarctic Mesoscale Prediction System (AMPS). Erebus has 84 nodes similar to Yellowstone's, an FDR-10 InfiniBand interconnect, and a dedicated 58-terabyte file system. If needed, Yellowstone can run Erebus's daily weather forecasts for the Antarctic continent to ensure that the worldwide community of users receives these forecasts without interruption.

=== Experimental supercomputing architecture platform: Pronghorn ===
Pronghorn's architecture has promise for meeting the Earth system sciences' demanding requirements for data analysis, visualization, and GPU-assisted computation. As part of a partnership between Intel, IBM, and NCAR, this exploratory system is being used to evaluate the effectiveness of the Xeon Phi coprocessor's
Many Integrated Core (MIC) architecture for running climate, weather, and other environmental applications. If these coprocessors prove beneficial to key NCAR applications, they can easily be added to the standard IBM iDataPlex nodes in Yellowstone as a cost-effective way to extend its capabilities. Pronghorn has 16 dual-socket IBM dx360 M4 nodes featuring Intel Xeon Phi 5110P coprocessors and 2.6-gigahertz Intel Sandy Bridge (Xeon E5-2670) cores. The system has 64 gigabytes of
DDR3-1600 memory per node (63 GB usable memory per node) and is interconnected with a full fat tree Mellanox FDR InfiniBand network.

=== Centralized file system: GLADE ===
Geyser, Caldera, and Yellowstone all mount the central file system, named the GLobally Accessible Data Environment (GLADE), which provides work spaces common to all HPC resources at the NWSC for computation, analysis, and visualization. This allows users to analyze data files in place, without sending large amounts of data across a network or creating duplicate copies in multiple locations. GLADE provides centralized high-performance file systems spanning supercomputing, data post-processing, data analysis, visualization, and HPC-based data transfer services. GLADE also hosts data from NCAR's
Research Data Archive, NCAR's Community Data Portal, and the
Earth System Grid that curates
CMIP5/
AR5 data. The GLADE central disk resource has a usable storage capacity of 36 petabytes as of February 2017. GLADE has a sustainable aggregate I/O
bandwidth of more than 220 gigabytes per second.
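Illustrative arithmetic (example figures, not from the source) makes the case for analysis in place: copying a 100-terabyte dataset over a 10-gigabit-per-second network link would take

<math>\frac{100\times10^{12}\ \text{B}}{1.25\times10^{9}\ \text{B/s}} = 8\times10^{4}\ \text{s} \approx 22\ \text{hours},</math>

whereas attached systems reading it directly from GLADE at the aggregate rate above could, in principle, do so in under eight minutes.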

=== Data archival service: HPSS ===
Archival data storage at the NWSC is provided by a
High Performance Storage System (HPSS) that consists of tape libraries with a storage capacity of 320 petabytes. These scalable, robotic systems comprise six Oracle StorageTek SL8500 tape libraries using T10000C tape drives, each with an I/O rate of 240 megabytes per second.
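At that per-drive rate, streaming a single terabyte to or from tape takes roughly

<math>\frac{10^{12}\ \text{B}}{240\times10^{6}\ \text{B/s}} \approx 4.2\times10^{3}\ \text{s} \approx 70\ \text{minutes},</math>

which is why such robotic libraries operate many drives in parallel.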

=== Climate data archive: RDA ===
NWSC's data-intensive computing strategy includes a full suite of community data services. NCAR develops data products and services that address the future challenges of data growth, preservation, and management. The
Research Data Archive (RDA) contains a large collection of meteorological and oceanographic datasets that support scientific studies in climate, weather, hydrology, Earth system modeling, and other related sciences. It is an open resource that is also used by the global research community.

== Educational projects ==