ACCESS Operations API v3.45.0

Attribute | Hive | Ookami | Jetstream2 LM | Jetstream2 GPU | Jetstream2 CPU | Derecho | Derecho-GPU | Delta CPU | DeltaAI | Delta GPU | OSG | Neocortex SDFlex | Bridges-2 RM | Bridges-2 EM | Bridges-2 GPU | Bridges-2 GPU-AI | Anton 2 | Anvil CPU | Anvil GPU | Expanse CPU | SDSC Voyager AI System | Expanse GPU | TACC Stampede3 | TAMU Launch | FASTER | ACES | KYRIC |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
advance_max_reservable_su | None | None | None | None | None | None | None | None | None | None | 0.0 | None | None | None | None | None | None | None | None | 512.0 | None | None | None | None | None | None | None |
advance_reservation_support | False | False | False | False | False | False | False | False | None | False | False | None | False | False | False | False | False | False | False | True | False | False | False | None | False | False | False |
batch_system | Slurm | Slurm | PBS Pro | PBS Pro | HTCondor | Slurm (with interactive access) | Slurm (with interactive access) | Slurm (with interactive access) | Slurm (with interactive access) | Slurm (with interactive access) | Slurm | Slurm | Slurm | Slurm | Slurm | Slurm | Slurm | Slurm | |||||||||
community_software_area | False | False | False | False | False | False | False | False | None | False | False | None | False | False | False | False | False | False | False | True | False | False | False | None | False | False | False |
cpu_count_per_node | 24 | 48 | 128 | 128 | 128 | 128 | 64 | 128 | 288 | 64 | 1 | None | 128 | 96 | 40 | 40 | None | 128 | 128 | 128 | None | 40 | 112 | None | 64 | None | 40 |
cpu_speed_ghz | None | 1.8 | 2.0 | 2.0 | 2.0 | 2.45 | 2.25 | 2.45 | None | 2.45 | 2.4 | None | 2.25 | 2.4 | 2.5 | 2.5 | None | 2.45 | 2.45 | 2.25 | None | 2.5 | None | None | 2.2 | None | 2.0 |
cpu_type | Xeon 6226 | Fujitsu A64FX-FX700 | AMD Milan 7713 | AMD Milan 7713 | AMD Milan 7713 | AMD EPYC 7763 (Milan) | AMD EPYC 7742 (Milan) | AMD Milan | NVIDIA Grace | AMD Milan | x86_64 | AMD EPYC 7742 | Intel Xeon Platinum 9260M | Intel Xeon Gold 6248 | Intel Xeon Gold 6248 | AMD Milan | AMD Milan | AMD EPYC 7742 (Rome) | Intel Xeon 6248 | Sapphire Rapids, Ice Lake, Skylake | Intel Ice Lake | Intel Sapphire Rapids | Broadwell class | ||||
disk_size_tb | None | 788.0 | None | None | None | None | None | None | None | None | 0.0 | None | None | None | None | None | None | None | None | 12000.0 | None | None | None | None | None | None | 300.0 |
display_in_xsede_su_calculator | False | True | True | True | True | False | False | True | None | True | True | None | True | True | True | False | False | True | True | True | False | True | False | None | True | True | True |
gpu_description | (4) NVIDIA A100 40 GB GPUs per node. | NVIDIA A100 (40 GB) | NVIDIA Hopper | NVIDIA A40 and A100 | Various | NVIDIA Tesla V100-32GB SXM2 each have 640 tensor cores that are specifically designed to accelerate deep learning, with peak performance of over 125 Tf/s for mixed-precision tensor operations. In addition, 5,120 CUDA cores support broad GPU functionality, with peak floating-point performance of 7.8 Tf/s fp64 and 15.7 Tf/s fp32. 32 GB of HBM2 (high-bandwidth memory) delivers 900 GB/s of memory bandwidth to each GPU, and NVLink 2.0 interconnects the GPUs at 50 GB/s per link, or 300 GB/s per GPU. Each Bridges-2 GPU node contains 8 NVIDIA Tesla V100-32GB SXM2 GPUs, for aggregate performance of 1 Pf/s mixed-precision tensor, 62.4 Tf/s fp64, and 125 Tf/s fp32, combined with 256 GB of HBM2 memory to support training large models and big data. | NVIDIA Tesla V100-32GB SXM2 each have 640 tensor cores that are specifically designed to accelerate deep learning, with peak performance of over 125 Tf/s for mixed-precision tensor operations. In addition, 5,120 CUDA cores support broad GPU functionality, with peak floating-point performance of 7.8 Tf/s fp64 and 15.7 Tf/s fp32. 32 GB of HBM2 (high-bandwidth memory) delivers 900 GB/s of memory bandwidth to each GPU, and NVLink 2.0 interconnects the GPUs at 50 GB/s per link, or 300 GB/s per GPU. Each Bridges-2 GPU node contains 8 NVIDIA Tesla V100-32GB SXM2 GPUs, for aggregate performance of 1 Pf/s mixed-precision tensor, 62.4 Tf/s fp64, and 125 Tf/s fp32, combined with 256 GB of HBM2 memory to support training large models and big data. | NVIDIA A100 | Each node has four NVIDIA V100s (32 GB SXM2), connected via NVLink. | Ponte Vecchio (PVC) | NVIDIA A100, A40, A30, A10, and T4 ||||||||||||||||
graphics_card | 4 GPUs per node, 128 GB HBM2e ||||||||||||||||||||||||||
info_resourceid | hive.gatech.access-ci.org | ookami.sbu.access-ci.org | jetstream2-lm.indiana.access-ci.org | jetstream2-gpu.indiana.access-ci.org | jetstream2.indiana.access-ci.org | derecho.ncar.ucar.edu | derecho-gpu.ncar.ucar.edu | delta-cpu.ncsa.access-ci.org | deltaai.ncsa.access-ci.org | delta-gpu.ncsa.access-ci.org | grid1.osg.access-ci.org | neocortex-sdflex.psc.access-ci.org | bridges2-rm.psc.access-ci.org | bridges2-em.psc.access-ci.org | bridges2-gpu.psc.access-ci.org | bridges2-gpu-ai.psc.access-ci.org | anton2.psc.access-ci.org | anvil.purdue.access-ci.org | anvil-gpu.purdue.access-ci.org | expanse.sdsc.access-ci.org | voyager.sdsc.access-ci.org | expanse-gpu.sdsc.access-ci.org | stampede3.tacc.access-ci.org | launch.tamu.access-ci.org | faster.tamu.access-ci.org | aces.tamu.access-ci.org | kyric.uky.access-ci.org |
interconnect | Mellanox HDR100 | Ethernet (100GigE) | Ethernet (100GigE) | Ethernet (100GigE) | Slingshot-11 | Slingshot-11 | Slingshot | Slingshot | Slingshot | None | HDR-200 InfiniBand | Dual-rail HDR-200 InfiniBand | Dual-rail HDR-200 InfiniBand | Dual-rail HDR-200 InfiniBand | 100 Gbps InfiniBand | 100 Gbps InfiniBand | Hybrid Fat-Tree, HDR100 InfiniBand nodes, HDR200 switches | Omni-Path (OPA) | HDR 100 InfiniBand | NVIDIA NDR200 | Ethernet | ||||||
job_manager | Slurm | Slurm | PBS Pro | PBS Pro | Slurm | Slurm | Slurm | HTCondor | Slurm (with interactive access) | Slurm (with interactive access) | Slurm (with interactive access) | Slurm (with interactive access) | Slurm (with interactive access) | Slurm | Slurm | Kubernetes | Slurm | ||||||||||
local_storage_per_node_gb | None | 512.0 | None | None | None | None | None | 800.0 | 3500.0 | 1700.0 | 10.0 | None | 3840.0 | 7680.0 | 7680.0 | 7680.0 | None | None | None | 1024.0 | None | 1600.0 | 150.0 | None | 960.0 | None | 6.0 |
machine_type | Cluster | Cluster | None | Cluster | None | MPP | MPP | Cluster | Cluster | Cluster | Cluster | Cluster | Cluster | Cluster | Cluster | Cluster | MPP | Cluster | Cluster | Cluster | Cluster | Cluster | Cluster | None | Cluster | Cluster | Cluster |
manufacturer | Cray | Dell | Dell | Dell | HPE | HPE | HPE | HPE | HPE | Various | HPE | HPE | HPE | HPE | HPE | Dell | Dell | Dell | Dell | Dell | Dell | Dell | Dell | ||||
max_reservable_su | None | None | None | None | None | None | None | None | None | None | 0.0 | None | None | None | None | None | None | None | None | 512.0 | None | None | None | None | None | None | None |
memory_per_cpu_gb | None | 32.0 | 8.0 | 4.0 | 4.0 | 2.0 | 8.0 | 2.0 | None | None | 2.0 | None | 128.0 | 1024.0 | 512.0 | 512.0 | None | 2.0 | None | 2.0 | None | 9.6 | 128.0 | None | 4.0 | None | 75.0 |
model | Apollo 80 | Cray EX | Cray EX | Apollo 2000 | Cray EX2500 | Apollo 6500 | HPE Apollo 2000 Gen 11 | HPE ProLiant DL560 Gen10 | HPE Apollo 6500 Gen 10 | HPE Apollo 6500 Gen 10 | |||||||||||||||||
nfs_network | NA | None | HDR InfiniBand | 100G Ethernet | |||||||||||||||||||||||
nickname | Hive | Ookami | Jetstream2 LM | Jetstream2 GPU | Jetstream2 CPU | Derecho | Derecho-GPU | Delta CPU | DeltaAI | Delta GPU | OSG | Neocortex SDFlex | Bridges-2 RM | Bridges-2 EM | Bridges-2 GPU | Bridges-2 GPU-AI | Anton 2 | Anvil CPU | Anvil GPU | Expanse CPU | SDSC Voyager AI System | Expanse GPU | TACC Stampede3 | TAMU Launch | FASTER | ACES | KYRIC |
node_count | 484.0 | 176.0 | 32.0 | 90.0 | 384.0 | 2.0 | 82.0 | 124.0 | 114.0 | 206.0 | 60000.0 | None | 504.0 | 4.0 | 24.0 | 24.0 | None | 1000.0 | 16.0 | 732.0 | None | 52.0 | 1848.0 | None | 180.0 | None | 5.0 |
operating_system | Rocky Linux 8.4 | User-defined per VM | User-defined per VM | User-defined per VM | Red Hat Enterprise v8 | SLES | Red Hat Enterprise v8 | CentOS | RHEL | CentOS 8 | CentOS 8 | CentOS 8 | CentOS 8 | Rocky Linux 8.4 | Rocky Linux 8.4 | Linux (CentOS) | Linux (CentOS) | Rocky Linux | Rocky Linux release 8.6 | CentOS 8.2 |
parallel_file_system | None | None | None | None | None | None | None | None | None | None | No parallel file system | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
peak_teraflops | None | 486.0 | 131.0 | 7020.0 | 9093.0 | None | None | None | None | None | 50.0 | None | None | None | None | None | None | 5300.0 | 1500.0 | 3373.0 | None | 1664.0 | None | None | None | None | None |
platform_name | Hive Cluster | Ookami, A64FX Testbed | Indiana Jetstream2 Large Memory | Indiana Jetstream2 GPU | Indiana Jetstream2 CPU | Derecho | Derecho-GPU | Linux | HPE Apollo 2000 Gen 11 | HPE ProLiant DL560 Gen10 | HPE Apollo 6500 Gen10 | HPE Apollo 6500 Gen10 | DESRES Anton 2 | Dual AMD Milan CPUs | Dual AMD Milan CPUs with four NVIDIA A100 GPUs | TACC Dell/Intel Sapphire Rapids, Ice Lake, Skylake (Stampede3) | TAMU ACES | ||||||||||
primary_storage_shared_gb | 788 | 14000 | 14000 | 14000 | No shared storage | 15000000 | 15000000 | 15000000 | 15000000 | ||||||||||||||||||
resource_type | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute | Compute |
rmax | None | None | None | None | None | 10.32 | 4.58 | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
rpeak | None | None | 131.0 | 7020.0 | 9093.0 | 12.4 | 5.04 | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None | None |
storage_network | InfiniBand | None | HDR-200 InfiniBand | Dual-rail HDR-200 InfiniBand | Dual-rail HDR-200 InfiniBand | Dual-rail HDR-200 InfiniBand | HDR InfiniBand | VAST | HDR 100 InfiniBand | 100G Ethernet |
supports_sensitive_data | False | False | False | False | False | False | False | False | None | False | False | None | True | True | True | True | False | False | False | False | False | False | False | None | False | False | False |
xsedenet_participant | False | True | False | False | False | False | False | True | None | True | False | None | False | True | True | True | False | True | True | True | False | True | False | None | True | False | False |
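The table above lists one attribute per row across 27 resources, which is awkward to consume programmatically; most consumers want one record per resource instead. Below is a minimal sketch of that transposition for a pipe-separated table of this shape. The `parse_resource_table` helper is our own illustrative name (not part of the Operations API), and the sample rows are copied from the Hive and Ookami columns above.

```python
# Sketch: transpose an attribute-per-row pipe table (like the one above)
# into one dict per resource. Sample values are taken from the Hive and
# Ookami columns; the helper is illustrative, not an official API client.

TABLE = """\
Attribute | Hive | Ookami
cpu_count_per_node | 24 | 48
cpu_type | Xeon 6226 | Fujitsu A64FX-FX700
batch_system | Slurm | Slurm
node_count | 484.0 | 176.0
"""

def parse_resource_table(text: str) -> dict:
    lines = [line for line in text.splitlines() if line.strip()]
    # The first line names the resources; each later line holds one attribute.
    header = [cell.strip() for cell in lines[0].split("|")]
    resources = header[1:]
    records = {name: {} for name in resources}
    for line in lines[1:]:
        cells = [cell.strip() for cell in line.split("|")]
        attribute, values = cells[0], cells[1:]
        for name, value in zip(resources, values):
            records[name][attribute] = value
    return records

records = parse_resource_table(TABLE)
print(records["Hive"]["cpu_count_per_node"])  # → 24
print(records["Ookami"]["cpu_type"])          # → Fujitsu A64FX-FX700
```

Values are kept as strings because, as in the table above, many cells hold text such as `None` or free-form hardware descriptions; callers can coerce numeric fields as needed.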