ACCESS Operations API v3.48.0

CiDeR Resource Detail

As of: 2025-01-20 12:27 UTC

Descriptive Name: PSC Bridges-2 GPU (Bridges-2 GPU)
Info ResourceID: bridges2-gpu.psc.access-ci.org
CiDeR Type: Compute
Latest Status: production, from 2021-02-11 until 2025-09-30
Current Statuses: production
CiDeR ID: 646
SiteID: psc.access-ci.org
Description: Bridges-2 combines high-performance computing (HPC), high-performance artificial intelligence (HPAI), and large-scale data management to support simulation and modeling, data analytics, community data, and complex workflows. Bridges-2 Accelerated GPU (GPU) nodes are optimized for scalable artificial intelligence (AI; deep learning) and are also available for accelerated simulation and modeling applications. Each Bridges-2 GPU node contains 8 NVIDIA Tesla V100-32GB SXM2 GPUs, for aggregate performance of 1 Pf/s mixed-precision tensor, 62.4 Tf/s fp64, and 125 Tf/s fp32, combined with 256 GB of HBM2 memory per node to support training large models and big data. Each NVIDIA Tesla V100-32GB SXM2 has 640 tensor cores specifically designed to accelerate deep learning, with peak performance of over 125 Tf/s for mixed-precision tensor operations. In addition, 5,120 CUDA cores support broad GPU functionality, with peak floating-point performance of 7.8 Tf/s fp64 and 15.7 Tf/s fp32. 32 GB of HBM2 (high-bandwidth memory) delivers 900 GB/s of memory bandwidth to each GPU, and NVLink 2.0 interconnects the GPUs at 50 GB/s per link, or 300 GB/s per GPU. Each Bridges-2 GPU node therefore provides a total of 40,960 CUDA cores and 5,120 tensor cores. In addition, each node holds two Intel Xeon Gold 6248 CPUs, 512 GB of DDR4-2933 RAM, and a 7.68 TB NVMe SSD. The nodes are connected to Bridges-2's other compute nodes and its Ocean parallel filesystem and archive by two HDR-200 InfiniBand links, providing 400 Gbps of bandwidth to enhance scalability of deep learning training.
Recommended Use:
Access Description:
Affiliation: ACCESS
Updated At: 2025-01-06T18:47:00.925000Z
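The per-node aggregates quoted in the Description follow directly from the per-GPU figures (8 NVIDIA Tesla V100-32GB SXM2 GPUs per node). Below is a minimal Python sketch that recomputes them using only numbers that appear in the record above; it is an arithmetic check, not part of the API.

```python
# Recompute Bridges-2 GPU node-level aggregates from the per-GPU figures
# quoted in the Description above (8x NVIDIA Tesla V100-32GB SXM2 per node).

GPUS_PER_NODE = 8

# Per-GPU figures taken from the record.
tensor_tflops = 125.0   # mixed-precision tensor peak per GPU
fp64_tflops = 7.8
fp32_tflops = 15.7
hbm2_gb = 32
tensor_cores = 640
cuda_cores = 5_120

print(f"Tensor peak:  {GPUS_PER_NODE * tensor_tflops / 1000:.0f} Pf/s per node")  # 1 Pf/s
print(f"fp64 peak:    {GPUS_PER_NODE * fp64_tflops:.1f} Tf/s per node")           # 62.4 Tf/s
print(f"fp32 peak:    {GPUS_PER_NODE * fp32_tflops:.1f} Tf/s per node")           # 125.6, quoted as 125 Tf/s
print(f"HBM2:         {GPUS_PER_NODE * hbm2_gb} GB per node")                     # 256 GB
print(f"Tensor cores: {GPUS_PER_NODE * tensor_cores:,} per node")                 # 5,120
print(f"CUDA cores:   {GPUS_PER_NODE * cuda_cores:,} per node")                   # 40,960

# NVLink 2.0: 300 GB/s per GPU at 50 GB/s per link implies 6 links per GPU.
# Fabric: two HDR-200 InfiniBand links per node = 2 x 200 Gbps.
print(f"InfiniBand:   {2 * 200} Gbps per node")                                   # 400 Gbps
```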
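Records like the one above are served by the ACCESS Operations API as JSON. The sketch below shows one way such a record might be retrieved and a few fields printed; the endpoint path and JSON key names are assumptions made for illustration, not the documented API surface, so consult the API's own documentation before relying on them.

```python
# Sketch: fetch a CiDeR resource record and print a few fields.
# NOTE: the URL path and JSON field names below are assumptions for
# illustration only; check the ACCESS Operations API documentation for
# the real endpoint and schema.
import requests

RESOURCE_ID = "bridges2-gpu.psc.access-ci.org"

# Hypothetical endpoint; replace with the documented CiDeR detail route.
url = f"https://operations-api.access-ci.org/cider/resource/{RESOURCE_ID}"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
record = resp.json()

# Assumed keys mirroring the labels shown in the record above.
for key in ("resource_descriptive_name", "cider_type", "latest_status", "updated_at"):
    print(f"{key}: {record.get(key)}")
```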