New Delhi: AMD has launched the Instinct MI100 accelerator – touted as the world's fastest HPC GPU and the first x86 server GPU for scientific research.
"Supported by new accelerated compute platforms from Dell, GIGABYTE, HPE, and Supermicro, the MI100, combined with AMD EPYC™ CPUs and the ROCm 4.0 open software platform, is designed to propel new discoveries ahead of the exascale era," a company statement said.
Built on the new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors.
The MI100 offers up to 11.5 TFLOPS of peak FP64 performance for HPC and up to 46.1 TFLOPS of peak FP32 Matrix performance for AI and machine learning workloads. With new AMD Matrix Core technology, the MI100 also delivers a nearly 7x increase in FP16 theoretical peak floating point performance for AI training workloads compared to AMD's prior-generation accelerators.
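The "nearly 7x" figure can be sanity-checked against AMD's published peak numbers: roughly 184.6 TFLOPS FP16 Matrix on the MI100 versus 26.5 TFLOPS FP16 on the prior-generation MI50. A minimal sketch, assuming those spec-sheet figures (neither appears in this article):

```python
# Sanity check of the "nearly 7x" FP16 claim. The peak figures below are
# assumptions taken from AMD's public spec sheets, not from this article.
MI100_FP16_MATRIX_TFLOPS = 184.6   # MI100 peak FP16 Matrix (assumed spec)
MI50_FP16_TFLOPS = 26.5            # prior-gen MI50 peak FP16 (assumed spec)

speedup = MI100_FP16_MATRIX_TFLOPS / MI50_FP16_TFLOPS
print(f"{speedup:.2f}x")  # ≈ 6.97x, i.e. "nearly 7x"
```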
Ultra-Fast HBM2 Memory: the card features 32GB of high-bandwidth HBM2 memory at a clock rate of 1.2 GHz and delivers an ultra-high 1.23 TB/s of memory bandwidth to support large data sets and help eliminate bottlenecks in moving data in and out of memory, the company said.
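The quoted 1.23 TB/s follows from the memory clock and bus width. A back-of-the-envelope check, assuming a 4096-bit interface (four HBM2 stacks at 1024 bits each – a typical HBM2 configuration, not stated in the article):

```python
# Back-of-the-envelope check of the quoted 1.23 TB/s HBM2 bandwidth.
# Assumption (not in the article): a 4096-bit memory bus, i.e. four
# HBM2 stacks with a 1024-bit interface each.
BUS_WIDTH_BITS = 4 * 1024      # assumed total HBM2 bus width
CLOCK_HZ = 1.2e9               # 1.2 GHz memory clock, per the article
TRANSFERS_PER_CLOCK = 2        # HBM2 is double data rate

bandwidth_gbps = BUS_WIDTH_BITS / 8 * CLOCK_HZ * TRANSFERS_PER_CLOCK / 1e9
print(f"{bandwidth_gbps:.1f} GB/s")  # 1228.8 GB/s ≈ 1.23 TB/s
```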
Key capabilities of the AMD Instinct MI100 accelerator
- Engineered to power AMD GPUs for the exascale era, the AMD CDNA architecture at the heart of the MI100 accelerator offers exceptional performance and power efficiency.
- Delivers industry-leading 11.5 TFLOPS peak FP64 performance and 23.1 TFLOPS peak FP32 performance, enabling scientists and researchers across the globe to accelerate discoveries in industries including life sciences, energy, finance, academics, government, defense and more.
- Supercharged performance for a full range of single and mixed-precision matrix operations, such as FP32, FP16, bFloat16, Int8 and Int4, engineered to boost the convergence of HPC and AI.
- The Instinct MI100 provides ~2x the peer-to-peer (P2P) peak I/O bandwidth over PCIe® 4.0, with up to 340 GB/s of aggregate bandwidth per card with three AMD Infinity Fabric Links. In a server, MI100 GPUs can be configured with up to two fully-connected quad GPU hives, each providing up to 552 GB/s of P2P I/O bandwidth for fast data sharing.
- Features 32GB of high-bandwidth HBM2 memory at a clock rate of 1.2 GHz and delivers an ultra-high 1.23 TB/s of memory bandwidth to support large data sets and help eliminate bottlenecks in moving data in and out of memory.
- Designed with the latest PCIe Gen 4.0 technology support, providing up to 64 GB/s peak theoretical transport data bandwidth from CPU to GPU.
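The 64 GB/s PCIe figure is the raw bidirectional rate of a Gen 4.0 x16 link. A rough check, assuming a standard x16 connection (lane count is not given in the article):

```python
# Rough check of the "up to 64 GB/s" PCIe Gen 4.0 figure, assuming a
# standard x16 link (an assumption; the article does not state lane count).
LANES = 16
TRANSFER_RATE = 16e9           # PCIe Gen 4.0 runs at 16 GT/s per lane

# Raw bandwidth: one bit per transfer per lane, both directions combined.
raw_gbps = LANES * TRANSFER_RATE / 8 * 2 / 1e9
print(f"{raw_gbps:.0f} GB/s raw bidirectional")  # 64 GB/s

# 128b/130b line encoding trims the usable rate slightly.
usable_gbps = raw_gbps * 128 / 130
print(f"{usable_gbps:.1f} GB/s after encoding")  # ~63.0 GB/s
```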
The AMD Instinct MI100 accelerators are expected by the end of the year in systems from major OEM and ODM partners in the enterprise markets.