NVIDIA H100 Tensor Core GPU

| Form Factor | H100 SXM | H100 PCIe | H100 NVL¹ |
|---|---|---|---|
| FP64 | 34 teraFLOPS | 26 teraFLOPS | 68 teraFLOPS |
| FP64 Tensor Core | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS |
| FP32 | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS |
| TF32 Tensor Core | 989 teraFLOPS² | 756 teraFLOPS² | 1,979 teraFLOPS² |
| BFLOAT16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS² |
| FP16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS² |
| FP8 Tensor Core | 3,958 teraFLOPS² | 3,026 teraFLOPS² | 7,916 teraFLOPS² |
| INT8 Tensor Core | 3,958 TOPS² | 3,026 TOPS² | 7,916 TOPS² |
| GPU memory | 80GB | 80GB | 188GB |
| GPU memory bandwidth | 3.35TB/s | 2TB/s | 7.8TB/s³ |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG | 14 NVDEC, 14 JPEG |
| Max thermal design power (TDP) | Up to 700W (configurable) | 300–350W (configurable) | 2x 350–400W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each | Up to 14 MIGs @ 12GB each |
| Form factor | SXM | PCIe, dual-slot air-cooled | 2x PCIe, dual-slot air-cooled |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options | NVIDIA HGX H100 Partner and NVIDIA-Certified Systems with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs | Partner and NVIDIA-Certified Systems with 2–4 pairs |
| NVIDIA AI Enterprise | Add-on | Included | Included |


1. Preliminary specifications. May be subject to change. Specifications shown for 2x H100 NVL PCIe cards paired with NVLink Bridge.
2. With sparsity.
3. Aggregate HBM bandwidth.
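The column figures follow a couple of simple arithmetic relationships: FP8 Tensor Core throughput is double FP16 Tensor Core throughput on each form factor, and since the H100 NVL column describes a bridged two-card pair, its FLOPS rows are twice the single SXM card. A minimal sketch checking these relationships against the table values (the dictionary names and abbreviated keys are illustrative, not from the datasheet):

```python
# Throughput values copied from the spec table above, in teraFLOPS.
# Tensor Core (TC) figures are the "with sparsity" numbers (footnote 2).
sxm = {"FP64": 34, "FP64 TC": 67, "FP16 TC": 1979, "FP8 TC": 3958}
nvl = {"FP64": 68, "FP64 TC": 134, "FP16 TC": 3958, "FP8 TC": 7916}

# FP8 Tensor Core throughput is 2x FP16 Tensor Core on each form factor.
assert sxm["FP8 TC"] == 2 * sxm["FP16 TC"]
assert nvl["FP8 TC"] == 2 * nvl["FP16 TC"]

# The H100 NVL column covers a 2-card NVLink-bridged pair (footnote 1),
# so these rows are exactly double the single SXM card.
for key in sxm:
    assert nvl[key] == 2 * sxm[key]

print("spec-table relationships check out")
```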

Take a deep dive into the NVIDIA Hopper architecture.
