NVIDIA H100
The NVIDIA H100 is an ideal choice for large-scale AI applications. Built on the NVIDIA Hopper architecture, it combines advanced features and capabilities to accelerate AI training and inference on larger models.
Starting from $1.80/hr
Available at cost-effective on-demand pricing
Launch your AI products faster with on-demand GPUs and a global network of data center partners
Virtual machines
The ideal deployment strategy for AI workloads with an H100.
- Up to 8 GPUs / virtual machine
- Flexible configurations
- Network attached storage
- Private networks
- Security groups
- Images
Bare metal
Complete control over a dedicated physical machine, delivering maximum performance
Pricing available on request
Get pricing
- Up to 8 GPUs / host
- No noisy neighbours
- NVIDIA Spectrum-X local networking
- 300Gbps external connectivity
- NVMe SSD storage
Looking to scale? Please contact us for enterprise solutions.
Speak with an expert
The NVIDIA H100 is perfect for a wide range of workloads
Deploying AI-based workloads on CUDO Compute is easy and cost-effective. Follow our AI-related tutorials.
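Once a VM or bare metal instance is running, a quick way to confirm the H100 is visible is to query it from PyTorch. This is a minimal sketch and assumes a CUDA-enabled PyTorch build is already installed on the instance (for example via one of the tutorials above).

```python
import torch

# Confirm the H100 is visible before running any of the tutorials.
assert torch.cuda.is_available(), "No CUDA device detected"

props = torch.cuda.get_device_properties(0)
print(f"GPU:          {props.name}")                            # e.g. "NVIDIA H100 80GB HBM3"
print(f"Memory:       {props.total_memory / 1024**3:.0f} GiB")  # ~80 GiB on the H100
print(f"Compute cap.: {props.major}.{props.minor}")             # 9.0 for Hopper (sm_90)
```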
Specifications
| Specification | Value |
| --- | --- |
| Starting from | $1.80/hr |
| Architecture | NVIDIA Hopper |
| Form factor | SXM, NVL |
| FP64 Tensor Core | 67 TFLOPS |
| FP32 | 67 TFLOPS |
| TF32 Tensor Core | 989 TFLOPS |
| BFLOAT16 Tensor Core | 1,979 TFLOPS |
| FP16 Tensor Core | 1,979 TFLOPS |
| FP8 Tensor Core | 3,958 TFLOPS |
| INT8 Tensor Core | 3,958 TOPS |
| GPU memory | 80GB |
| GPU memory bandwidth | 3.35 TB/s |
| Decoders | 7x NVDEC, 7x NVJPEG |
| Max thermal design power (TDP) | Up to 700W |
| Multi-instance GPUs (MIG) | Up to 7 MIGs @ 10GB each |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s |
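To relate the quoted memory bandwidth to what a workload actually sees, a rough microbenchmark can help. The sketch below is illustrative only, assumes PyTorch with CUDA, and will report an effective figure that varies with clocks and access patterns rather than matching the 3.35 TB/s peak exactly.

```python
import torch

# Rough check of effective memory bandwidth: time a large device-to-device copy.
n_bytes = 8 * 1024**3                       # 8 GiB source tensor
src = torch.empty(n_bytes, dtype=torch.uint8, device="cuda")
dst = torch.empty_like(src)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
dst.copy_(src)                              # warm-up
torch.cuda.synchronize()

start.record()
dst.copy_(src)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000    # elapsed_time() returns milliseconds
# A copy reads and writes every byte, so count the traffic twice.
print(f"~{2 * n_bytes / seconds / 1e12:.2f} TB/s effective bandwidth")
```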
Use cases
AI inference
Engineers and scientists across various domains can leverage the NVIDIA H100 to accelerate AI inference workloads, such as image and speech recognition. The H100's powerful Tensor Cores enable it to quickly process large amounts of data, making it perfect for real-time inference applications.
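As an illustration of Tensor Core inference, the following sketch runs a toy classifier under FP16 autocast in PyTorch. The model and batch shapes are placeholder assumptions, not a recommended architecture.

```python
import torch
import torch.nn as nn

# Illustrative only: a small image classifier standing in for a real model.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1000),
).cuda().eval()

batch = torch.randn(32, 3, 224, 224, device="cuda")

# FP16 autocast routes matmuls and convolutions through the H100's Tensor Cores.
with torch.inference_mode(), torch.autocast(device_type="cuda", dtype=torch.float16):
    logits = model(batch)

print(logits.shape)  # torch.Size([32, 1000])
```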
Deep learning
The NVIDIA H100 Tensor Core GPU supports a diverse range of deep learning use cases on larger models. It lets data scientists and researchers handle large datasets and perform the complex computations needed to train deep neural networks.
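A minimal training-step sketch, assuming PyTorch: BF16 autocast uses the H100's BFLOAT16 Tensor Cores while keeping FP32's dynamic range, so no loss scaler is needed. The model, data, and hyperparameters below are illustrative placeholders.

```python
import torch
import torch.nn as nn

# Illustrative only: a toy MLP and random data stand in for a real dataset and model.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 10)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(256, 1024, device="cuda")
    y = torch.randint(0, 10, (256,), device="cuda")

    # BF16 autocast: matmuls run on BFLOAT16 Tensor Cores, reductions stay in FP32.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = loss_fn(model(x), y)

    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()
```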
High-performance computing
Organizations across many sectors can deploy the NVIDIA H100 GPU to accelerate high-performance computing workloads such as scientific simulations, weather forecasting, and financial modeling. With its high memory bandwidth and powerful processing capabilities, the H100 runs these workloads at every scale.
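For HPC-style work, the H100's FP64 units can be exercised directly from PyTorch; this sketch solves a double-precision dense linear system of the kind that appears in simulation and financial modeling. The matrix size and construction are illustrative assumptions.

```python
import torch

# Illustrative only: a double-precision dense solve of Ax = b on the GPU.
n = 8192
A = torch.randn(n, n, dtype=torch.float64, device="cuda")
A = A @ A.T + n * torch.eye(n, dtype=torch.float64, device="cuda")  # well-conditioned SPD matrix
b = torch.randn(n, 1, dtype=torch.float64, device="cuda")

x = torch.linalg.solve(A, b)          # runs entirely in FP64 on the GPU
residual = torch.linalg.norm(A @ x - b) / torch.linalg.norm(b)
print(f"relative residual: {residual.item():.2e}")
```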
Browse alternative GPU solutions for your workloads
Access a wide range of performant NVIDIA and AMD GPUs to accelerate your AI, ML & HPC workloads
An NVIDIA preferred partner for compute
We're proud to be an NVIDIA preferred partner for compute, offering the latest GPUs and high-performance computing solutions.
Also trusted by our other key partners.
Frequently asked questions
Pricing & reservation enquiry
Enquire about access today to test the H100 GPU Cloud, or reserve H100 capacity on CUDO Compute for as long as you need it, with contracts tailored to suit your needs.
Get started today or speak with an expert...
Available Mon-Fri 9am-5pm UK time