NVIDIA H100
The NVIDIA H100 is an ideal choice for large-scale AI applications. It uses the NVIDIA Hopper architecture that combines advanced features and capabilities, accelerating AI training and inference on larger models.
from $1.80/hr
from $2.45/hr

The NVIDIA H100 is perfect for a wide range of workloads
Deploying AI-based workloads on CUDO Compute is easy and cost-effective. Follow our AI-related tutorials.
Available at the most cost-effective pricing
Launch your AI products faster with on-demand GPUs and a global network of data center partners
Virtual machines
The ideal deployment strategy for AI workloads with an H100.
- Up to 8 GPUs / virtual machine
- Flexible configurations
- Network attached storage
- Private networks
- Security groups
- Images
from $1.80/hr
from $2.45/hr
Bare metal
Complete control over a dedicated physical machine.
- Powered by renewable energy
- Up to 8 GPUs / host
- No noisy neighbours
- NVIDIA Spectrum-X local networking
- 300Gbps external connectivity
- NVMe SSD storage
from $23.12/hr
Enterprise
We offer a range of solutions for enterprise customers.
- Powerful GPU clusters
- Scalable data center colocation
- Large quantities of GPUs and hardware
- Optimised for your requirements
- Expert installation
- Scale as your demand grows
Specifications
Browse specifications for the NVIDIA H100 GPU
| Specification | NVIDIA H100 |
| --- | --- |
| Starting from | $1.80/hr |
| Architecture | NVIDIA Hopper |
| Form factor | SXM / NVL |
| FP64 Tensor Core | 67 TFLOPS |
| FP32 | 67 TFLOPS |
| TF32 Tensor Core | 989 TFLOPS |
| BFLOAT16 Tensor Core | 1,979 TFLOPS |
| FP16 Tensor Core | 1,979 TFLOPS |
| FP8 Tensor Core | 3,958 TFLOPS |
| INT8 Tensor Core | 3,958 TOPS |
| GPU memory | 80GB |
| GPU memory bandwidth | 3.35 TB/s |
| Decoders | 7x NVDEC, 7x NVJPEG |
| Max thermal design power (TDP) | Up to 700W |
| Multi-instance GPUs (MIG) | Up to 7 MIGs @ 10GB each |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s |
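A common first question against these specifications is whether a given model fits in the H100's 80GB of memory. The sketch below is a back-of-envelope check counting weights only (activations, optimizer state, and KV cache are excluded, so real requirements are higher); the byte sizes per precision follow the formats listed in the table.

```python
# Back-of-envelope check: do a model's weights fit in the H100's 80 GB?
# Counts weights only -- activations, optimizer state, and KV cache are
# NOT included, so treat the result as a lower bound on memory needed.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "bf16": 2, "fp8": 1, "int8": 1}
H100_MEMORY_GB = 80

def weight_memory_gb(n_params: float, dtype: str) -> float:
    """Memory needed to hold the model weights alone, in GB."""
    return n_params * BYTES_PER_PARAM[dtype] / 1e9

def fits_on_h100(n_params: float, dtype: str) -> bool:
    return weight_memory_gb(n_params, dtype) <= H100_MEMORY_GB

# A 7B-parameter model in FP16 needs ~14 GB -> fits on a single H100.
print(fits_on_h100(7e9, "fp16"))   # True
# A 70B-parameter model in FP16 needs ~140 GB -> needs multiple GPUs.
print(fits_on_h100(70e9, "fp16"))  # False
```

For models that exceed a single card, the 900GB/s NVLink interconnect listed above is what makes multi-GPU sharding practical.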
Ideal use cases for the NVIDIA H100 GPU
Explore use cases for the NVIDIA H100, including AI inference, deep learning, and high-performance computing.
AI inference
Engineers and scientists across various domains can leverage the NVIDIA H100 to accelerate AI inference workloads, such as image and speech recognition. The H100's powerful Tensor Cores enable it to quickly process large amounts of data, making it perfect for real-time inference applications.
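To put the FP8 figure from the specifications in context, the sketch below estimates an upper bound on LLM inference throughput. It uses the common approximation that a dense transformer forward pass costs about 2 × n_params FLOPs per generated token; the 30% utilization factor is an assumption, not a measured number, and real throughput also depends on batch size and memory bandwidth.

```python
# Rough upper bound on LLM tokens/s for one H100, from the FP8 Tensor
# Core figure (~3,958 TFLOPS). Assumes ~2 * n_params FLOPs per token
# and an ASSUMED utilization (MFU) well below 100%.

H100_FP8_TFLOPS = 3958

def tokens_per_second(n_params: float, mfu: float = 0.3) -> float:
    """Estimated generated tokens/s for a dense transformer model."""
    flops_per_token = 2 * n_params
    return H100_FP8_TFLOPS * 1e12 * mfu / flops_per_token

# e.g. a 7B-parameter model at an assumed 30% utilization:
print(round(tokens_per_second(7e9)))
```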
Deep learning
The NVIDIA H100 Tensor Core GPU offers a diverse range of use cases for deep learning on larger models. The H100 revolutionizes deep learning for data scientists and researchers, allowing them to handle large datasets and perform complex computations for training deep neural networks.
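For planning training runs, a widely used approximation is that transformer training costs about 6 × n_params × n_tokens FLOPs. Combined with the BF16 Tensor Core figure from the specifications, that gives a rough wall-clock estimate; the 40% utilization factor below is an assumption for illustration, not a benchmark result.

```python
# Back-of-envelope training-time estimate on H100s, using the common
# ~6 * n_params * n_tokens FLOPs approximation and the BF16 Tensor
# Core figure (~1,979 TFLOPS). The utilization factor (MFU) is an
# ASSUMPTION, not a measured number.

H100_BF16_TFLOPS = 1979

def training_days(n_params: float, n_tokens: float,
                  n_gpus: int, mfu: float = 0.4) -> float:
    """Estimated wall-clock days to train a dense transformer."""
    total_flops = 6 * n_params * n_tokens
    flops_per_sec = n_gpus * H100_BF16_TFLOPS * 1e12 * mfu
    return total_flops / flops_per_sec / 86400  # seconds per day

# e.g. 7B parameters on 300B tokens across 8 GPUs:
print(round(training_days(7e9, 300e9, n_gpus=8)))
```

Doubling the GPU count roughly halves the estimate, which is where multi-GPU virtual machines and bare-metal clusters come in.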
High-performance computing
Many diverse organizations can deploy the NVIDIA H100 GPU to accelerate high-performance computing workloads, such as scientific simulations, weather forecasting, and financial modeling. With its high memory bandwidth and powerful processing capabilities, the H100 makes it easy to run workloads at every scale.
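Why memory bandwidth matters for HPC can be seen with a simple roofline sketch built from the figures in the specifications: a kernel is memory-bound when its arithmetic intensity (FLOPs per byte moved) falls below peak compute divided by peak bandwidth. This is a textbook model, not a claim about any specific workload.

```python
# Simple roofline model from the spec-table figures: attainable
# throughput is capped by either the compute roof or by memory
# bandwidth times arithmetic intensity, whichever is lower.

H100_FP64_TC_TFLOPS = 67     # FP64 Tensor Core peak
H100_BANDWIDTH_TBPS = 3.35   # GPU memory bandwidth

def attainable_tflops(intensity_flops_per_byte: float) -> float:
    """Roofline: min(compute roof, bandwidth * arithmetic intensity)."""
    return min(H100_FP64_TC_TFLOPS,
               H100_BANDWIDTH_TBPS * intensity_flops_per_byte)

# Ridge point: below ~20 FLOPs/byte, FP64 kernels are memory-bound.
ridge = H100_FP64_TC_TFLOPS / H100_BANDWIDTH_TBPS
print(round(ridge, 1))
```

The H100's 3.35 TB/s of bandwidth pushes that ridge point low, which is why bandwidth-hungry simulations benefit as much as compute-bound ones.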
Browse alternative GPU solutions for your workloads
Access a wide range of performant NVIDIA and AMD GPUs to accelerate your AI, ML & HPC workloads
An NVIDIA preferred partner for compute
We're proud to be an NVIDIA preferred partner for compute, offering the latest GPUs and high-performance computing solutions.
Also trusted by our other key partners:
Frequently asked questions
Talk to sales
Reserve GPUs. Access an H100 GPU cloud alongside other high-performance models for as long as you need it.
Deployment & scaling. Seamless deployment alongside expert installation, ready to scale as your demands grow.
"CUDO Compute is a true pioneer in aggregating the world's cloud in a sustainable way, enabling service providers like us to integrate with ease"
Get started today, or speak with an expert.
Available Mon-Fri 9am-5pm UK time