NVIDIA A100
The NVIDIA A100 Tensor Core GPU is an advanced data-center GPU designed to accelerate deep learning workloads. On CUDO Compute, the A100 delivers cloud-based acceleration on demand and at any scale.

The NVIDIA A100 is perfect for a wide range of workloads
Deploying AI-based workloads on CUDO Compute is easy and cost-effective. Follow our AI-related tutorials to get started.
Available at the most cost-effective pricing
Launch your AI products faster with on-demand GPUs and a global network of data center partners
Virtual machines
The ideal deployment strategy for AI workloads with an A100 (see the quick GPU check sketched below).
- Up to 8 GPUs / virtual machine
- Flexible configurations
- Network attached storage
- Private networks
- Security groups
- Images
from $1.25/hr
from $1.50/hr
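As an illustration, once a virtual machine with one or more A100s has been provisioned, a quick check from Python confirms the GPUs are visible before launching a workload. This is a minimal sketch assuming PyTorch with CUDA support is installed on the instance; it is not tied to any particular CUDO Compute image.

```python
# Minimal sketch: confirm the A100s attached to the VM are visible to PyTorch.
# Assumes a PyTorch build with CUDA support is installed on the instance.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible - check the NVIDIA driver installation.")

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")

# A trivial matrix multiply to confirm the device actually executes work.
x = torch.randn(4096, 4096, device="cuda")
y = x @ x
torch.cuda.synchronize()
print("Matrix multiply completed on", torch.cuda.get_device_name(0))
```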
Bare metal
Full control over a dedicated physical machine.
- Powered by renewable energy
- Up to 8 GPUs / host
- No noisy neighbours
- NVIDIA Spectrum-X local networking
- 300Gbps external connectivity
- NVMe SSD storage
from $12.80/hr
Enterprise
We offer a range of solutions for enterprise customers.
- Powerful GPU clusters
- Scalable data center colocation
- Large quantities of GPUs and hardware
- Optimised for your requirements
- Expert installation
- Scale as your demand grows
Specifications
Browse specifications for the NVIDIA A100 GPU
Starting from | $1.25/hr
Architecture | NVIDIA Ampere
FP64 | 9.7 TFLOPS
FP64 Tensor Core | 19.5 TFLOPS
FP32 | 19.5 TFLOPS
Tensor Float 32 (TF32) | 156 TFLOPS (312 TFLOPS with sparsity)
BFLOAT16 Tensor Core | 312 TFLOPS (624 TFLOPS with sparsity)
FP16 Tensor Core | 312 TFLOPS (624 TFLOPS with sparsity)
INT8 Tensor Core | 624 TOPS (1,248 TOPS with sparsity)
GPU Memory | 80 GB HBM2e
GPU Memory Bandwidth | 1,935 GB/s (PCIe) / 2,039 GB/s (SXM)
Max Thermal Design Power (TDP) | 300 W (PCIe) / 400 W (SXM)
Multi-Instance GPU (MIG) | Up to 7 MIG instances @ 10 GB each
Form Factor | PCIe
Interconnect | NVIDIA NVLink Bridge for 2 GPUs: 600 GB/s; PCIe Gen4: 64 GB/s
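The TF32 and BF16 figures above are reached through the A100's Tensor Cores. As a purely illustrative sketch, the snippet below shows one common way to opt into those paths from PyTorch; framework defaults and flag behaviour can vary between versions.

```python
# Illustrative sketch: opting into TF32 and BF16 Tensor Core paths in PyTorch on an A100.
import torch

# Allow TF32 for FP32 matrix multiplies and cuDNN convolutions (Ampere and newer).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

model = torch.nn.Linear(1024, 1024).cuda()
data = torch.randn(64, 1024, device="cuda")

# Mixed precision: run the forward pass in bfloat16 where it is numerically safe.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    out = model(data)

print(out.dtype)  # torch.bfloat16 inside the autocast region
```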
Ideal use cases for the NVIDIA A100 GPU
Explore use cases for the NVIDIA A100, including natural language processing, scientific research & simulation, and content creation.
Natural language processing
Natural language processing (NLP) tasks are computationally demanding, especially for large language models. The NVIDIA A100 GPU removes much of the friction of building, training, and running machine learning workloads, delivering cost-effective performance at any scale.
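As a hedged illustration of this kind of workload, the sketch below loads a small Hugging Face language model in bfloat16 on the GPU and generates a short completion. The model name ("gpt2") and generation settings are placeholders only, not a recommendation.

```python
# Illustrative sketch: running a small language model in bfloat16 on an A100.
# The model name ("gpt2") is only a placeholder example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16).cuda()

inputs = tokenizer("Cloud GPUs make it possible to", return_tensors="pt").to("cuda")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=30)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```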
Scientific research & simulation
Researchers, engineers, and analysts can use cloud-based NVIDIA A100 GPUs to accelerate simulation workloads. The A100 delivers powerful acceleration for AI workloads, HPC simulation and modelling, and database workloads at any scale.
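For example, many simulation kernels depend on the double-precision (FP64) throughput listed in the specifications. The sketch below runs a simple explicit finite-difference heat-equation update in FP64 on the GPU; the grid size, coefficients, and step count are arbitrary and purely illustrative.

```python
# Illustrative sketch: explicit 2D heat-equation steps in double precision (FP64) on the GPU.
import torch

n, alpha, dt = 1024, 0.1, 0.01          # arbitrary grid size and coefficients
u = torch.zeros(n, n, dtype=torch.float64, device="cuda")
u[n // 2, n // 2] = 100.0               # point heat source in the centre of the grid

for _ in range(500):
    # Five-point Laplacian computed from shifted views of the grid interior.
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:] - 4 * u[1:-1, 1:-1])
    u[1:-1, 1:-1] += alpha * dt * lap

torch.cuda.synchronize()
print("Peak temperature after 500 steps:", u.max().item())
```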
Content creation
Content creators, including artists, designers, and engineers, use the NVIDIA A100 GPU to generate intricate visualizations, simulations, and animations. The A100 renders lifelike graphics and delivers accurate simulation results at a competitive price point.
Browse alternative GPU solutions for your workloads
Access a wide range of performant NVIDIA and AMD GPUs to accelerate your AI, ML & HPC workloads
An NVIDIA preferred partner for compute
We're proud to be an NVIDIA preferred partner for compute, offering the latest GPUs and high-performance computing solutions.
Also trusted by our other key partners:
Frequently asked questions
Talk to sales
Reserve GPUs. Access an A100 GPU cloud alongside other high-performance models for as long as you need it.
Deployment & scaling. Seamless deployment alongside expert installation, ready to scale as your demands grow.
"CUDO Compute is a true pioneer in aggregating the world's cloud in a sustainable way, enabling service providers like us to integrate with ease"
Get started today or speak with an expert...
Available Mon-Fri 9am-5pm UK time