AMD MI250/300

Compared with traditional on-premises GPU deployments, cloud-based AMD MI250 and MI300 GPUs offer greater flexibility and scalability. Developers can spin instances up or down to match changing AI and HPC processing needs without worrying about hardware maintenance or upgrades. Cloud-based GPUs also reduce costs and deployment time, letting developers focus on their core AI/ML development work rather than managing hardware.

Get pricing or contact sales: +44 20 8050 7646 (available Mon-Fri, 9am-5pm UK time)

Available at cost-effective pricing

Launch your AI products faster with on-demand GPUs and a global network of data center partners

Virtual machines

The ideal deployment strategy for AI workloads on an MI250/300.

  1. Up to 8 GPUs / virtual machine
  2. Flexible
  3. Network attached storage
  4. Private networks
  5. Security groups
  6. Images

Enterprise

We offer a range of solutions for enterprise customers.

  1. Powerful GPU clusters
  2. Scalable data center colocation
  3. Large quantities of GPUs and hardware
  5. Optimised for your requirements
  5. Expert installation
  6. Scale as your demand grows

Specifications

Browse specifications for the AMD MI250/300 GPU

Starting from: Contact us for pricing

Where two values appear, they refer to the MI250 and MI300X respectively.

Architecture: CDNA2 | CDNA3
Name: AMD Instinct™ MI250 | AMD Instinct™ MI300X
Family: Instinct
Series: MI200 Series | MI300 Series
Lithography: TSMC 6nm FinFET | TSMC 5nm/6nm FinFET
Stream Processors: 13,312 | 19,456
Compute Units: 208 | 304
Peak Engine Clock: 1700 MHz | 2100 MHz
Peak Half Precision (FP16) Performance: 362.1 TFLOPs | 1.3 PFLOPs
Peak Single Precision Matrix (FP32) Performance: 90.5 TFLOPs | 163.4 TFLOPs
Peak Double Precision Matrix (FP64) Performance: 90.5 TFLOPs | 163.4 TFLOPs
Peak Single Precision (FP32) Performance: 45.3 TFLOPs | 163.4 TFLOPs
Peak Double Precision (FP64) Performance: 45.3 TFLOPs | 81.7 TFLOPs
Peak INT4 Performance: 362.1 TOPs | N/A
Peak INT8 Performance: 362.1 TOPs | 2.6 POPs
Peak bfloat16 Performance: 362.1 TFLOPs | 1.3 PFLOPs
Thermal Design Power (TDP): 500W (560W peak) | 750W peak
GPU Memory: 128 GB HBM2e | 192 GB HBM3
Memory Interface: 8192-bit | 8192-bit
Memory Clock: 1.6 GHz | 5.2 GHz
Peak Memory Bandwidth: 3.2 TB/s | 5.3 TB/s
Memory ECC Support: Yes (full-chip)
GPU Form Factor: OAM module
Bus Type: PCIe® 4.0 x16 | PCIe® 5.0 x16
Infinity Fabric™ Links: 8
Peak Infinity Fabric™ Link Bandwidth: 100 GB/s | 128 GB/s
Cooling: Passive (OAM)
Supported Technologies: AMD CDNA™ 2 Architecture, AMD ROCm™ - Ecosystem without Borders, AMD Infinity Architecture | AMD CDNA™ 3 Architecture, AMD ROCm™ - Ecosystem without Borders, AMD Infinity Architecture
RAS Support: Yes
Page Retirement: Yes
Page Avoidance: N/A | Yes
SR-IOV: N/A | Yes

Ideal use cases for the AMD MI250/300 GPU

Explore use cases for the AMD MI250/300, including enhanced natural language processing, faster deep learning training, and real-time video analytics.

Enhanced natural language processing

Utilise the efficient architecture of AMD MI250 and MI300 GPUs to accelerate natural language processing tasks such as text classification, sentiment analysis, and machine translation. This allows developers to build more sophisticated chatbots, voice assistants, and other NLP-driven applications that can understand and respond to human language faster and more accurately.
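As a minimal sketch of this kind of workload (not a CUDO-specific API), the snippet below runs a sentiment-analysis pipeline on the instance's GPU. It assumes a ROCm build of PyTorch and the Hugging Face transformers package are installed; the model name is the library's stock English sentiment model and is shown only for illustration.

```python
# Minimal sketch: sentiment analysis on a cloud GPU instance.
# Assumes a ROCm build of PyTorch and the Hugging Face `transformers`
# package are installed; on ROCm builds, AMD GPUs are exposed through
# the torch.cuda API.
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1  # 0 = first GPU, -1 = CPU

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # illustrative model
    device=device,
)

print(classifier([
    "The new GPU instance spun up in minutes.",
    "Latency was worse than expected.",
]))
```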

Faster deep learning training

Train deep neural networks up to 4x faster with cloud-based AMD MI250 and MI300 GPUs compared to traditional CPUs. This enables developers to experiment with larger datasets and complex models, leading to improved model accuracy and better decision-making insights for customer applications.
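For illustration, the loop below shows a single training step on one GPU with bfloat16 autocast. It assumes a ROCm build of PyTorch; the tiny model, optimiser settings, and synthetic batch are placeholders for a real training pipeline.

```python
# Minimal sketch of a single mixed-precision training step on one GPU.
# Assumes a ROCm build of PyTorch; the model, optimiser settings and
# synthetic batch are placeholders for a real workload.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for a real DataLoader.
inputs = torch.randn(64, 512, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# bfloat16 autocast maps well to the matrix engines on CDNA2/CDNA3.
with torch.autocast(device_type=device.type, dtype=torch.bfloat16):
    loss = loss_fn(model(inputs), targets)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```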

Real-time video analytics

Use the parallel processing capabilities of AMD MI250 and MI300 GPUs to perform real-time video analytics in the cloud. Developers can analyse live video streams, detect objects, classify actions, and track movements with minimal latency, enabling applications such as smart surveillance and autonomous vehicles.
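As an illustrative sketch, the snippet below runs torchvision's pre-trained Faster R-CNN detector over a small batch of frames. It assumes ROCm builds of PyTorch and torchvision are installed, and uses random tensors in place of decoded frames from a real video stream.

```python
# Minimal sketch: object detection over a batch of video frames.
# Assumes ROCm builds of PyTorch and torchvision; the random tensors
# below stand in for decoded RGB frames from a live stream.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").to(device).eval()

# Four synthetic 720p RGB frames with values in [0, 1].
frames = [torch.rand(3, 720, 1280, device=device) for _ in range(4)]

with torch.inference_mode():
    detections = model(frames)  # list of dicts: boxes, labels, scores

for i, det in enumerate(detections):
    kept = (det["scores"] > 0.5).sum().item()
    print(f"frame {i}: {kept} detections above 0.5 confidence")
```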

Browse alternative GPU solutions for your workloads

Access a wide range of performant NVIDIA and AMD GPUs to accelerate your AI, ML & HPC workloads

NVIDIA H100 SXM

from $2.45 /hr

Deploy performant H100s on-demand with CUDO Compute.

NVIDIA H100 PCIe

from $2.45 /hr

Deploy performant H100s on-demand with CUDO Compute.

NVIDIA HGX B200

Pricing on request.

Scale with high performance HGX B200 GPUs on our reserved cloud.

NVIDIA GB200 NVL72

Pricing on request.

Scale with high performance GB200 NVL72 GPUs on our reserved cloud.

NVIDIA A800 PCIe

from $0.80 /hr

Deploy performant A800s on-demand with CUDO Compute.

NVIDIA H200 SXM

Pricing on request.

Deploy performant H200s on-demand with CUDO Compute.

NVIDIA B100

Pricing on request.

Scale with high performance B100 GPUs on our reserved cloud.

NVIDIA A40

from $0.39 /hr

Deploy performant A40s on-demand with CUDO Compute.

NVIDIA L40S

from $1.42 /hr

Deploy performant L40Ss on-demand with CUDO Compute.

NVIDIA A100 PCIe

from $1.50 /hr

Deploy performant A100s on-demand with CUDO Compute.

NVIDIA V100

from $0.39 /hr

Deploy performant V100s on-demand with CUDO Compute.

NVIDIA RTX 4000 SFF Ada

Pricing on request.

Deploy performant RTX 4000 SFF Adas on-demand with CUDO Compute.

NVIDIA RTX A4000

Pricing on request.

Scale with high performance RTX A4000 GPUs on our reserved cloud.

NVIDIA RTX A5000

from $0.35 /hr

Deploy performant RTX A5000s on-demand with CUDO Compute.

NVIDIA RTX A6000

from $0.45 /hr

Deploy performant RTX A6000s on-demand with CUDO Compute.

An NVIDIA preferred partner for compute

We're proud to be an NVIDIA preferred partner for compute, offering the latest GPUs and high-performance computing solutions.


Frequently asked questions

Are you looking for support with something more specific? Check out our knowledge base

Talk to sales

  • Reserve GPUs. Access an MI250/300 GPU cloud alongside other high-performance models for as long as you need it.

  • Deployment & scaling. Seamless deployment alongside expert installation, ready to scale as your demands grow.

"CUDO Compute is a true pioneer in aggregating the world's cloud in a sustainable way, enabling service providers like us to integrate with ease"

VPS AI

Get started today or speak with an expert...