pricegpu

Live cloud GPU pricing across every major provider.
15 providers · 25 GPU models · updated weekly

Compare H100 Prices →
Public API

Popular GPUs — Cheapest Price

See all 25 GPUs →
GPU                       | VRAM   | FP16 TFLOPS | Cheapest/hr | Provider
NVIDIA A100 80GB SXM      | 80 GB  | 312         | $1.85/hr    | FluidStack
NVIDIA GeForce RTX 4090   | 24 GB  | 330         | $0.24/hr    | Salad
NVIDIA L40S 48GB          | 48 GB  | 733         | $0.99/hr    | FluidStack
NVIDIA A10 24GB           | 24 GB  | 125         | $0.75/hr    |
NVIDIA L4 24GB            | 24 GB  | 242         | $0.49/hr    | Hyperstack
NVIDIA T4 16GB            | 16 GB  | 65          | $0.44/hr    | Replicate
NVIDIA H200 141GB SXM     | 141 GB | 1979        |             |
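
One quick way to read the table is price per unit of compute: dividing the cheapest hourly rate by FP16 TFLOPS gives a rough $/TFLOP-hour figure. A minimal sketch using the A100 and RTX 4090 rows above; it deliberately ignores VRAM, interconnect, and availability, so treat it as a first-pass filter only.

    # Rough value metric from the table above: hourly price / FP16 TFLOPS.
    # Uses only the A100 and RTX 4090 rows; VRAM, interconnect, and
    # availability are ignored in this comparison.
    offers = {
        "NVIDIA A100 80GB SXM": (1.85, 312),    # ($/hr, FP16 TFLOPS)
        "NVIDIA GeForce RTX 4090": (0.24, 330),
    }
    for gpu, (price, tflops) in offers.items():
        print(f"{gpu}: ${price / tflops:.4f} per TFLOP-hour")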

Providers

See all 15 →
RunPod
per-second · per-hour
us-east, us-west +2
Vast.ai
per-hour
us-east, us-west +3
Lambda Labs
per-hour
us-west-1, us-west-2 +5
Paperspace
per-hour · per-month
us-east-1, us-west-2 +2
Modal
per-second
us-east, us-west +1
CoreWeave
per-hour · per-month · reserved
us-east-1, us-east-2 +3
TensorDock
per-hour
us-east, us-west +4
Genesis Cloud
per-hour · per-month
eu-west-1, eu-central-1 +1
Hyperstack
per-hour · per-month
us-east-1, eu-west-1 +1

Popular Comparisons

RunPod vs Vast.ai
Most popular comparison
H100 vs A100
When the upgrade pays off
GPU for SDXL
Stable Diffusion XL inference
GPU for 70B LLMs
Min 140 GB VRAM required (see the sketch after this list)
Cheapest RTX 4090 for SDXL
24GB VRAM, best $/image
Lambda vs RunPod
H100 and A100 pricing
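
On the 140 GB figure for 70B-parameter models: that is the weights alone at FP16, roughly 2 bytes per parameter; KV cache and activations add more on top. A back-of-the-envelope sketch, assuming FP16/BF16 weights (quantized weights need less):

    # Weights-only VRAM floor for a 70B-parameter model at FP16/BF16.
    # Assumes 2 bytes per parameter; KV cache and activations come on top.
    params = 70e9
    bytes_per_param = 2
    print(f"{params * bytes_per_param / 1e9:.0f} GB")  # -> 140 GB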

Public Pricing API

All pricing data is freely available at /api/prices.json. No auth required. Updated weekly.

GET https://pricegpu.com/api/prices.json →
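
A minimal sketch of pulling the feed and picking out the cheapest listed H100 price. The URL is the endpoint shown above; the response shape assumed here (a list of offers with gpu, provider, and price_per_hour fields) is an illustration, not a documented schema, so check the live JSON before relying on it.

    import requests

    # Public pricing feed: no auth required, updated weekly (see note above).
    resp = requests.get("https://pricegpu.com/api/prices.json", timeout=10)
    resp.raise_for_status()
    offers = resp.json()

    # Field names below are assumed for illustration; inspect the real JSON
    # for the actual schema before using this in anything serious.
    h100 = [o for o in offers if "H100" in o.get("gpu", "")]
    if h100:
        cheapest = min(h100, key=lambda o: o["price_per_hour"])
        print(f'{cheapest["provider"]}: ${cheapest["price_per_hour"]}/hr')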

Latest from the Blog

H100 vs A100: When the Upgrade Pays Off
Break-even analysis for common workloads
Spot vs On-Demand GPU Instances
When preemptible instances make sense
RTX 4090: Cloud vs Buying
Real break-even calculation
All blog posts →