
RunPod GPU Cloud Pricing

From $0.290/hr

Consumer and data-center GPU cloud with spot and on-demand instances, a large template marketplace, and a serverless inference platform.

Start on RunPod

Current Pricing — 8 configurations

Provider | Configuration            | Region  | Billing    | Availability | Price/hr
RunPod   | 1x RTX 3090 (cheapest)   | us-east | per-second | on-demand    | $0.290/hr
RunPod   | 1x RTX 4090              | us-east | per-second | on-demand    | $0.440/hr
RunPod   | 1x A40                   | us-east | per-second | on-demand    | $0.590/hr
RunPod   | 1x L40S                  | us-east | per-second | on-demand    | $1.14/hr
RunPod   | 1x A100 PCIe 80GB        | us-east | per-second | on-demand    | $1.64/hr
RunPod   | 1x A100 SXM              | us-east | per-second | on-demand    | $1.89/hr
RunPod   | 1x H100 PCIe             | us-east | per-second | on-demand    | $1.99/hr
RunPod   | 1x H100 SXM              | us-east | per-second | on-demand    | $2.49/hr
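Per-second billing means a short job costs only its prorated fraction of the hourly rate. A minimal sketch of that arithmetic, using the on-demand us-east prices from the table above (the rate dictionary and `job_cost` helper are illustrative, not a RunPod API; prices may change, so verify on RunPod's site):

```python
# Illustrative only: hourly rates copied from the table above, not a live API.
HOURLY_RATES = {
    "RTX 3090": 0.290,
    "RTX 4090": 0.440,
    "A40": 0.590,
    "L40S": 1.14,
    "A100 PCIe 80GB": 1.64,
    "A100 SXM": 1.89,
    "H100 PCIe": 1.99,
    "H100 SXM": 2.49,
}

def job_cost(gpu: str, seconds: int) -> float:
    """Cost in USD of `seconds` on one GPU under per-second billing."""
    return round(HOURLY_RATES[gpu] / 3600 * seconds, 4)

# A 90-minute run on an H100 SXM costs 1.5 hours * $2.49/hr:
print(job_cost("H100 SXM", 90 * 60))  # → 3.735
```

For example, the same 90 minutes on an RTX 3090 would come to about $0.44, roughly an eighth of the H100 SXM cost.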

Provider Details

Founded: 2022
Billing: per-second, per-hour
Regions: us-east, us-west, eu-central, asia-pacific
Features: spot, on-demand, serverless, persistent-storage, custom-templates
Trust Score: 4.5/5
Website: https://www.runpod.io

FAQ

What GPUs does RunPod offer?
1x H100 SXM, 1x H100 PCIe, 1x A100 SXM, 1x A100 PCIe 80GB, 1x RTX 4090, 1x L40S, 1x RTX 3090, 1x A40
Where are RunPod data centers located?
RunPod operates in: us-east, us-west, eu-central, asia-pacific.
How does RunPod bill for GPU usage?
RunPod supports per-second and per-hour billing.
Is RunPod reliable for production workloads?
RunPod has a trust score of 4.5/5 and offers spot and on-demand instances, serverless inference, persistent storage, and custom templates.

Last data refresh: April 29, 2026. Verify current pricing on RunPod's site.

Related Providers

TensorDock
Budget-friendly GPU marketplace aggregating data-center hardware from multiple hosts, offering broad GPU variety at competitive hourly rates.
Modal
Developer-first serverless GPU platform with a Python-native SDK, per-second billing, and automatic cold-start optimization for ML workloads.
Vast.ai
Peer-to-peer GPU marketplace that aggregates idle hardware from independent hosts, offering some of the lowest per-hour rates available.
FluidStack
GPU cloud aggregator and broker offering large-scale H100 and A100 clusters for AI training at competitive rates sourced from global data centers.
Fal.ai
Serverless inference platform optimized for generative media workloads (images, video, audio) with sub-second cold starts and real-time streaming.
Together AI
AI cloud focused on fast serverless inference for open-source models with per-token pricing, plus dedicated GPU instances and fine-tuning APIs.