
Modal GPU Cloud Pricing

from $0.590/hr

Developer-first serverless GPU platform with a Python-native SDK, per-second billing, and automatic cold-start optimization for ML workloads.


Current Pricing — 7 configurations

Provider | Configuration | Region | Billing | Availability | Price/hr
Modal | 1x T4 (cheapest) | us-east | per-second | on-demand | $0.590
Modal | 1x RTX 4090 | us-east | per-second | on-demand | $1.10
Modal | 1x A10G | us-east | per-second | on-demand | $1.10
Modal | 1x L40S | us-east | per-second | on-demand | $1.95
Modal | 1x A100 PCIe 40GB | us-east | per-second | on-demand | $2.78
Modal | 1x A100 SXM | us-east | per-second | on-demand | $3.72
Modal | 1x H100 SXM | us-east | per-second | on-demand | $3.95

Provider Details

Founded: 2021
Billing: per-second
Regions: us-east, us-west, eu-central
Features: serverless, on-demand, auto-scaling, python-native, scheduled-jobs
Trust Score: 4.7/5
Website: https://modal.com

FAQ

What GPUs does Modal offer?
1x H100 SXM, 1x A100 SXM, 1x A100 PCIe 40GB, 1x RTX 4090, 1x L40S, 1x T4, 1x A10G
Where are Modal data centers located?
Modal operates in: us-east, us-west, eu-central.
How does Modal bill for GPU usage?
Modal bills per second of compute, so you pay only for the time your workload actually runs rather than for a full hour.
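Per-second billing means a short job costs only a prorated fraction of the hourly rate. A minimal sketch in plain Python (the rates come from the pricing table above; the `job_cost` helper is illustrative, not part of Modal's SDK):

```python
# Hourly on-demand rates from the table above, in USD.
RATES_PER_HOUR = {
    "T4": 0.590,
    "A100 SXM": 3.72,
    "H100 SXM": 3.95,
}

def job_cost(gpu: str, seconds: float) -> float:
    """Cost in dollars for `seconds` of compute on `gpu` under per-second billing."""
    return RATES_PER_HOUR[gpu] * seconds / 3600

# A two-minute inference job on an H100 SXM costs about 13 cents,
# not the full $3.95 an hourly-billed provider would charge.
print(f"${job_cost('H100 SXM', 120):.4f}")
```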
Is Modal reliable for production workloads?
Modal has a trust score of 4.7/5, and its feature set (serverless, on-demand, auto-scaling, python-native, scheduled-jobs) targets production ML workloads.

Last data refresh: April 29, 2026. Verify on Modal's site.

Related Providers

RunPod
Consumer and data-center GPU cloud with spot and on-demand instances, a large template marketplace, and a serverless inference platform.
Fal.ai
Serverless inference platform optimized for generative media workloads (images, video, audio) with sub-second cold starts and real-time streaming.
Replicate
Serverless platform for running and hosting machine learning models via API, billed per second of GPU compute with a large community model library.
FluidStack
GPU cloud aggregator and broker offering large-scale H100 and A100 clusters for AI training at competitive rates sourced from global data centers.
TensorDock
Budget-friendly GPU marketplace aggregating data-center hardware from multiple hosts, offering broad GPU variety at competitive hourly rates.
Together AI
AI cloud focused on fast serverless inference for open-source models with per-token pricing, plus dedicated GPU instances and fine-tuning APIs.