pricegpu

Best Cloud GPU for SDXL inference

Minimum 12GB VRAM · Recommended 24GB+ · Runtime: seconds-per-image

Cheapest for SDXL inference: NVIDIA GeForce RTX 3090 24GB on Salad
$0.130/hr · verify on provider site

Cheapest GPU Options — 25 eligible GPUs (15 shown)

| Provider | Configuration | Region | Billing | Availability | Price/hr |
|---|---|---|---|---|---|
| Salad (cheapest) | 1x RTX 3090 | distributed | per-minute | on-demand | $0.130/hr |
| Salad | 1x RTX 4080 | distributed | per-minute | on-demand | $0.170/hr |
| vast | 1x RTX 3090 | eu-central | per-second | on-demand | $0.200/hr |
| TensorDock | 1x RTX 3090 | eu-west | per-minute | on-demand | $0.220/hr |
| Salad | 1x RTX 4090 | distributed | per-minute | on-demand | $0.240/hr |
| vast | 1x RTX 4080 | us-west | per-second | on-demand | $0.280/hr |
| genesis | 1x RTX 3090 | eu-central | per-minute | on-demand | $0.290/hr |
| RunPod | 1x RTX 3090 | us-east | per-second | on-demand | $0.290/hr |
| vast | 1x RTX 4090 | eu-west | per-second | on-demand | $0.350/hr |
| TensorDock | 1x RTX 4080 | us-east | per-minute | on-demand | $0.350/hr |
| FluidStack | 1x RTX 4090 | eu-central | per-minute | on-demand | $0.440/hr |
| RunPod | 1x RTX 4090 | us-east | per-second | on-demand | $0.440/hr |
| Replicate | Nvidia T4 (16GB) | us-east | per-second | on-demand | $0.440/hr |
| Hyperstack | 1x L4 24GB | uk-london | per-minute | on-demand | $0.490/hr |
| DataCrunch | 1x RTX 4090 | eu-north | per-minute | on-demand | $0.490/hr |

GPU Requirements

Minimum VRAM: 12 GB
Recommended VRAM: 24 GB
Ideal GPUs: NVIDIA GeForce RTX 4090 24GB, NVIDIA A10 24GB
Typical Runtime: seconds-per-image
Billing Pattern: spiky
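The eligibility rule behind the table above (at least 12 GB of VRAM, sorted by hourly price) can be sketched as a simple filter. The offer records below are copied from the price table; the field names and the sub-12GB example row are illustrative, not part of any provider API.

```python
# Filter GPU offers by the page's minimum-VRAM rule and sort by hourly price.
# Offer data comes from the table above; field names are illustrative.
offers = [
    {"provider": "Salad", "gpu": "RTX 3090", "vram_gb": 24, "price_hr": 0.130},
    {"provider": "Replicate", "gpu": "T4", "vram_gb": 16, "price_hr": 0.440},
    # Hypothetical sub-minimum card, included only to show the filter working:
    {"provider": "ExampleCo", "gpu": "RTX 3060", "vram_gb": 8, "price_hr": 0.080},
]

MIN_VRAM_GB = 12  # minimum VRAM for SDXL inference, per the requirements above

eligible = sorted(
    (o for o in offers if o["vram_gb"] >= MIN_VRAM_GB),
    key=lambda o: o["price_hr"],
)

# Cheapest eligible offer comes out first.
print(eligible[0]["provider"], eligible[0]["price_hr"])  # → Salad 0.13
```

The hypothetical 8 GB card is dropped despite being the cheapest listing, which is why the page's "cheapest" pick is the RTX 3090 rather than a smaller card.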

FAQ

What GPU do I need for SDXL inference?
Requires at least 12GB VRAM. Recommended: 24GB+. Ideal: NVIDIA GeForce RTX 4090 24GB, NVIDIA A10 24GB.
What is the cheapest GPU for SDXL inference?
NVIDIA GeForce RTX 3090 24GB at $0.130/hr on Salad.
How much does SDXL inference cost per hour?
From $0.130/hr. Runtime: seconds-per-image.
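Because these providers bill per-second or per-minute, an hourly price converts directly into a per-image cost. A minimal sketch, assuming an illustrative generation time of ~5 seconds per image (this timing is an assumption for the example, not a benchmark from this page):

```python
def cost_per_image(price_per_hour: float, seconds_per_image: float) -> float:
    """Convert an hourly GPU price into an approximate per-image cost."""
    return price_per_hour * seconds_per_image / 3600.0

# Salad RTX 4090 at $0.240/hr, assuming ~5 s/image (illustrative timing):
print(f"${cost_per_image(0.240, 5):.6f} per image")  # → $0.000333 per image
```

At these rates the hourly price matters far less than throughput: halving seconds-per-image halves the cost-per-image just as effectively as halving the hourly rate.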

GPU-Specific Pages

NVIDIA H100 80GB SXM for SDXL inference · 80GB VRAM
NVIDIA H100 80GB PCIe for SDXL inference · 80GB VRAM
NVIDIA H200 141GB SXM for SDXL inference · 141GB VRAM
NVIDIA A100 80GB SXM for SDXL inference · 80GB VRAM