pricegpu

Cheapest NVIDIA A100 80GB SXM for 70B LLM inference

80 GB VRAM · 312 TFLOPS FP16 · 80 GB minimum for 70B LLM inference

Best price: $1.85/hr on FluidStack
on-demand · us-east · verify before purchasing

NVIDIA A100 80GB SXM Prices — 12 offers

12 offers found, from $1.85/hr as of Apr 29, 2026. Verify pricing on the provider's site; billing granularity varies by provider (see the cost sketch after the table).
Provider | Configuration | Region | Billing | Availability | Price/hr
FluidStack (cheapest) | 1x A100 SXM 80GB | us-east | per-minute | on-demand | $1.85/hr
DataCrunch | 1x A100 SXM4 80GB | eu-north | per-minute | on-demand | $1.89/hr
RunPod | 1x A100 SXM | us-east | per-second | on-demand | $1.89/hr
Hyperstack | 1x A100 SXM4 80GB | uk-london | per-minute | on-demand | $2.06/hr
CoreWeave | 1x A100 SXM4 80GB | us-east | per-second | on-demand | $2.21/hr
Lambda | 1x A100 SXM | us-west-2 | per-minute | on-demand | $2.21/hr
Paperspace | 1x A100 SXM | us-east | per-minute | on-demand | $2.30/hr
Together AI | 1x A100 SXM 80GB | us-east | per-second | on-demand | $2.49/hr
fal | 1x A100 SXM 80GB | us-east | per-millisecond | on-demand | $2.99/hr
Replicate | NVIDIA A100 (80GB, SXM) | us-east | per-second | on-demand | $3.24/hr
Modal | 1x A100 SXM | us-east | per-second | on-demand | $3.72/hr
Lambda | 8x A100 SXM | us-west-2 | per-minute | on-demand | $17.68/hr
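For short or bursty jobs, billing granularity matters almost as much as the headline rate: per-minute providers round each billed period up to the next minute, while per-second or per-millisecond billing tracks actual usage more closely. Below is a minimal cost sketch under the assumption that each provider rounds runtime up to its billing increment; the rates and increments come from the table, but the rounding rule and the 500 x 40 s workload are illustrative assumptions, so check each provider's billing docs before relying on the numbers.

```python
import math

# Hourly rates and billing increments (in seconds) from the table above.
# Assumption: each session's runtime is rounded up to the billing increment.
OFFERS = {
    "FluidStack": {"rate_per_hr": 1.85, "increment_s": 60.0},    # per-minute
    "RunPod":     {"rate_per_hr": 1.89, "increment_s": 1.0},     # per-second
    "fal":        {"rate_per_hr": 2.99, "increment_s": 0.001},   # per-millisecond
}

def session_cost(provider: str, runtime_s: float) -> float:
    """Cost of one billed session after rounding up to the increment."""
    offer = OFFERS[provider]
    billed_s = math.ceil(runtime_s / offer["increment_s"]) * offer["increment_s"]
    return offer["rate_per_hr"] * billed_s / 3600.0

if __name__ == "__main__":
    # Hypothetical workload: 500 separate sessions of ~40 s each.
    for name in OFFERS:
        total = 500 * session_cost(name, 40.0)
        print(f"{name:<11} ${total:7.2f} for 500 x 40 s sessions")
```

On a long-running, fully utilized instance the differences collapse back to the hourly rate, so billing granularity mostly matters for autoscaling or per-request serving.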

Compatibility

GPU VRAM: 80 GB ✓
Minimum required: 80 GB (met)
Recommended: 160 GB (this GPU meets the minimum only)
Typical performance metric: tokens per second
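The 80 GB minimum and 160 GB recommendation follow from weight-memory arithmetic: 70B parameters at FP16 are roughly 140 GB of weights, while 8-bit or 4-bit quantization brings them under 80 GB. The rough estimator below is a sketch; the bytes-per-parameter values, the 1.1 overhead factor for KV cache and runtime buffers, and the use of decimal gigabytes are assumptions, not measured numbers.

```python
def estimate_vram_gb(params_billions: float, bytes_per_param: float,
                     overhead: float = 1.1) -> float:
    """Weights-only VRAM estimate (decimal GB), scaled by an assumed
    overhead factor for KV cache, activations, and framework buffers."""
    return params_billions * bytes_per_param * overhead

if __name__ == "__main__":
    for label, bpp in [("FP16", 2.0), ("INT8", 1.0), ("4-bit", 0.5)]:
        gb = estimate_vram_gb(70, bpp)
        verdict = "fits on 1x 80 GB" if gb <= 80 else "needs 2x 80 GB or more"
        print(f"70B @ {label:<5} ~{gb:6.1f} GB  ({verdict})")
```

By this estimate a 70B model fits on a single A100 80GB only when quantized to 8-bit or lower; FP16 weights need two cards, which is what the 160 GB recommendation reflects.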

FAQ

Is the NVIDIA A100 80GB SXM good for 70B LLM inference?
Yes, at the minimum configuration. 70B LLM inference needs at least 80 GB of VRAM and the NVIDIA A100 80GB SXM provides exactly that, enough for a quantized 70B model; unquantized FP16 weights call for the recommended 160 GB (e.g. 2x A100 80GB).
What is the cheapest NVIDIA A100 80GB SXM for 70B LLM inference?
FluidStack at $1.85/hr (on-demand, us-east).

Last refresh: April 29, 2026. Verify on provider site.

Related pages

NVIDIA A100 80GB SXM pricing (all providers)
Best GPU for 70B LLM inference
NVIDIA H100 80GB SXM for 70B LLM inference (80GB VRAM)
NVIDIA H100 80GB PCIe for 70B LLM inference (80GB VRAM)
NVIDIA H200 141GB SXM for 70B LLM inference (141GB VRAM)