Get Started

Get started with CoreWeave

Account setup, environment configuration, running your first workloads, and more.

Products

Compute

CoreWeave offers two compute options for running containerized workloads:

CoreWeave Kubernetes Service (CKS)

Managed Kubernetes on bare metal for training, inference, and HPC.

SUNK

Slurm on Kubernetes for batch and burst workloads.
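Since SUNK exposes a standard Slurm interface, batch jobs are submitted the usual way with sbatch. A minimal sketch of a generic Slurm batch script; the job name, node counts, and training script are illustrative placeholders, not CoreWeave-specific values:

```shell
#!/bin/bash
# Generic Slurm batch script: request 2 nodes with 8 GPUs each,
# then launch the training command across them with srun.
#SBATCH --job-name=train-example
#SBATCH --nodes=2
#SBATCH --gpus-per-node=8
#SBATCH --time=04:00:00

srun python train.py
```

Submitted with `sbatch train.sh`; the directives shown are standard Slurm options and may need adjusting to the partitions and GPU types available in a given cluster.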

Storage

CoreWeave storage solutions support the data requirements of AI and ML workloads:

CoreWeave AI Object Storage

S3-compatible storage for datasets, model weights, and checkpoints.
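Because the store is S3-compatible, standard S3 tooling can target it by overriding the default endpoint. A minimal sketch using the AWS CLI's `--endpoint-url` flag; the endpoint URL and bucket name below are placeholders, not real CoreWeave values:

```shell
# Upload a checkpoint to an S3-compatible bucket by pointing the
# AWS CLI at a custom endpoint (placeholder URL and bucket shown).
aws s3 cp ./model-checkpoint.pt s3://my-training-bucket/checkpoints/ \
    --endpoint-url https://example-object-storage.endpoint

# List the uploaded objects against the same endpoint.
aws s3 ls s3://my-training-bucket/checkpoints/ \
    --endpoint-url https://example-object-storage.endpoint
```

The same endpoint override works with other S3 clients (for example, `endpoint_url` in boto3), assuming credentials are configured for the target account.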

Distributed File Storage

POSIX shared filesystem for multi-node access and distributed training.

Local Storage

Ephemeral storage on GPU nodes for scratch space and caching.

Networking

CoreWeave networking products create secure, high-performance connections between your resources and services:

Virtual Private Clouds (VPCs)

Isolated networks for CKS clusters.

HPC Interconnect

GPUDirect RDMA with InfiniBand for GPU-to-GPU communication.

Direct Connect

Private links via Equinix and Megaport.

IP Addresses

Public IPv4 and Bring Your Own IP.

Ingress Service

Public DNS names for services.

Platform

Instances

GPU and CPU instance specifications and how to select the right instance for your workload.

Regions and Availability Zones

Global infrastructure, regions, and availability for CoreWeave products.

Pricing and Billing

Pricing for instances, networking, and storage, plus invoices and billing.

Observability

CoreWeave Grafana, logs, metrics, and telemetry for monitoring workloads.

Security

IAM, access policies, and security best practices for AI workloads.

Changelog

Release notes and product updates for CoreWeave services.
Last modified on March 27, 2026