
Cluster Management

TensorPool makes it easy to deploy and manage GPU clusters of any size, from single GPUs to large multi-node configurations.

Core Commands

Cluster Management

  • tp cluster create - Deploy a new GPU cluster
  • tp cluster list - View all your clusters
  • tp cluster info <cluster_id> - Get detailed information about a cluster
  • tp cluster edit <cluster_id> - Edit cluster settings
  • tp cluster destroy <cluster_id> - Terminate a cluster
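Put together, a typical session with these commands might look like the following sketch. The instance type is just one of the supported options, and `<cluster_id>` is a placeholder you would copy from the `tp cluster list` output:

```shell
# Deploy a single-GPU cluster using your SSH public key
tp cluster create -i ~/.ssh/id_ed25519.pub -t 1xH100

# List your clusters to find the new cluster's ID
tp cluster list

# Inspect the cluster (replace <cluster_id> with the ID from the list)
tp cluster info <cluster_id>

# Tear the cluster down when you are done
tp cluster destroy <cluster_id>
```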

Creating Clusters

Deploy GPU clusters with simple commands. TensorPool supports both single-node and multi-node cluster configurations.

Single-Node Clusters

Single-node clusters are ideal for development, experimentation, and smaller training workloads. They provide direct access to GPU resources without the complexity of distributed training.

Supported Instance Types

Single-node clusters support a wide variety of GPU configurations:
# Single H100
tp cluster create -i ~/.ssh/id_ed25519.pub -t 1xH100

# Single node with 8x H200
tp cluster create -i ~/.ssh/id_ed25519.pub -t 8xH200

# Single node with 8x B200
tp cluster create -i ~/.ssh/id_ed25519.pub -t 8xB200

# Single node MI300X
tp cluster create -i ~/.ssh/id_ed25519.pub -t 1xMI300X

Accessing Single-Node Clusters

Single-node clusters provide direct SSH access. Once your cluster is ready:
# Get cluster information to find the instance ID
tp cluster info <cluster_id>

# SSH directly into the instance
tp ssh <instance_id>

# Run your training script directly on the node
python train.py

Multi-Node Clusters

Multi-node clusters are designed for distributed training workloads that require scaling across multiple machines. All multi-node clusters come with SLURM preinstalled for job scheduling and resource management.

Supported Instance Types

Multi-node support is currently available for:
  • 8xH200 - 2 or more nodes, each with 8 H200 GPUs
  • 8xB200 - 2 or more nodes, each with 8 B200 GPUs

Creating Multi-Node Clusters

Create multi-node clusters by specifying the number of nodes with the -n flag:
# 2-node cluster with 8xH200 each (16 GPUs total)
tp cluster create -i ~/.ssh/id_ed25519.pub -t 8xH200 -n 2

# 4-node cluster with 8xB200 each (32 GPUs total)
tp cluster create -i ~/.ssh/id_ed25519.pub -t 8xB200 -n 4
Multi-node support is currently available for 8xH200 and 8xB200 instance types only.

Accessing Multi-Node Clusters

All multi-node clusters come with SLURM preinstalled and configured. For detailed information about using SLURM for distributed training, see the Multi-Node Training Guide.
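As a quick orientation before the full guide, a minimal SLURM batch script submitted from the jumphost might look like this sketch. The job name, node count, and training script path are placeholder assumptions, and the `--gres` line assumes GPUs are exposed under the conventional `gpu` resource name:

```shell
#!/bin/bash
#SBATCH --job-name=train        # placeholder job name
#SBATCH --nodes=2               # matches a 2-node 8xH200/8xB200 cluster
#SBATCH --ntasks-per-node=1     # one launcher process per node
#SBATCH --gres=gpu:8            # request all 8 GPUs on each node

# Launch one copy of the training script per allocated node
srun python train.py
```

You would save this as, say, `train.sbatch` and submit it from the jumphost with `sbatch train.sbatch`.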

Cluster Architecture

Multi-node clusters use a jumphost architecture for network access and consist of:
  • Jumphost: {cluster_id}-jumphost - The SLURM login/controller node with a public IP address
  • Worker Nodes: {cluster_id}-0, {cluster_id}-1, etc. - Compute nodes with private IP addresses only

Accessing Your Cluster

Follow these steps to access your multi-node cluster:
  1. Get cluster information to see all nodes and their instance IDs:
tp cluster info <cluster_id>
  2. SSH into the jumphost (this is the only node with direct public access):
tp ssh <jumphost-instance-id>
  3. Access worker nodes from the jumphost. You can use either the instance name or private IP:
# Using instance name (replace <cluster_id> with your actual cluster ID)
ssh <cluster_id>-0
ssh <cluster_id>-1

# Or using the private IP address (found in cluster info)
ssh <worker-node-private-ip>
Note: The jumphost serves as the SLURM login node where you submit distributed training jobs. Worker nodes are only accessible from within the cluster network.
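If you prefer to reach a worker node in one hop from your local machine, a standard OpenSSH ProxyJump configuration can tunnel through the jumphost. This is a sketch using OpenSSH's generic mechanism, not a TensorPool feature; the host aliases are hypothetical, and the IPs are the ones reported by tp cluster info:

```
# ~/.ssh/config — aliases are placeholders; fill in IPs from `tp cluster info`
Host my-jumphost
    HostName <jumphost-public-ip>
    IdentityFile ~/.ssh/id_ed25519

Host my-worker-0
    HostName <worker-node-private-ip>
    ProxyJump my-jumphost
    IdentityFile ~/.ssh/id_ed25519
```

With this in place, ssh my-worker-0 connects through the jumphost automatically.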

Cluster and Instance Statuses

A cluster’s status is derived from the statuses of its individual instances. Each instance within a cluster progresses through its own lifecycle, and the cluster’s displayed status reflects the highest-priority status among all its instances.

Instance Status Lifecycle

Each instance in a cluster follows this lifecycle: PENDING → PROVISIONING → CONFIGURING → RUNNING → DESTROYING → DESTROYED. A FAILED status indicates a system-level problem.

Status Definitions

  • PENDING - Instance creation request has been submitted and is being queued for provisioning.
  • PROVISIONING - Instance has been allocated and is being provisioned.
  • CONFIGURING - Instance is being configured with software, drivers, networking, and storage.
  • RUNNING - Instance is ready for use.
  • DESTROYING - Instance shutdown is in progress; resources are being deallocated.
  • DESTROYED - Instance has been successfully terminated.
  • FAILED - System-level problem (e.g., hardware failure, no capacity).

Cluster Status Priority

A cluster’s status is determined by the highest-priority status among its instances. Priority order (highest to lowest):
  1. FAILED - Any failed instance causes the cluster to show as failed
  2. DESTROYING - Cluster is being torn down
  3. PENDING - Instances are waiting to be provisioned
  4. PROVISIONING - Instances are being provisioned
  5. CONFIGURING - Instances are being configured
  6. RUNNING - All instances are running
  7. DESTROYED - All instances have been terminated
For example, if a cluster has 3 instances where 2 are RUNNING and 1 is CONFIGURING, the cluster status will show as CONFIGURING.
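The priority rule above can be sketched as a small shell function. This is purely illustrative, not TensorPool's actual implementation; the function names are hypothetical, and a lower number here means higher priority:

```shell
# Map each status to its priority rank (1 = highest priority)
priority() {
  case "$1" in
    FAILED)       echo 1 ;;
    DESTROYING)   echo 2 ;;
    PENDING)      echo 3 ;;
    PROVISIONING) echo 4 ;;
    CONFIGURING)  echo 5 ;;
    RUNNING)      echo 6 ;;
    DESTROYED)    echo 7 ;;
  esac
}

# Derive the cluster status: the highest-priority status among all instances
cluster_status() {
  local best=DESTROYED
  for s in "$@"; do
    if [ "$(priority "$s")" -lt "$(priority "$best")" ]; then
      best=$s
    fi
  done
  echo "$best"
}

cluster_status RUNNING RUNNING CONFIGURING   # prints CONFIGURING
```

Running the function on the example from the text (two RUNNING instances and one CONFIGURING) yields CONFIGURING, matching the documented behavior.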

Next Steps