Clusters are ideal for interactive development, debugging, and building new projects. If you have working code and want to run experiments, Jobs are recommended instead.
Make sure you’ve installed the TensorPool CLI and configured your API key.

Create Your First GPU Cluster

Create a 1xB200
tp cluster create 1xB200
For multi-node training, create a 4-node 8xB200 cluster:
tp cluster create 8xB200 -n 4
You can also create a cluster with a pre-built container image (includes CUDA, Python, and ML libraries):
tp cluster create 1xB200 --container pytorch
See instance types for all available GPU configurations and container images for available images.

Check Your Cluster Status

The tp cluster create command will give you a cluster ID (e.g., c-abc123). Use it to check your cluster’s status:
tp cluster info <cluster_id>
Wait until the status shows RUNNING. The output will list your cluster's instances, each with an instance ID (e.g., i-xyz789).
If you lose the cluster ID, you can always find it with tp cluster list.
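In automation you may want to block until the cluster reaches RUNNING rather than checking by hand. A hedged polling sketch: the `get_info` callable is a stand-in for running `tp cluster info <cluster_id>` and capturing its output, and the check simply looks for the RUNNING status string in that output (an assumption about the format):

```python
import time
from typing import Callable


def wait_for_running(get_info: Callable[[], str],
                     timeout_s: float = 600,
                     poll_s: float = 5) -> bool:
    """Poll until the info output reports RUNNING, or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if "RUNNING" in get_info():
            return True
        time.sleep(poll_s)
    return False


# In practice get_info would shell out, e.g.:
#   lambda: subprocess.run(["tp", "cluster", "info", cluster_id],
#                          capture_output=True, text=True).stdout
# Stubbed here: the cluster becomes RUNNING on the third poll.
states = iter(["PENDING", "STARTING", "RUNNING"])
print(wait_for_running(lambda: next(states), timeout_s=5, poll_s=0))  # True
```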

SSH Into Your Cluster

Once your cluster status is RUNNING, grab the instance ID from tp cluster info and connect:
tp ssh <instance_id>
For multi-node clusters, SSH into the jumphost instance first. From there, you can access worker nodes by name (e.g., ssh <cluster_id>-0).
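From the jumphost, that worker naming convention makes it easy to fan a command out to every node. A small sketch that builds the ssh targets; it assumes only the `<cluster_id>-<index>` naming shown in the example above, with workers indexed from 0:

```python
def worker_hosts(cluster_id: str, num_nodes: int) -> list[str]:
    """Worker hostnames under the <cluster_id>-<index> convention (indexed from 0)."""
    return [f"{cluster_id}-{i}" for i in range(num_nodes)]


# e.g. check the hostname of every worker in a 4-node cluster from the jumphost
# (commands are printed, not executed, in this sketch):
for host in worker_hosts("c-abc123", 4):
    print(f"ssh {host} hostname")
```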

Clean Up

When you’re done, destroy your cluster:
tp cluster destroy <cluster_id>

Next Steps