- High aggregate performance: Up to 300 GB/s aggregate read throughput, 150 GB/s aggregate write throughput, 1.5M read IOPS, 750k write IOPS
- POSIX-compliant: Full filesystem semantics
- Provisioned volume size: Volume size is defined at creation and can be increased at any time. See pricing for details.
- Ideal for: datasets for distributed training, storing model checkpoints
| Metric | Performance |
|---|---|
| Read Throughput | 11,000 MB/s |
| Write Throughput | 5,000 MB/s |
| Read IOPS | 10,000 |
| Write IOPS | 4,500 |
| Avg Read Latency | 2ms |
| Avg Write Latency | 6ms |
| p99 Read Latency | 8ms |
| p99 Write Latency | 20ms |
Shared storage volumes can only be attached to multi-node GPU clusters and CPU instances. For more flexible storage, use TensorPool Object Storage (S3-compatible).
Quick Start
Core Commands
- `tp storage create <size_gb>` - Create a new storage volume
- `tp storage list` - View all your storage volumes
- `tp cluster attach <cluster_id> <storage_id>` - Attach storage to a cluster
- `tp cluster detach <cluster_id> <storage_id>` - Detach storage from a cluster
- `tp storage destroy <storage_id>` - Delete a storage volume
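Taken together, a typical volume lifecycle with these commands looks like the following sketch. The 500 GB size is illustrative, and the angle-bracket placeholders stand in for the real IDs reported by `tp storage list`:

```bash
# Create a 500 GB volume (size is illustrative)
tp storage create 500

# Find the new volume's ID
tp storage list

# Attach it to a cluster, use it, then detach when done
tp cluster attach <cluster_id> <storage_id>
tp cluster detach <cluster_id> <storage_id>

# Delete the volume once it is no longer needed
tp storage destroy <storage_id>
```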
Creating Storage Volumes
Create storage volumes by specifying a size in GB.
Attaching and Detaching
Attach storage volumes to a cluster.
Storage Locations
When you attach a storage volume to your cluster, it will be mounted on each instance at:
Storage Statuses
Storage volumes progress through various statuses throughout their lifecycle:

| Status | Description |
|---|---|
| PENDING | Storage creation request has been submitted and is being queued for provisioning. |
| PROVISIONING | Storage has been allocated and is being provisioned. |
| READY | Storage is ready for use. |
| ATTACHING | Storage is being attached to a cluster. |
| DETACHING | Storage is being detached from a cluster. |
| DESTROYING | Storage deletion in progress, resources are being deallocated. |
| DESTROYED | Storage has been successfully deleted. |
| FAILED | System-level problem (e.g., no capacity, hardware failure, etc.). |
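Since a volume can only be used once it reports READY, scripts that provision storage may want to wait for that state before attaching. A minimal sketch, assuming `tp storage list` includes each volume's ID and status in its output (`<storage_id>` is a placeholder):

```bash
# Poll until the volume reports READY.
# Assumes the output of `tp storage list` contains the volume ID and its status.
until tp storage list | grep "<storage_id>" | grep -q "READY"; do
  sleep 10
done
```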
Best Practices
- Data persistence: Use storage volumes for important data that needs to persist across cluster lifecycles
- Shared data: Attach the same storage volume to multiple clusters to share data
- Object storage for archives: For cost-effective persistent storage without size limits, see Object Storage
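For example, the shared-data pattern above amounts to attaching one volume to several clusters (the cluster and storage IDs below are placeholders):

```bash
# Both clusters see the same files through the shared volume
tp cluster attach <training_cluster_id> <storage_id>
tp cluster attach <eval_cluster_id> <storage_id>
```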
Next Steps
- Review best practices for storage workflows
- See the CLI reference for detailed command options