Frontier AI teams can now deploy high-performance GPU clusters in minutes with Together Instant GPU Clusters, accelerated by up to 64 NVIDIA GPUs per cluster. This self-service solution offers flexible deployment options, including Kubernetes or Slurm for workload orchestration, and lets users choose their own NVIDIA Driver and CUDA versions. Pricing is simple and transparent, with competitive rates for high-performance compute. Because provisioning is instant, teams can skip lengthy approvals, procurement cycles, sales conversations, and capacity planning, and get straight to accelerating AI workloads with ultra-low-latency, high-throughput performance.
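
As a rough illustration (not part of the Together product itself), the sketch below assumes a freshly provisioned node with the NVIDIA driver and a CUDA-enabled PyTorch build installed. It simply confirms how many GPUs are visible and which driver and CUDA versions the node is running before you kick off a larger job:

```python
import subprocess

import torch


def check_cluster_node() -> None:
    """Sanity-check GPU visibility, driver, and CUDA setup on a provisioned node."""
    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA-capable GPU detected; check the driver installation.")

    gpu_count = torch.cuda.device_count()
    cuda_version = torch.version.cuda  # CUDA toolkit version this PyTorch build targets
    driver_version = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        text=True,
    ).splitlines()[0].strip()

    print(f"GPUs visible on this node: {gpu_count}")
    print(f"NVIDIA driver version:     {driver_version}")
    print(f"CUDA toolkit version:      {cuda_version}")
    print(f"Device 0:                  {torch.cuda.get_device_name(0)}")


if __name__ == "__main__":
    check_cluster_node()
```

The same check can be run on every node of the cluster, for example via `srun` under Slurm or as a Kubernetes Job, before launching a full training workload.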