Lambda Cloud now offers on-demand instances built on the NVIDIA HGX H100 platform, each with 8x NVIDIA H100 SXM Tensor Core GPUs, giving users more flexibility to build and fine-tune generative AI models. Compared to the previously available 1x H100 PCIe instance, these systems deliver significantly more compute, better scalability, high-bandwidth GPU-to-GPU communication over NVLink, and greater performance density. Each instance provides 80 GB of VRAM per GPU, 220 vCPUs, 1.8 TB of system RAM, and 24.3 TiB of NVMe SSD storage, making it well suited to training foundation models and LLMs; a quick sanity check of that topology is sketched below. For larger-scale work, Lambda Cloud Clusters offer access to the same GPUs, compute, high-bandwidth networking, and parallel storage at a lower cost than on-prem hardware infrastructure. Lambda plans to keep adding cloud capacity and launching features to make Lambda the best cloud in the world for training AI.
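After launching an instance, it can be useful to confirm what the runtime actually sees before kicking off a training job. Here is a minimal sketch, assuming PyTorch is installed on the instance, that enumerates the GPUs, reports per-GPU memory, and checks pairwise peer access (which on HGX H100 SXM systems runs over NVLink/NVSwitch rather than through host memory):

```python
import torch

def describe_gpus() -> None:
    """Print each visible GPU and check pairwise peer-to-peer access."""
    n = torch.cuda.device_count()
    print(f"Visible GPUs: {n}")  # expect 8 on an 8x H100 SXM instance

    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        mem_gb = props.total_memory / 1024**3
        print(f"  GPU {i}: {props.name}, {mem_gb:.0f} GB")  # expect ~80 GB each

    # Peer access between every GPU pair means tensors can move directly
    # GPU-to-GPU (over NVLink/NVSwitch on HGX H100) instead of via the host.
    for i in range(n):
        for j in range(n):
            if i != j and not torch.cuda.can_device_access_peer(i, j):
                print(f"  No peer access between GPU {i} and GPU {j}")

if __name__ == "__main__":
    describe_gpus()
```

Seeing all eight GPUs with full peer access is a reasonable smoke test that multi-GPU training frameworks (PyTorch DDP, FSDP, and the like) will be able to take advantage of the high-bandwidth GPU-to-GPU fabric.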