Lambda is expanding its cloud offerings to include access to NVIDIA H200 Tensor Core GPUs through Lambda Cloud Clusters, a dedicated GPU cluster service designed for machine learning teams. The collaboration gives customers the high-performance GPUs, networking, and storage required for large-scale distributed training. The H200 provides 141 GB of HBM3e memory, nearly double the capacity of its predecessor, the H100, letting larger models and longer contexts fit on fewer GPUs when applying model parallelism to large language models and generative AI. Its memory bandwidth of 4.8 TB/s is equally important as data sets and model sizes grow, since memory-bound stages of training and inference are limited by how quickly parameters and activations can be streamed from HBM. With this expansion, Lambda customers can access cloud infrastructure built to power their largest and most demanding AI training projects.
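To illustrate why the larger per-GPU memory matters for model parallelism, here is a rough back-of-the-envelope sketch. It assumes fp16/bf16 weights (2 bytes per parameter) and uses a hypothetical flat overhead factor as a stand-in for activations, optimizer state, and fragmentation; real memory planning is considerably more involved.

```python
import math

def gpus_needed(params_billions, bytes_per_param=2, gpu_mem_gb=141, overhead=1.2):
    """Rough count of GPUs needed just to hold a model's weights.

    params_billions: model size in billions of parameters
    bytes_per_param: 2 for fp16/bf16 weights (assumption)
    gpu_mem_gb: per-GPU memory -- 141 GB for H200, 80 GB for H100
    overhead: hypothetical fudge factor for activations and fragmentation
    """
    weight_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-GB
    return math.ceil(weight_gb * overhead / gpu_mem_gb)

# A 70B-parameter model in fp16: weights alone are ~140 GB.
print(gpus_needed(70))                  # on H200s -> 2
print(gpus_needed(70, gpu_mem_gb=80))   # on H100s -> 3
```

Under these simplified assumptions, the same 70B-parameter model spans fewer H200s than H100s, which reduces the amount of inter-GPU communication the parallelism scheme must hide.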