Lambda is collaborating with NVIDIA to deploy its latest accelerated computing solutions, including the NVIDIA GB200 Grace Blackwell Superchip and the NVIDIA B200 and B100 Tensor Core GPUs. These will be available through Lambda's On-Demand and Reserved Cloud services, giving machine learning teams access to the latest NVIDIA GPUs in whichever compute modality they need.

The GB200 Grace Blackwell Superchip pairs two Blackwell GPUs with one NVIDIA Grace CPU. In its rack-scale GB200 NVL72 configuration, which connects 36 superchips (72 GPUs), NVIDIA quotes 1.4 exaFLOPS of AI performance and 30TB of fast memory, along with up to 30X faster real-time LLM inference and 4X faster training for large language models compared with the previous generation.

The Blackwell architecture also introduces six revolutionary technologies that enable organizations to build and run real-time inference on trillion-parameter large language models. To scale across the data center, these systems are paired with NVIDIA's accelerated networking platforms, including Quantum-X800 InfiniBand and Spectrum-4 Ethernet switches with NVIDIA BlueField-3 DPUs.
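To put the headline numbers in perspective, here is a back-of-envelope sketch dividing the rack-scale GB200 NVL72 figures (1.4 exaFLOPS, 30TB of fast memory, 72 GPUs across 36 superchips) by the number of chips. These are NVIDIA's quoted marketing figures for the full rack at low precision, not per-chip specifications, so the per-unit values below are rough illustrations only.

```python
# Back-of-envelope arithmetic (illustrative only): spreading the quoted
# GB200 NVL72 rack-scale figures across its GPUs and superchips.
# 1.4 exaFLOPS and 30TB are NVIDIA's numbers for the whole rack.

NVL72_AI_FLOPS = 1.4e18   # 1.4 exaFLOPS of AI compute, quoted for the rack
NVL72_FAST_MEM = 30e12    # 30 TB of fast memory (HBM plus Grace-attached memory)
GPUS = 72                 # Blackwell GPUs per NVL72 rack
SUPERCHIPS = 36           # GB200 superchips per rack (2 GPUs + 1 Grace CPU each)

per_gpu_pflops = NVL72_AI_FLOPS / GPUS / 1e15          # ~19.4 PFLOPS per GPU
per_superchip_mem_gb = NVL72_FAST_MEM / SUPERCHIPS / 1e9  # ~833 GB per superchip

print(f"~{per_gpu_pflops:.1f} PFLOPS of AI compute per GPU")
print(f"~{per_superchip_mem_gb:.0f} GB of fast memory per GB200 superchip")
```

Dividing this way is a simplification: the memory total mixes GPU HBM with CPU-attached memory, and peak FLOPS depend on the numeric precision used, but it shows the scale of resources each superchip contributes to the rack.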