DeepSpeed-Chat's 3-step training example was used to benchmark NVIDIA H100 SXM5 and A100 SXM4 instances on Lambda's cloud computing platform. In these tests, the H100 SXM5 instance trained 2.5x-3.1x faster than the A100 SXM4 instance. Lambda attributes this performance to NVIDIA's Tensor Core GPUs combined with rail-optimized networking on its cloud clusters, a design intended to scale efficiently to thousands of GPUs.
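The speedup figures quoted above are simple wall-clock ratios. As a minimal sketch, the calculation looks like the following; the per-step timings used here are hypothetical placeholders for illustration, not Lambda's published measurements:

```python
# Hedged sketch: deriving a speedup ratio from per-step training times.
# The numbers below are hypothetical, not the actual benchmark data.

def speedup(a100_seconds: float, h100_seconds: float) -> float:
    """Speedup of H100 over A100 = A100 time / H100 time."""
    return a100_seconds / h100_seconds

# Hypothetical per-training-step wall-clock times, in seconds.
a100_step_time = 3.1
h100_step_time = 1.0

print(f"{speedup(a100_step_time, h100_step_time):.1f}x")
```

With these placeholder timings the ratio works out to 3.1x, matching the upper end of the reported range; the observed 2.5x-3.1x spread reflects that the speedup varies across the three training steps and model sizes.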