The text compares the cost of buying versus renting a GPU server for Deep Learning workloads. For a like-for-like comparison, a server with hardware similar to AWS's p3dn.24xlarge is selected: the Lambda Hyperplane, a Tesla V100 server. The study finds the purchased Tesla V100 server to be 2.6% faster than the p3dn.24xlarge for FP32 training and 3.2% faster for FP16 training. The TCO (Total Cost of Ownership) analysis further shows that the on-prem Hyperplane costs less than the p3dn.24xlarge over a 3-year period, with savings ranging from $69,441 to $184,008. The study also points to the benefits of managing software and hardware in-house and to the lower cost of purchasing a server upfront compared with renting it on AWS. Finally, the results suggest that while cloud services offer an ease of use that suits real-time applications, Deep Learning training workloads may not benefit much from these advantages.
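To make the TCO comparison concrete, the sketch below reproduces the basic arithmetic: sum the server's upfront purchase price and recurring hosting costs over three years, and compare that with the hourly cloud bill over the same period. All dollar inputs in the code (server_price, colo_per_month, the hourly rates) are illustrative assumptions, not the study's actual quotes; the savings range quoted above comes from the study itself.

```python
# Minimal sketch of a 3-year TCO comparison, assuming hypothetical prices.
# Replace the placeholder inputs with an actual server quote, colocation/power
# rate, and current AWS pricing before drawing any conclusions.

HOURS_PER_YEAR = 24 * 365
YEARS = 3

def on_prem_tco(server_price: float, colo_per_month: float, years: int = YEARS) -> float:
    """Upfront hardware cost plus recurring hosting (colocation/power) costs."""
    return server_price + colo_per_month * 12 * years

def cloud_tco(hourly_rate: float, utilization: float = 1.0, years: int = YEARS) -> float:
    """Cost of renting the instance for the same period at a given utilization."""
    return hourly_rate * HOURS_PER_YEAR * years * utilization

if __name__ == "__main__":
    # Placeholder inputs (assumptions, not the article's numbers):
    on_prem = on_prem_tco(server_price=120_000, colo_per_month=600)
    on_demand = cloud_tco(hourly_rate=31.22)   # assumed on-demand rate; verify current pricing
    reserved = cloud_tco(hourly_rate=18.00)    # hypothetical 3-yr reserved effective rate

    print(f"On-prem 3-yr TCO:        ${on_prem:,.0f}")
    print(f"AWS on-demand 3-yr TCO:  ${on_demand:,.0f}")
    print(f"AWS reserved 3-yr TCO:   ${reserved:,.0f}")
    print(f"Savings vs on-demand:    ${on_demand - on_prem:,.0f}")
    print(f"Savings vs reserved:     ${reserved - on_prem:,.0f}")
```

Note that the outcome hinges mostly on utilization: a rented instance that sits idle still accrues hourly charges, while an owned server's cost is dominated by the one-time purchase, so the higher the sustained GPU utilization, the more the comparison favors buying.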