LLM progress has been slow since GPT-4 was trained several years ago, primarily because the compute allocated to model training has stagnated. However, dramatically more powerful training clusters are now being built, drawing roughly 31 times more power than the cluster that trained GPT-4, which could yield significant improvements in performance. These new clusters will be extremely expensive and consume enormous amounts of energy, raising questions about whether they are worthwhile investments for the companies building them. Future development of more powerful models is also likely to be constrained by data quality and availability, as well as diminishing returns from additional compute.