The authors of a recent blog post have released a fine-tuned Neo4j Text2Cypher (2024) model, demonstrating the benefits of fine-tuning foundational models on the Neo4j Text2Cypher (2024) Dataset. The dataset pairs natural language questions with Cypher queries, and the authors found that fine-tuning can significantly improve performance over baseline models: their best fine-tuned model outperformed its baselines by a considerable margin on both the translation-based Google BLEU score and the execution-based ExactMatch score. The authors also caution about risks and pitfalls of fine-tuning, including shifts in data distribution and questions of access to the training and test sets. Overall, the release highlights how Text2Cypher tasks can be enhanced through fine-tuning.
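To make the two evaluation styles concrete, the sketch below implements simplified stand-ins for them in pure Python: an ExactMatch check (normalized string equality between the generated and reference Cypher) and a minimal Google BLEU (GLEU)-style score, which takes the minimum of n-gram precision and recall up to 4-grams. These are illustrative reimplementations for intuition, not the exact scoring code used in the blog post, and the normalization choices (lowercasing, whitespace collapsing) are assumptions.

```python
from collections import Counter


def _ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def exact_match(reference: str, hypothesis: str) -> bool:
    """Simplified ExactMatch: compare queries after collapsing
    whitespace and lowercasing (an assumed normalization)."""
    norm = lambda s: " ".join(s.split()).lower()
    return norm(reference) == norm(hypothesis)


def gleu(reference: str, hypothesis: str, max_n: int = 4) -> float:
    """Minimal GLEU-style score: min(precision, recall) over the
    pooled 1..max_n n-gram counts of reference and hypothesis."""
    ref_counts, hyp_counts = Counter(), Counter()
    for n in range(1, max_n + 1):
        ref_counts.update(_ngrams(reference.split(), n))
        hyp_counts.update(_ngrams(hypothesis.split(), n))
    overlap = sum((ref_counts & hyp_counts).values())
    precision = overlap / max(sum(hyp_counts.values()), 1)
    recall = overlap / max(sum(ref_counts.values()), 1)
    return min(precision, recall)


if __name__ == "__main__":
    ref = "MATCH (n:Person) RETURN n.name"
    hyp = "MATCH (n:Person) RETURN n.name"
    print(exact_match(ref, hyp), gleu(ref, hyp))  # identical queries score 1.0
```

In practice the execution-based ExactMatch used in such evaluations compares query *results* against a live database rather than query strings, so this string-level version is only a rough proxy.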