The author describes their experience with Generative AI and the challenges they ran into while building an LLM-based system. They share lessons on improving LLM performance, particularly when handling complex data sources such as long documents and tables. Key takeaways include the importance of clean data representation, the limited nature of model attention, and the need for a well-designed prompting strategy. The author also highlights the limitations of current LLMs in handling dates and nuances of text representation, and suggests using GPT-3.5 or Claude, with their large context windows, to improve performance on long documents. Finally, they discuss the importance of human-computer symbiosis in building effective AI systems and describe creating a custom "AI expert" that can be reused across applications.
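The points about long documents and limited model attention are often addressed in practice with a chunk-then-combine strategy. The sketch below is a minimal illustration of that idea, not the author's actual implementation: `call_llm` is a hypothetical stand-in for whatever chat-completion client is used, and the chunk sizes are illustrative.

```python
from typing import Callable, List

def chunk_text(text: str, max_chars: int = 8000, overlap: int = 200) -> List[str]:
    """Split text into overlapping character windows that fit within a model's context."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # small overlap so facts spanning a boundary aren't lost
    return chunks

def summarize_long_document(text: str, call_llm: Callable[[str], str]) -> str:
    """Summarize each chunk independently, then merge the partial summaries."""
    # call_llm is hypothetical: any function that takes a prompt and returns model text.
    partial = [
        call_llm(
            "Summarize the following section, preserving dates and table values verbatim:\n\n"
            + chunk
        )
        for chunk in chunk_text(text)
    ]
    return call_llm(
        "Combine these section summaries into one coherent summary:\n\n" + "\n\n".join(partial)
    )
```

With a large-context model such as Claude, the chunking step may be unnecessary for many documents, but the same structure still helps when inputs exceed even a large window.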