The article discusses the surprising limitations of large language models (LLMs) in spelling, using Google's Bard as an example. Despite impressive text-generation capabilities, these models struggle with simple tasks such as counting the letters in a word, because LLMs produce responses from patterns observed in vast amounts of text rather than by querying a database of verified facts. The article also highlights the inherent limits of even advanced models, emphasizing that they remain purely language-based and lack an understanding of spatial concepts or multi-sensory context. To improve their ability to understand and generate language, it argues, future AI development should focus on general-purpose models trained through reinforcement learning, capable of learning on their own rather than relying solely on provided data.
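To make the letter-counting point concrete, here is a minimal sketch (the `count_letter` helper is hypothetical, not from the article) showing that counting letters is a trivial, deterministic string operation — in contrast to an LLM, which predicts likely text from learned patterns rather than executing such a procedure:

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a word.

    A deterministic computation that always yields the correct answer,
    unlike an LLM's pattern-based text prediction.
    """
    return word.lower().count(letter.lower())

print(count_letter("lollapalooza", "l"))  # prints 4
```

A conventional program gets this right every time by construction; an LLM can answer such questions incorrectly because it never performs the counting step at all.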