The conversation about ChatGPT and large language models (LLMs) is expected to continue as research and development in the field advance. At their core, LLMs consist of billions of trainable parameters that capture the statistical patterns of language through exposure to vast text datasets. These models rely on deep learning techniques and transformer architectures to process input and generate coherent, contextually relevant text. Various LLMs, such as GPT-3, BERT, Bard, PaLM 2, T5, LaMDA, and Turing NLG, are being integrated with domain-specific technologies to provide solutions in machine translation, sentiment analysis, and more. Large language models also make search results more accurate, and they are expected to transform the many language-based fields they touch.
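To make the transformer idea above a little more concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation that lets each token in a sequence weigh every other token when producing its output. The sequence length, embedding size, and weight matrices are all illustrative toy values, not taken from any particular model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query row attends to all key rows; output is a weighted mix of values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity, scaled
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                             # toy sizes for illustration
X = rng.normal(size=(seq_len, d_model))             # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, attn = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape, attn.shape)                        # (4, 8) (4, 4)
```

In a full transformer this operation is repeated across many attention heads and layers, with the weight matrices learned during training rather than sampled at random.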