The article presents a comparative analysis of the Mistral 7B model against OpenAI's GPT models and BAAI's models in the context of Retrieval-Augmented Generation (RAG) applications. RAG pipelines enhance LLMs by supplying external 'research' to inform their responses. The article explains RAG pipelines and open-source AI models, introduces Mistral 7B, and discusses BGE, a general-purpose embedding model pre-trained with RetroMAE that can be fine-tuned. A comparative methodology is then presented, and the results are discussed in terms of context quality and text-generation quality. The article concludes that choosing the right embedding model depends on the specific application's requirements, and highlights the potential of newer models like Mistral 7B to close the gap with OpenAI's GPT models.
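
As a rough illustration of the kind of pipeline the article evaluates, the sketch below wires a BGE embedding model into a simple retrieval step and builds a grounded prompt for a generator such as Mistral 7B. The model checkpoint, toy corpus, and top-k value are illustrative assumptions, not the article's actual setup.

```python
# A minimal RAG-retrieval sketch: BGE embeddings for retrieval, with the
# retrieved context assembled into a prompt for a generator such as
# Mistral 7B. Checkpoint names and the corpus are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

# 1. Embed a small document corpus with a BGE model (assumed checkpoint).
embedder = SentenceTransformer("BAAI/bge-small-en-v1.5")
corpus = [
    "Mistral 7B is an open-source 7-billion-parameter language model.",
    "RAG pipelines retrieve external documents to ground LLM answers.",
    "BGE embeddings are pre-trained with RetroMAE and can be fine-tuned.",
]
doc_vecs = embedder.encode(corpus, normalize_embeddings=True)

# 2. Embed the query and retrieve the most similar documents.
query = "What is Mistral 7B?"
query_vec = embedder.encode([query], normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec          # cosine similarity (vectors are normalized)
top_k = np.argsort(scores)[::-1][:2]   # indices of the two best matches
context = "\n".join(corpus[i] for i in top_k)

# 3. Build a grounded prompt for the generator (e.g. Mistral 7B served
#    locally or via an API); the generation call itself is omitted here.
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}\nAnswer:"
)
print(prompt)
```

Swapping the embedder (e.g. a BGE variant versus an OpenAI embedding endpoint) while holding the rest of the pipeline fixed is the kind of controlled comparison the article's methodology describes.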