This post walks through building a conversational Retrieval-Augmented Generation (RAG) application without using OpenAI. The tech stack includes LangChain for orchestration, Milvus as the vector database, and Hugging Face for embedding models. The process covers setting up the conversational RAG stack, starting a conversation, asking questions, and testing the app's memory retention. The example uses Nebula, a conversational LLM created by Symbl AI, in place of OpenAI's GPT-3.5.