A vector database helps add context to generative AI prompts, and pairing one with LangChain, a popular GenAI framework, lets developers build more accurate applications in less time. A vector database converts data into mathematical vector embeddings, so queries can be answered by finding approximate matches in a multi-dimensional vector space. LangChain composes calls to large language models (LLMs) with other application components using a simple programming syntax, and it offers several components built for retrieval-augmented generation (RAG).

For development, a vector database can run locally or in the cloud. Developers can use Docker containers to spin up local instances, or opt for a cloud-hosted service such as one running on Amazon Web Services or Astra DB. Running locally can save money and simplify the dev stack, but local development has its own challenges: it needs a transition plan for moving to production, and large datasets may demand more resources than a local machine can load and run. Cloud-hosted services, in turn, require careful management to avoid expensive cost overruns. An alternative is a serverless vector database like Astra DB, which offers an affordable option and integrates seamlessly with LangChain via the Astra DB connector.
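To make the embedding idea concrete, here is a toy sketch (plain Python with NumPy, using made-up 3-dimensional vectors rather than real embeddings, which typically have hundreds of dimensions) of how "approximate matching" boils down to ranking stored vectors by a distance metric such as cosine similarity:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: 1.0 means the vectors point the same way.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative vectors only; a real vector database stores embeddings
# produced by an embedding model.
query = np.array([0.9, 0.1, 0.3])
stored = {
    "doc about LLMs":      np.array([0.8, 0.2, 0.4]),
    "doc about databases": np.array([0.1, 0.9, 0.2]),
}

# Rank stored documents by similarity to the query vector.
for name, vec in sorted(stored.items(),
                        key=lambda kv: cosine_similarity(query, kv[1]),
                        reverse=True):
    print(f"{name}: {cosine_similarity(query, vec):.3f}")
```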
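And as a rough illustration of how the pieces fit together, the sketch below wires a retriever backed by the Astra DB connector into a simple LangChain RAG chain. It is a minimal sketch, assuming the langchain-astradb, langchain-openai, and langchain-core packages are installed, that an OpenAI API key plus placeholder ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN environment variables hold your credentials, and that the collection name and model name are stand-ins to adjust for your own setup:

```python
import os

from langchain_astradb import AstraDBVectorStore
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Embed documents and store the vectors in a serverless Astra DB collection.
vector_store = AstraDBVectorStore(
    embedding=OpenAIEmbeddings(),
    collection_name="docs",  # placeholder collection name
    api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
)
vector_store.add_texts([
    "LangChain composes LLM calls with other app components.",
    "A vector database stores embeddings for similarity search.",
])

# Retrieve the closest matches and feed them into the prompt as context.
retriever = vector_store.as_retriever(search_kwargs={"k": 2})
prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")  # placeholder model name
    | StrOutputParser()
)
print(chain.invoke("What does a vector database store?"))
```

Because the vector store lives in the cloud, swapping it for a locally hosted database during development (or vice versa) only changes how the retriever is constructed; the rest of the chain stays the same.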