As companies integrate AI into their fabric, the database landscape is witnessing the emergence of vector databases, which are transforming the way data is handled. Vector libraries and vector databases are becoming increasingly popular as companies navigate this shift, driven by the meteoric rise of generative AI and Large Language Models (LLMs). Retrieval Augmented Generation (RAG) is a software pattern in which raw data is converted into vectors (embeddings) so that meaning-based search can retrieve the most relevant context for an LLM.

Companies now face a dizzying set of choices for building enterprise generative AI applications on vector stores, which come in three broad categories: vector libraries, vector-only databases, and enterprise databases that also support vectors. Vector libraries, such as FAISS, NMSLIB, ANNOY, and ScaNN, offer efficient similarity search and clustering capabilities but lack comprehensive database functionality. Vector-only databases like Pinecone, Weaviate, Milvus, ChromaDB, Qdrant, and Vespa are designed for scalable, high-performance similarity search in applications like recommendation systems and AI-powered search. Enterprise databases with vector support, such as Elasticsearch, MongoDB, SingleStoreDB, Supabase, Neo4j, Redis, and PostgreSQL, offer broader data handling capabilities, versatility in RAG, and real-time computation and data serving.

Companies need to evaluate these options against their specific use cases, including support for multiple data types, search methodologies, data freshness and latency, transactional versus analytics workloads, the path from prototype to production, and other requirements. With the emergence of vector databases, developers are now faced with a new set of choices for building generative AI applications at enterprise scale. A minimal sketch of the meaning-based search at the heart of RAG follows below.
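To make the retrieval side of RAG concrete, here is a minimal sketch of meaning-based search using FAISS, one of the vector libraries named above. The embed() helper and the sample documents are hypothetical stand-ins: a real application would use an embedding model so that texts with similar meanings map to nearby vectors.

```python
# Minimal sketch of meaning-based (vector) search with FAISS.
# embed() is a hypothetical stand-in for a real embedding model.

import numpy as np
import faiss

DIM = 8  # real embedding models produce hundreds of dimensions

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding: a deterministic pseudo-random vector per text.
    Replace with a real model so similar meanings land close together."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(DIM, dtype=np.float32)

documents = [
    "Invoice processing runbook",
    "Quarterly sales report",
    "Customer support FAQ",
]

# Build an exact L2 index over the document embeddings.
index = faiss.IndexFlatL2(DIM)
index.add(np.stack([embed(d) for d in documents]))

# Retrieve the top-2 nearest documents for a query; in a RAG pipeline,
# these would be passed to the LLM as grounding context.
query = embed("how do I handle an invoice?")
distances, ids = index.search(query.reshape(1, -1), k=2)
print([documents[i] for i in ids[0]])
```

The same retrieval step looks much the same against a vector-only database or a vector-capable enterprise database; those options become relevant once durability, metadata filtering, freshness, and scale matter beyond what an in-memory library index provides.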