Vector search alone often falls short for RAG solutions because text embeddings struggle with context sensitivity, contextual meaning, and evolving language use. Content retrieved through embedding-based similarity search can therefore degrade the accuracy and correctness of generation in Large Language Models (LLMs). Documented challenges include context sensitivity, unrelated noise, simple mathematical reasoning, information integration, negative rejection, conflicting-knowledge detection, and counterfactual robustness. Some of these challenges can be mitigated by fine-tuning a domain-specific embedding model or by using advanced retrieval strategies that combine vector search with other search techniques, such as keyword search. Dedicated test cases and evaluation metrics are needed to verify that a RAG solution actually addresses these limitations.
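
As a rough sketch of the hybrid-retrieval idea mentioned above (not any particular library's API; the function names, toy documents, and embeddings are illustrative assumptions), one common approach ranks documents separately by embedding similarity and by keyword overlap, then merges the two rankings with reciprocal rank fusion:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, doc):
    """Naive lexical score: count of overlapping query terms.
    A real system would use BM25 or similar."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def rrf(rankings, k=60):
    """Reciprocal rank fusion: merge several rankings of doc ids.
    Each doc scores 1 / (k + rank) per ranking it appears in."""
    scores = Counter()
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return [doc_id for doc_id, _ in scores.most_common()]

def hybrid_search(query, query_vec, docs, doc_vecs):
    """Combine a vector ranking and a keyword ranking via RRF."""
    ids = range(len(docs))
    vec_rank = sorted(ids, key=lambda i: -cosine(query_vec, doc_vecs[i]))
    kw_rank = sorted(ids, key=lambda i: -keyword_score(query, docs[i]))
    return rrf([vec_rank, kw_rank])

# Toy corpus with hypothetical 2-d "embeddings"
docs = ["apple pie recipe", "banana bread recipe", "car engine repair"]
doc_vecs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(hybrid_search("apple recipe", [1.0, 0.0], docs, doc_vecs))
```

Rank fusion is used here because vector and keyword scores live on different scales; fusing ranks rather than raw scores avoids having to calibrate the two scoring functions against each other.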