Vectara's RAG-as-a-service provides an easy-to-use platform for building RAG applications, but measuring their performance and quality can be challenging. To address this, Ofer Mendelevitch demonstrates RAGAs, an open-source framework for evaluating RAG pipelines that includes metrics such as faithfulness, answer similarity, answer relevancy, and answer correctness. The author shows how to use RAGAs with Vectara's RAG-as-a-service, generating a synthetic test dataset with RAGAs and running evaluations against it. By tuning retrieval and generation parameters such as the hybrid-search lambda, MMR reranking, and the generation prompt, the author shows that these settings can significantly improve answer correctness, highlighting the importance of measuring and optimizing RAG pipeline performance.
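
To make the evaluation loop concrete, here is a minimal sketch of scoring a RAG pipeline with RAGAs on the four metrics named above. It assumes a recent ragas release (metric names and dataset column names vary across versions), and `query_vectara` is a hypothetical helper standing in for a call to Vectara's query API; it is not part of either library.

```python
# Minimal RAGAs evaluation sketch (ragas ~0.1.x-style API; column and
# metric names may differ in other versions).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_similarity,
    answer_relevancy,
    answer_correctness,
)

def query_vectara(question: str) -> tuple[str, list[str]]:
    """Hypothetical wrapper around Vectara's query API: returns the
    generated answer and the retrieved context snippets."""
    raise NotImplementedError  # replace with a real Vectara query call

# A tiny illustrative test set; in practice this would come from
# RAGAs' synthetic test-data generation, as described in the article.
questions = ["What does lambda control in Vectara's hybrid search?"]
ground_truths = ["Lambda blends keyword and neural retrieval scores."]

answers, contexts = [], []
for q in questions:
    answer, snippets = query_vectara(q)
    answers.append(answer)
    contexts.append(snippets)

dataset = Dataset.from_dict({
    "question": questions,
    "answer": answers,
    "contexts": contexts,
    "ground_truth": ground_truths,
})

# evaluate() scores each example with every metric and returns
# aggregate results; rerun this after each parameter change (lambda,
# MMR, prompt) to compare configurations.
result = evaluate(
    dataset,
    metrics=[faithfulness, answer_similarity, answer_relevancy, answer_correctness],
)
print(result)
```

Wrapping the pipeline call in a single helper like `query_vectara` makes the sweep over retrieval and generation settings straightforward: each candidate configuration produces a fresh dataset, and the RAGAs scores provide a consistent basis for comparison.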