Text-based AI systems like LLMs have a significant vulnerability: they can't reliably distinguish between the content they process and the instructions they follow. This weakness enables "prompt injection" attacks, in which a user crafts input that manipulates the system into revealing sensitive information. One defense is to limit what an LLM can access based on the user's permissions, keeping private data behind explicit function calls rather than inside the prompt. OpenAI has supported this pattern through its Function Calling feature since June 2023. By integrating Algolia's search capabilities, developers can build AI support agents that answer questions from general news content and a customer's own order information while keeping everyone else's data out of reach.
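To make the pattern concrete, here is a minimal sketch using the OpenAI Python SDK. The `get_order_status` function, the `ORDERS` store, and the user ID are hypothetical stand-ins (in a real system the lookup might query an Algolia index filtered by user); the key idea is that the authenticated user's identity comes from the session, not from the model, so injected instructions can't widen the model's access.

```python
import json
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

# Hypothetical order store; in practice this would be a database or a
# permission-filtered Algolia index.
ORDERS = {"A-1001": {"user_id": "u42", "status": "shipped"}}

def get_order_status(order_id: str, authenticated_user_id: str) -> str:
    """Return order status only if the order belongs to the caller."""
    order = ORDERS.get(order_id)
    if order is None or order["user_id"] != authenticated_user_id:
        # Same response for "missing" and "not yours": no data leaks.
        return json.dumps({"error": "order not found"})
    return json.dumps({"order_id": order_id, "status": order["status"]})

# Describe the function to the model; note the schema exposes only
# order_id -- the user identity is never a model-controlled parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of one of the user's own orders.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is my order A-1001?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)

# If the model decided to call the function, execute it server-side
# with the identity from the authenticated session ("u42" here).
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
result = get_order_status(args["order_id"], authenticated_user_id="u42")
print(result)
```

The design choice worth noting: the model only proposes arguments; your server executes the call and enforces permissions, so even a successfully injected prompt can retrieve nothing beyond what the logged-in user is already allowed to see.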