Retrieval-Augmented Generation (RAG) systems, which integrate external information sources to enhance LLMs' capabilities, struggle with the inherent ambiguity of human language, particularly sarcasm. This can lead to misinterpretations and inaccurate responses, limiting their reliability in real-world scenarios. The article examines this challenge and proposes a solution called Reading with Intent: prompting LLMs to recognize the emotional intent behind a passage and attaching binary tags that mark each retrieved passage as sarcastic or not. Experiments show significant improvements in answering questions over sarcasm-laden text across several LLM families. Future directions include stronger sarcasm detection, multi-class intent tags, and instruction tuning for better understanding of emotionally charged language.
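
The passage-tagging idea can be illustrated with a minimal sketch. The helper names (`detect_sarcasm`, `build_reading_with_intent_prompt`), the keyword-based detector, and the exact prompt wording are assumptions made for illustration, not the article's actual implementation; a real system would use a trained sarcasm classifier and then send the assembled prompt to an LLM.

```python
from typing import List


def detect_sarcasm(passage: str) -> bool:
    """Placeholder sarcasm detector.

    A real pipeline would use a trained classifier; this keyword
    heuristic only stands in so the sketch runs end to end.
    """
    cues = ("oh great", "yeah right", "what a surprise", "sure, because")
    return any(cue in passage.lower() for cue in cues)


def build_reading_with_intent_prompt(question: str, passages: List[str]) -> str:
    """Prepend a binary intent tag to each retrieved passage and instruct the
    model to account for the writer's intent when answering."""
    tagged = []
    for i, passage in enumerate(passages, start=1):
        tag = "[SARCASTIC]" if detect_sarcasm(passage) else "[SINCERE]"
        tagged.append(f"Passage {i} {tag}: {passage}")
    context = "\n".join(tagged)
    return (
        "Some passages below may be sarcastic; their literal wording can be "
        "the opposite of the writer's intent. Use the intent tags when "
        "deciding what each passage actually supports.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )


if __name__ == "__main__":
    passages = [
        "Oh great, another phone whose battery dies by noon.",
        "The phone's battery lasted roughly 14 hours in our tests.",
    ]
    prompt = build_reading_with_intent_prompt(
        "Does the phone have good battery life?", passages
    )
    print(prompt)  # This prompt would then be passed to the LLM of choice.
```

In this sketch the tags are binary, matching the approach described above; the multi-class intent tags mentioned as future work would simply replace the two labels with a richer set (e.g., irony, frustration, praise).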