The National Institute of Standards and Technology (NIST) aims to create a plan for Federal engagement that supports reliable, robust, and trustworthy artificial intelligence technologies by establishing common technical standards. However, AI must be guided not only by technical standards but also by ethical ones, because context, the peripheral information relevant to a specific AI application, is essential to sound decision-making.

Graph technology offers a state-of-the-art way to add and leverage context from data, providing a powerful foundation for AI. By incorporating context, AI systems can make more situationally appropriate, human-like decisions, support explainability and transparency, and avoid biased outcomes.

The absence of context in AI standards leads to subpar outcomes, uninterpretable predictions, and reduced accountability, as illustrated by Microsoft's Twitter bot Tay, which learned offensive language from users, and Amazon's AI-powered recruiting tool, which showed bias against women candidates. Understanding the larger frame of reference is critical for recognizing when an AI project has gone off course, and explainability is essential for accountability, particularly in nuanced situations.
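As a minimal sketch of the idea that a graph can attach context to data and make a decision traceable, consider the following illustration. All entity names, relations, and the `ContextGraph` class itself are hypothetical assumptions for demonstration, not anything proposed in the source.

```python
from collections import defaultdict

class ContextGraph:
    """A toy directed, labeled graph for storing contextual facts."""

    def __init__(self):
        # adjacency list: node -> list of (relation, neighbor) edges
        self.edges = defaultdict(list)

    def add_fact(self, subject, relation, obj):
        """Record a contextual fact as a labeled edge."""
        self.edges[subject].append((relation, obj))

    def context_of(self, node):
        """Return the directly connected facts that contextualize a node,
        which a reviewer could inspect to explain a decision."""
        return [f"{node} --{rel}--> {obj}" for rel, obj in self.edges[node]]

# Hypothetical facts around a hiring recommendation:
g = ContextGraph()
g.add_fact("candidate_42", "applied_for", "engineering_role")
g.add_fact("candidate_42", "has_skill", "python")
g.add_fact("engineering_role", "requires", "python")

for fact in g.context_of("candidate_42"):
    print(fact)
```

Because each recommendation can be traced back to explicit, labeled relationships rather than an opaque score, this kind of structure supports the explainability and accountability goals discussed above.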