Model explainability is a crucial area of machine learning: it helps us understand how models make decisions and surfaces hidden biases or misclassifications. Class Activation Mapping (CAM) methods, such as Grad-CAM, produce heatmaps over images that show where a model focuses when predicting a specific class. Used together with FiftyOne, these heatmaps make it easier to inspect a model's predictions and spot potential biases. By leveraging CAM methods, we can verify that our models attend to relevant features rather than environmental cues, and improve their performance accordingly.
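To make the idea concrete, here is a minimal NumPy sketch of the core Grad-CAM computation: channel weights are obtained by global-average-pooling the gradients of the class score with respect to a convolutional layer's activations, and the heatmap is the ReLU of the weighted sum of those activation maps. The activations and gradients below are random placeholders; in practice they would come from a forward and backward pass through a real model.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap.

    activations: (K, H, W) feature maps from the target conv layer
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    """
    # Channel importance weights: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))  # shape (K,)
    # Weighted combination of feature maps, then ReLU to keep only
    # evidence that increases the class score
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalize to [0, 1] so it can be rendered as a heatmap overlay
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with hypothetical activations and gradients
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 7, 7))  # 8 channels, 7x7 spatial grid
G = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(A, G)
print(heatmap.shape)
```

The resulting low-resolution heatmap is typically upsampled to the input image size and overlaid on it; in FiftyOne, such per-pixel maps can be stored and visualized alongside the samples they explain.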