We inspect ImageNet, one of the most widely used and most cited computer vision datasets, and quickly surface data quality errors while training a model on it. The dataset spans 1,000 classes and over 1.2 million samples, yet it has seen few updates since the 2012 ILSVRC release. Using Galileo to debug it, we find numerous errors, including mislabeled images such as 'tigers' annotated as 'tiger cats'. Our findings also highlight class imbalance across the dataset and point to the need for augmentation and additional data to improve model performance. We further identify gaps in the training data that may need to be patched before a model is deployed to production. Overall, the analysis shows that simple labeling mistakes can have drastic effects on both training and performance estimation, underscoring the importance of data quality and robustness.
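
To make this concrete, here is a minimal sketch of one common way to surface likely label errors: rank samples by their per-sample loss under a pretrained model and manually review the highest-loss ones. This is an illustrative stand-in for the kind of inspection Galileo automates, not its actual API; the torchvision ResNet-50 weights and the local ImageNet path are assumptions.

```python
# Surface likely label errors by ranking samples by per-sample loss.
# Illustrative sketch only -- not Galileo's API. Assumes a local copy of
# the ImageNet validation split at /data/imagenet/val, arranged as one
# folder per WordNet ID (e.g. n02129604/ for 'tiger'), so ImageFolder's
# alphabetical class indices line up with torchvision's pretrained model.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()

dataset = datasets.ImageFolder("/data/imagenet/val", transform=weights.transforms())
loader = DataLoader(dataset, batch_size=64, num_workers=4)  # no shuffle: keep order
paths = [path for path, _ in dataset.samples]

losses = []
with torch.no_grad():
    for images, labels in loader:
        logits = model(images)
        # Per-sample cross-entropy: a high loss means the model strongly
        # disagrees with the assigned label -- a common mislabel signal.
        losses.append(F.cross_entropy(logits, labels, reduction="none"))

losses = torch.cat(losses)
top_losses, top_idx = losses.topk(25)
for loss, i in zip(top_losses.tolist(), top_idx.tolist()):
    print(f"{loss:6.2f}  {paths[i]}")  # candidates for manual review
```

High loss alone is a noisy signal, since genuinely hard samples also score high, but confidence-based heuristics like this one underlie much of modern label-error detection and are a cheap first pass before deeper tooling.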