Statistical distance metrics quantify how far apart two distributions are, which makes them extremely useful in machine learning observability. Data problems can arise from sudden pipeline failures or from long-term drift in feature inputs, and statistical distance measures give teams an early indication that the data feeding a model has changed, along with a starting point for troubleshooting. Real-world examples include data indexing errors, bad text handling, and software engineering changes that alter the meaning of a field. Applying statistical distance checks to model inputs, outputs, and actuals lets teams get in front of major model issues before they affect business outcomes.

The reference distribution can be fixed or moving, depending on what the team is trying to catch, and different types of distance checks are valuable for catching different types of issues. The PSI (Population Stability Index) metric is a good example of a statistical distance measure with real-world applications in the finance industry, particularly for detecting changes in feature distributions that might make them less valid as inputs to models.
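Since PSI is named above, a minimal sketch may help make the idea concrete. PSI compares the per-bin proportions of a reference sample and a current sample: PSI = sum over bins of (current_i - reference_i) * ln(current_i / reference_i). The code below is an illustrative implementation, assuming NumPy and quantile-based binning on the reference distribution; the function name `psi` and the epsilon floor on empty bins are choices made for this sketch, not taken from the source.

```python
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a current sample."""
    # Bin edges from reference quantiles, so each reference bin holds
    # roughly equal mass; outer edges are widened to catch outliers.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    # Per-bin proportions for each sample.
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)

    # Floor empty bins at a small epsilon to avoid log(0) and division by zero
    # (an illustrative choice; other smoothing schemes are also common).
    eps = 1e-4
    ref_pct = np.clip(ref_pct, eps, None)
    cur_pct = np.clip(cur_pct, eps, None)

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Example: a mean shift between "training" and "production" data.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 10_000)   # stand-in for training-time feature values
cur = rng.normal(0.5, 1.0, 10_000)   # stand-in for drifted production values
print(psi(ref, cur))
```

A common rule of thumb in credit-risk applications treats PSI below 0.1 as stable, 0.1 to 0.25 as a moderate shift worth watching, and above 0.25 as a major shift warranting investigation, though appropriate thresholds vary by use case.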