The Model’s Shipped; What Could Possibly Go Wrong?
Model observability plays a crucial role in detecting, diagnosing, and explaining regressions in deployed machine learning models. Common failure modes include Concept Drift, where the relationship a model learned between its inputs and outputs changes over time; Data Drift (or Feature Drift), where the distribution of model inputs shifts; Training-Production Skew, where the data a model was trained on differs from the data it sees in production; and Cascading Model Failures, where one model's outputs feed another, so an upstream regression propagates downstream. Outliers can also be problematic, since they may represent unhandled edge cases or adversarial inputs. Monitoring tools help teams surface these issues and improve their models after deployment.
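As a concrete illustration of the drift cases, here is a minimal sketch of one widely used drift statistic, the Population Stability Index (PSI), which compares a feature's training (baseline) distribution against its production distribution. The data, bin count, and thresholds below are illustrative assumptions, not details from the original article.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index of one numeric feature: compares a
    baseline (training) sample against a production sample."""
    # Fix the bin edges from the baseline distribution...
    edges = np.histogram_bin_edges(expected, bins=bins)
    # ...then widen the outer edges so out-of-range production values still count.
    edges[0], edges[-1] = -np.inf, np.inf
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins to avoid log(0) and division by zero.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Hypothetical data: the production feature has drifted upward by 0.5.
rng = np.random.default_rng(seed=0)
train = rng.normal(loc=0.0, scale=1.0, size=10_000)
prod = rng.normal(loc=0.5, scale=1.0, size=10_000)

print(f"PSI = {psi(train, prod):.3f}")
```

A common rule of thumb reads a PSI below 0.1 as stable, 0.1 to 0.25 as a moderate shift, and above 0.25 as a shift worth investigating.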
Company: Arize
Date published: Feb. 22, 2021
Author(s): Aparna Dhinakaran
Word count: 1564
Language: English