A Simple Adjustment Improves Out-of-Distribution Detection for Any Classifier
This article presents a novel and simple adjustment to a classifier's predicted probabilities that can improve Out-of-Distribution (OOD) detection for models trained on real-world data. The approach is theoretically motivated and takes only a couple of lines of code: the model's predicted probabilities are rescaled using per-class confidence thresholds estimated from the training data. The adjusted OOD detection procedure remains simple to implement in practical deployments, and experimental results show it improves the performance of both Entropy- and MSP-based OOD detection scores.
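A minimal sketch of what this kind of adjustment might look like, based on the description above: per-class thresholds are estimated as the average self-confidence of training examples in each class, each predicted probability is rescaled by its class threshold, and the usual MSP or Entropy OOD score is computed on the renormalized probabilities. Function names and the exact rescaling shown here are assumptions for illustration, not the article's exact implementation.

```python
import numpy as np

def class_confidence_thresholds(pred_probs, labels):
    """Assumed estimate: average self-confidence of training examples in each class."""
    num_classes = pred_probs.shape[1]
    return np.array([pred_probs[labels == k, k].mean() for k in range(num_classes)])

def adjusted_ood_scores(pred_probs, thresholds, method="msp"):
    """Rescale predicted probabilities by per-class thresholds, renormalize,
    then compute an OOD score (higher = more likely out-of-distribution)."""
    adjusted = pred_probs / thresholds                      # divide each class column by its threshold
    adjusted = adjusted / adjusted.sum(axis=1, keepdims=True)
    if method == "msp":
        return 1.0 - adjusted.max(axis=1)                   # 1 - Maximum Softmax Probability
    entropy = -(adjusted * np.log(adjusted + 1e-12)).sum(axis=1)
    return entropy / np.log(pred_probs.shape[1])            # normalized Entropy score

# Usage: thresholds come from held-out predictions on the training data,
# scores are then computed for new (possibly out-of-distribution) examples.
# thresholds = class_confidence_thresholds(train_pred_probs, train_labels)
# scores = adjusted_ood_scores(test_pred_probs, thresholds, method="entropy")
```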
Company
Cleanlab
Date published
Oct. 19, 2022
Author(s)
Ulyana Tkachenko, Jonas Mueller, Curtis Northcutt
Word count
1523
Language
English