This article presents a simple, theoretically motivated adjustment to a model's predicted probabilities that can improve out-of-distribution (OOD) detection with classifier models trained on real-world data. The adjustment rescales the model's predicted probabilities using per-class confidence thresholds estimated from the training data, and can be implemented in just a couple of lines of code, so the resulting OOD detection procedure remains extremely simple and easy to use in practical deployments. Experimental results show that this adjustment improves the performance of both entropy-based and Maximum Softmax Probability (MSP) OOD detection scores.
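As a rough illustration of the idea, the sketch below estimates a per-class threshold as the average predicted probability of class `j` over training examples labeled `j`, rescales each test example's probabilities by those thresholds, and scores OOD-ness via the (inverted) maximum adjusted probability. The exact adjustment used by the article may differ; the function names, the division-based rescaling, and the renormalization step here are assumptions for demonstration purposes.

```python
import numpy as np

def class_thresholds(pred_probs, labels):
    """Per-class confidence threshold: the mean predicted probability of
    class j, averaged over training examples whose given label is j."""
    n_classes = pred_probs.shape[1]
    return np.array([
        pred_probs[labels == j, j].mean() for j in range(n_classes)
    ])

def adjusted_msp_ood_scores(pred_probs, thresholds):
    """Divide each class probability by its class threshold, renormalize
    rows to sum to 1, then score each example by 1 - max adjusted
    probability (higher score = more OOD-like)."""
    adjusted = pred_probs / thresholds
    adjusted /= adjusted.sum(axis=1, keepdims=True)
    return 1.0 - adjusted.max(axis=1)
```

In use, the thresholds are computed once from (held-out) predicted probabilities on the training data, then applied to score new examples at deployment time:

```python
train_probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8]])
train_labels = np.array([0, 0, 1])
t = class_thresholds(train_probs, train_labels)          # [0.85, 0.8]
scores = adjusted_msp_ood_scores(
    np.array([[0.5, 0.5], [0.99, 0.01]]), t
)  # the ambiguous first example receives the higher OOD score
```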