Predicting the performance of a deployed model on new, unlabeled data is a challenging task that can be addressed with a model certainty visualization technique. The approach generates lower-dimensional representations of the data directly from the model's full output vectors, enabling a quick assessment of output reliability and of how reliability correlates with accuracy. By visualizing these representations with UMAP, researchers can see how new data is structured in terms of the deployed model's outputs and identify regions of high and low model certainty. The technique requires no labels and has been validated through experiments on 190 uniquely trained models, which show a strong correlation between model certainty and accuracy. This visualization can help AI developers quickly identify difficult, high-value examples that should be added to training sets to improve model performance.
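The pipeline above can be sketched in a few lines: compute a per-example certainty score from the model's output vectors, then embed those vectors in 2-D and color by certainty. The data and the certainty measure (maximum softmax probability) are illustrative assumptions, not taken from the paper; the paper uses UMAP (`umap.UMAP(n_components=2).fit_transform(probs)`), for which scikit-learn's PCA is substituted here as a dependency-free stand-in with the same `fit_transform` interface.

```python
import numpy as np
from sklearn.decomposition import PCA  # stand-in for umap.UMAP

rng = np.random.default_rng(0)

# Hypothetical model outputs: softmax probabilities for 200 unlabeled
# examples over 10 classes (in practice, the deployed model's outputs).
logits = rng.normal(size=(200, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# One possible per-example certainty score: the maximum softmax probability.
certainty = probs.max(axis=1)

# Embed the full output vectors in 2-D (the paper uses UMAP for this step).
embedding = PCA(n_components=2).fit_transform(probs)

# Coloring the embedding by certainty reveals high- and low-certainty
# regions; examples in low-certainty regions are candidates for labeling
# and addition to the training set.
low_certainty_idx = np.argsort(certainty)[:20]
```

A scatter plot of `embedding` colored by `certainty` (e.g. via `matplotlib.pyplot.scatter(embedding[:, 0], embedding[:, 1], c=certainty)`) then gives the visualization described, with `low_certainty_idx` marking the examples most likely to need labels.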