The growing complexity of machine learning models has made it increasingly difficult to understand why a model makes certain predictions, especially as those predictions can have significant impacts on our lives. Explainability refers to a family of techniques for determining which features led to a specific model decision. It does not reveal how the model works internally; instead, it provides a human-understandable rationale for the model's outputs. This piece highlights different explainability methods and demonstrates how to incorporate them into popular ML use cases.
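To make the idea concrete, here is a minimal sketch of per-prediction feature attribution using the SHAP library with a tree-based model; the diabetes dataset and random-forest regressor are illustrative placeholders, not part of the use cases covered later.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a simple regression model on a small, bundled tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Attribute a single prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])

# Each value is one feature's contribution, in target units, to this prediction.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.2f}")
```

The output answers the question posed above: which features pushed this particular prediction up or down, and by how much, without requiring any insight into the model's internal structure.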