Many people and organizations mistrust predictions generated by AI and ML systems because those systems are often treated as black boxes. A burgeoning discipline, commonly called explainable AI, has grown up around this problem. Explainability can be as simple as a stack-ranked list of features showing how much each one contributes to the model's predictive power, or as involved as a report explaining how and why a given customer was approved or turned down for a loan. It is important to note that not all algorithms and models provide the same level of transparency or explainability, and that can ultimately drive the choice of model for a given use case or application. Organizations may (and probably should) choose the model/algorithm combination that is slightly less accurate at predicting the target variable but does so in a way that is as unbiased as possible, rather than always optimizing for the highest accuracy regardless of bias.
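
To make the idea of a stack-ranked feature list concrete, here is a minimal sketch in Python. It assumes scikit-learn is available and uses its bundled breast-cancer dataset and a random forest purely for illustration; the discussion above does not prescribe any particular library, dataset, or algorithm.

```python
# Illustrative sketch: train a simple classifier, then produce a
# stack-ranked list of features by importance. The dataset and model
# here are assumptions chosen for convenience, not a prescribed method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Pair each feature name with its learned importance, highest first.
ranked = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked[:10]:
    print(f"{name}: {importance:.3f}")
```

Tree ensembles expose importances almost for free, which is one reason they are popular when some degree of explainability is required; model-agnostic techniques such as permutation importance or SHAP values can produce similar rankings for less transparent models.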