TWIMLcon 2019


Why and How to Build Explainability into your ML Workflow


Building AI applications carries significant business risk: nearly half of companies report a lack of trust in AI. Several companies have deployed AI at scale only to roll it back after serious negative PR over bias and trustworthiness issues. Governments have begun to regulate automated decision-making, and fines for non-compliance can be hefty. Explainable AI is one way for companies to manage the business risks of deploying AI in use cases like underwriting loans, moderating content, and providing job recommendations.

Explainable AI helps ML teams understand model behavior and predictions. This fills a critical gap in operationalizing AI in verticals like FinTech (e.g. explaining ML-flagged fraud transactions), insurance (e.g. explaining policy underwriting decisions), banking (e.g. explaining loan denials by ML models), logistics (e.g. explaining predicted marketplace variations), and more. Building explainability in when you operationalize AI lets you integrate it across the end-to-end ML workflow, from training to production, which offers benefits such as the early identification of biased data.
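As a concrete illustration of what an explanation step in the workflow might look like, here is a minimal sketch using permutation feature importance from scikit-learn to surface which inputs drive a model's decisions. The feature names and data are synthetic stand-ins for a hypothetical loan-approval model, not anything from the session itself:

```python
# Sketch: adding a model-explanation step to a training pipeline using
# permutation feature importance. Feature names and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_age"]  # hypothetical loan features
X = rng.normal(size=(500, 3))
# Construct the label so that it depends mostly on the first feature.
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Explanation step: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

A report like this, generated at training time and again on production traffic, is one simple way to catch a model leaning on an unexpected (or biased) feature before it causes harm.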

Session Speakers

CEO and Founder
