According to Salesforce’s Ethical Leadership and Business Report, 90% of consumers believe that companies have a responsibility to improve the state of the world. Yet 80% of AI-focused executives struggle to establish processes that ensure responsible AI use. With billions of datasets available to researchers, keeping models vigilant about human rights is a formidable challenge. As pioneers on the frontline of machine learning and data technology, AI researchers and executives have a duty to uphold core human principles, such as empowerment and inclusivity, as they develop the tools of the future.
In this session, Yoav Schlesinger, Architect of Ethical AI Practice at Salesforce and a leader driving strategic initiatives in AI and emerging technologies, including the responsible deployment of AI, will discuss Salesforce’s maturity model for building an ethical AI practice. This four-step process follows the AI development lifecycle from ideation to implementation, ensuring the right resources are in place to review, test, and mitigate potential risks at every step before launch. Yoav will explain how AI applications remain in a perpetual state of practice, on a trajectory of continuous improvement, and he will share actionable ways to safeguard human rights, protect the data organizations are entrusted with, and respect the societal values of everyone.