AI Compliance Playbook: Traditional Risk Controls for Cutting‑Edge Algorithms

The corporate use of artificial intelligence and machine learning (AI/ML) skyrocketed during the coronavirus pandemic, with organizations across the economy investing in algorithmic tools to boost capabilities and tackle major problems. Public questions about misuse have accelerated too, with one taking center stage: Is your algorithm fair? Companies have responded by adopting AI/ML ethics policies. Yet leaders at these firms contend that they now need more than an ethics policy and are advocating for governmental regulation. This first article in a three‑part series seeks to fill the current regulatory gap by describing essential steps for an AI/ML compliance program, including adapting 1970s anti‑discrimination practices to meet the future. It also reports on data scientists’ recent compilations of AI/ML failures. The second article will detail seven AI/ML risks that entities now face, and the third article will discuss both cutting-edge AI auditing and the venerable “three lines of defense” approach. See our three-part series on new AI rules: “NYC First to Mandate Audit” (Jul. 28, 2022); “States Require Notice and Records, Feds Urge Monitoring and Vetting” (Aug. 4, 2022); and “Five Compliance Takeaways” (Aug. 18, 2022).
