Understanding and Mitigating Risks of Using ChatGPT and Other AI Systems

Large language models (LLMs) like ChatGPT and other forms of artificial intelligence (AI) have the power to revolutionize many critical tasks in the investment management space, including research, marketing, trading and compliance. Although they have made quantum leaps in their ability to digest information and generate reports and other content, they are imperfect systems that may pose significant risks to firms that use them. A recent ACA Group program examined the potential uses of ChatGPT and other LLMs, the associated risks, growing regulatory concerns and how firms can mitigate the risks of using LLMs. The program featured Raj Bakhru and Michael Abbriano, chief strategy officer and managing director at ACA Group, respectively; and Greg Slayton, director at ACA Aponix. This article distills their insights. See our three-part series on new AI rules: “NYC First to Mandate Audit” (Jul. 28, 2022); “States Require Notice and Records, Feds Urge Monitoring and Vetting” (Aug. 4, 2022); and “Five Compliance Takeaways” (Aug. 18, 2022).