Lauren Kornutick on ChatGPT Compliance Risks [Podcast]

By Adam Turteltaub

ChatGPT, like the movie title, is seemingly everywhere, all the time, and all at once. Individuals and corporations have rushed to embrace it, sometimes with great results and other times not so much.

For better or worse, ChatGPT and other AI-driven solutions are here to stay, and with them comes a host of new risks to manage. In this podcast, Lauren Kornutick, Director Analyst, Legal and Compliance at Gartner, shares the findings of recent research the firm conducted on ChatGPT.

Gartner found several risks for compliance teams to focus on:

  1. Fabricated and inaccurate answers. As in the widely reported case of the lawyer who filed a brief citing nonexistent cases, ChatGPT sometimes makes things up, whether because it was trained on inaccurate material or because it failed to understand the context of the question.
  2. IP risks. Employees may not understand that once data is entered into an open AI tool, it can become part of the public domain. That means more training on how to protect IP in the new AI era.
  3. Biased outputs. The data sets used to train AI models often contain biases, making human review essential to ensure that existing biases aren't perpetuated.
  4. Fraud. Fraudsters are particularly adept at finding nefarious uses for new technology.
  5. Consumer protection. Some states require that it be made clear when consumers are interacting with a person and when they are interacting with a bot. The FTC has also stressed that AI needs to be transparent, accountable, and empirically sound.

Listen in to learn more about how to protect your organization from the risks of ChatGPT. Be sure, too, to check out the press release. Gartner subscribers can learn more detail by accessing “What Legal and Compliance Leaders Need to Know About Large Language Model Risks.”