Five Essential “Guard Rails” for Using ChatGPT

by Susan Divers, LRN director of thought leadership and best practices

ChatGPT seems to have taken the world by storm in the past several months, generating intense media coverage, a million downloads a day, and dire predictions about job losses, while raising a host of legal and ethical questions about its use. According to media reports, it is the fastest-growing app of all time. Yet some leading companies, including tech giants Apple and Amazon, have banned its use out of concern about the risks it entails, and others are calling on Congress to pass legislation regulating generative AI.

Wikipedia defines ChatGPT as “a large language model-based chatbot developed by OpenAI and launched on November 30, 2022, notable for enabling users to refine and steer a conversation towards a desired length, format, style, level of detail, and language used.” ChatGPT and similar programs such as Bard “learn” from the inputs they receive and from the vast information available on the World Wide Web, enabling them to draft emails and articles, write code, and closely mimic human writing styles.

Given their popularity, companies are starting to grapple with the risks they face when employees use these apps and with how to manage them. Far from being uncharted territory or a legal Wild West, their use is covered by many existing laws and regulations, as well as by the standard compliance policies most companies already have. Educating everyone about the risks and requirements of ChatGPT and other generative AI is necessary to avoid problems. Here are five essential areas where it’s wise to establish guardrails and train employees if you plan to use AI.

  1. Privacy and data protection: To take one example, the European Union’s GDPR requires all personal data to be processed fairly and transparently and gives individuals the right to opt out of data collection. These requirements, and other data privacy laws, pose real challenges for ChatGPT use. Generative AI is designed to scour the web for relevant information and to incorporate any inputs from users (for example, employee or patient information) into its dataset. As a result, companies that store facial images have been accused of using them without consent to “train” AI facial recognition software.
  2. AI bias and hallucinations: Generative AI’s ability to “learn” from its dataset and to sort it with algorithms is a double-edged sword. AI datasets can inadvertently be skewed towards one group or another. For example, a cancer research project using AI that relies on patient data from predominantly white participants could produce outcomes that are less helpful for people of color. If job applications are loaded into AI for ranking, undisclosed algorithms that favor one group over another can create bias. A 2021 New York City law imposes specific rules for hiring and promotion decisions: companies using AI software in hiring or promotion must notify candidates that an automated system is being used and must audit the system for algorithmic bias. Moreover, generative AI that “guesses” the answer to a question has produced legal briefs citing non-existent cases, an example of AI “hallucinations.” Using AI properly means verifying accuracy and auditing AI programs for bias.
  3. Intellectual property rights: A major drawback of using ChatGPT or other AI to produce work is the murky nature of the underlying intellectual property rights. ChatGPT relies on and absorbs data from the World Wide Web and from user inputs when synthesizing an output. That output, whether software code, a book, film, song, videogame, algorithm, or other product, may contain copyrighted material, and the intellectual property rights to it are unclear. On the flip side, entering proprietary, controlled, or confidential information into ChatGPT as a prompt without disabling chat history makes that material available for the app to use going forward, for any purpose and without restriction. Users may thus inadvertently make proprietary or controlled technologies publicly available when using generative AI.
  4. IT security: Experts predict that phishing and other IT crimes such as fraud and identity theft will become significantly harder to detect as algorithms learn to mimic faces and voices, and even a person’s personality and writing or speaking style. Upgrading IT security to deal with these enhanced risks is essential. AI’s ability to produce credible fakes could also facilitate deceptive business practices. The more personal data and content made available to ChatGPT, the greater the risk of sophisticated phishing and deepfakes.
  5. Transparency and contract compliance: Best practices, according to regulators including the Federal Trade Commission, center on being fully transparent about generative AI’s function and purpose, particularly when dealing with consumers. Using generative AI also creates the potential to breach contracts that identify the team members who will perform the work: if generative AI performs that work without the customer being informed and the contract amended, a company can find itself in breach. It is prudent to disclose any proposed use of AI in contract performance and, on the flip side, to include contract clauses requiring vendors to fully identify any use of AI.
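The bias audit described in point 2 often begins with a simple disparate-impact check on the system’s outcomes. The sketch below, in Python, uses the common “four-fifths rule” as a screening threshold; the function names and the selection data are hypothetical illustrations, and a real audit would rest on actual outcomes and a fuller statistical analysis.

```python
# Minimal sketch of a disparate-impact ("four-fifths rule") screen.
# All data here is invented for illustration; it is not a substitute
# for the formal bias audit a law like New York City's may require.

def selection_rate(outcomes):
    """Fraction of candidates in a group who were selected (1 = selected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.
    A ratio below 0.8 is a common red flag for adverse impact."""
    rate_a = selection_rate(group_a)
    rate_b = selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening results from an AI ranking tool:
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 70% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # 30% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 -> 0.43
if ratio < 0.8:
    print("Potential adverse impact: review the model before relying on it.")
```

A check like this only flags a disparity in outcomes; it does not by itself establish bias or compliance, which is why auditing and disclosure obligations go together.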

While generative AI has huge potential, using it carefully and mitigating its risks as part of a good compliance program is essential. The AI Risk Management Framework, released on January 26, 2023 by the National Institute of Standards and Technology, offers the most comprehensive approach to date for assessing and managing the myriad risks of implementing or developing AI.