IBL News | New York
The output generated by ChatGPT and other LLMs presents legal and compliance risks that every organization has to face or face dire consequences, according to the consultancy firm Gartner, Inc, which has identified six areas.
“Failure to do so could expose enterprises to legal, reputational, and financial consequences,” said Ron Friedmann, Senior Director Analyst at Gartner Legal & Compliance Practice.
- Risk 1: Fabricated and Inaccurate Answers
ChatGPT is also prone to ‘hallucinations,’ including fabricated answers that are wrong, and nonexistent legal or scientific citations,” said Friedmann.
Only accurate training of the robot with limited sources will mitigate this tendency to provide incorrect information.
- Risk 2. Data Privacy and Confidentiality
Sensitive, proprietary, or confidential information used in prompts may become a part of its training dataset and incorporated into responses for users outside the enterprise if chat history is not disabled,
“Legal and compliance need to establish a compliance framework and clearly prohibit entering sensitive organizational or personal data into public LLM tools,” said Friedmann.
- Risk 3. Model and Output Bias
“Complete elimination of bias is likely impossible, but legal and compliance need to stay on top of laws governing AI bias and make sure their guidance is compliant,” said Friedmann.
“This may involve working with subject matter experts to ensure output is reliable and with audit and technology functions to set data quality controls,” he added.
- Risk 4. Intellectual Property (IP) and Copyright risks
As ChatGPT is trained on a large amount of internet data that likely includes copyrighted material, its outputs – which do not offer source references – have the potential to violate copyright or IP protection.
“Legal and compliance leaders should keep a keen eye on any changes to copyright law that apply to ChatGPT output and require users to scrutinize any output they generate to ensure it doesn’t infringe on copyright or IP rights.”
- Risk 5. Cyber Fraud Risks
Bad actors are already using ChatGPT to generate false information at scale, like fake reviews, for instance.
Moreover, applications that use LLM models, including ChatGPT, are also susceptible to prompt injection, a hacking technique in which
A hacking technique known as “prompt injection” brings criminals to write malware codes or develop phishing sites that resemble well-known sites.
“Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann.
- Risk 6. Consumer Protection Risks
Businesses that fail to disclose that they are using ChatGPT as a customer support chatbot run the risk of being charged with unfair practices under various laws and face the risk of losing their customers’ trust.
For instance, the California chatbot law mandates that in certain consumer interactions, organizations must disclose that a consumer is communicating with a bot.
Legal and compliance leaders need to ensure their organization’s use complies with regulations and laws.