
ChatGPT, Generative AI and IP

By Dan Felz, Wim Nauwelaerts, Paul Greaves and Josh Fox
April 01, 2023

Corporate legal departments are increasingly receiving requests from business clients to use ChatGPT or similar AI-powered tools in their operations. These requests can be urgent, with business clients demanding enablement from legal. This article is in two parts: Part One briefly details what "generative AI" tools like ChatGPT are and provides an overview of key legal considerations, including by looking forward to upcoming AI-specific legislation in the EU and the U.S.; and Part Two, coming next month, will outline potential ways for corporate counsel to think about enabling engagement with this new technology.


What is 'Generative AI'?

ChatGPT is one of a suite of AI-powered technologies that is being dubbed "generative AI." These are tools that can take a prompt or query from a user (the "input") and respond to it with a type of "output" that resembles what a human would create. These tools are referred to as "generative" because they do not rely on a database of preformulated answers or responses that they can retrieve to address user input. Instead, they have been trained to "recognize" a user's input and to "generate" a response entirely on their own.

Some of the more well-known examples of generative AI include:

  • ChatGPT. ChatGPT is — in simplified terms — a powerful chatbot. It is a "large language model" powered by a neural network that can: a) receive natural-language input from a user; and b) provide natural-language output that resembles how a human would respond. ChatGPT is operated by the company OpenAI (a minimal sketch of calling such a model programmatically appears after this list).
    • Generative AI for Images. There are also generative AI tools that autonomously create images. OpenAI operates DALL-E, a tool that creates images from a natural-language description. The tool "Stable Diffusion" is similar, creating images automatically in response to natural-language inputs from users.
    • Generative AI for Coding. Generative AI tools also assist with creating computer code. OpenAI's Codex uses an AI model to generate computer code in response to user input. Codex also powers GitHub's "Copilot" functionality, which suggests code to programmers in real time in response to the code they are already creating.
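To make the input/output pattern concrete, below is a minimal sketch of how a developer might call a generative AI model programmatically. It uses OpenAI's pre-1.0 "openai" Python package; the model name, prompt, and key handling are illustrative assumptions, not recommendations.

```python
# Minimal sketch: sending a natural-language "input" to a generative AI model
# and printing its generated "output." Uses the pre-1.0 "openai" package.
import os

import openai

# Assumption: the API key is supplied via an environment variable rather than
# hardcoded, since the key grants account-level access.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Write a Python function that removes "
                                    "duplicates from a list, preserving order."}
    ],
)

print(response.choices[0].message.content)
```

Note that each such call transmits the prompt to the provider's servers, a point that becomes important in the confidentiality discussion below.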

Common Generative AI Use Cases

Generative AI tools are not restricted to any particular use case. But requests to corporate legal departments presently seem to be coalescing around several specific use cases:

  • Coding Assistant. ChatGPT is reportedly a serviceable coder. It may not be able to create "ready-to-deploy" code, but it can take a goal ("Write a C++ script that does X") and generate a workable first draft. Generative AI tools like ChatGPT, Codex, or Copilot can potentially save hours of coding time per job — and thus be valuable to architects, developers, and others who are tasked with shipping products and features on deadlines.
  • Content Creation. Generative AI can quickly create a wide variety of content. For example, ChatGPT can write draft copy for Sales, Marketing, or Comms. Similarly, image generators like DALL-E or Stable Diffusion can quickly generate a series of images that could be used to mock up initial versions or features of mobile apps, games, or other visual-heavy products or services.
  • Document Drafting. ChatGPT can draft documents upon request. For example, it could draft policies for HR. It also drafts legal documents if requested (even though these come caveated, stating that a lawyer should be consulted). Again, ChatGPT's output is not necessarily ready-to-use, and its quality has not yet been comprehensively reviewed. But, it could conceivably be viewed as a sort of virtual "draughtsman" able to generate initial drafts for review.
  • Customer Support. AI-powered chatbots are already a feature of customer service. Some AI-powered chatbots still rely on a limited set of preformulated answers. ChatGPT does not; it generates answers that do not exist prior to a query, based specifically on what it is asked. Customer support departments are thus evaluating whether ChatGPT can be used to improve operations. Some companies may be considering how to integrate ChatGPT directly into customer-facing interactions. Still, even if ChatGPT isn't used to generate responses directly to customers, some companies may be considering how it can streamline support operations. For example, ChatGPT could quickly review the history of a customer's prior interactions with a company — then summarize this in 3-4 bullet points for a customer service rep (a prompt sketch appears after this list).
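As an illustration of that last use case, here is a hedged sketch of passing a support history to a generative AI model for summarization. The interaction history is entirely fabricated, and the call reuses the same assumed pre-1.0 "openai" package as the sketch above.

```python
# Hypothetical sketch: condensing a customer's support history into a few
# bullet points for a customer service representative.
import openai  # assumes openai.api_key is already configured, as above

# Entirely fabricated interaction history, for illustration only.
history = [
    "2023-01-04: Customer reported login failures after a password reset.",
    "2023-01-05: Agent escalated to engineering; fix confirmed 2023-01-07.",
    "2023-02-10: Customer asked about a refund for one unused month.",
]

prompt = (
    "Summarize this customer's support history in 3-4 bullet points "
    "for a customer service representative:\n" + "\n".join(history)
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

A real deployment of this kind would squarely raise the confidentiality and privacy issues discussed in the next section, since the customer's history is transmitted to the provider.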

Legal Considerations When Using Generative AI

While business clients may be seeing benefits from using generative AI, corporate counsel tends to focus on legal risks that may arise from permitting enterprise use. The risks of generative AI are still being discovered, so this advisory cannot present an exhaustive, closed-ended list of considerations that may be relevant to counsel. At present, however, reporting has identified several relevant considerations. Some of the more salient are:

  • Who Has the Contract with the Generative AI Provider? Is the Contract Workable? Generative AI providers tend to make their products easy to sign up for and use. For example, to use ChatGPT, one needs only to go to OpenAI.com and create an account; after that, ChatGPT can be used for free, subject to OpenAI's Terms of Use. This ease of sign-up and use of standard terms can blur who has a contract with the generative AI provider. Employees can sign up for generative AI tools without procurement or purchasing functions knowing about it, bypassing corporate due diligence and contracting processes. Use may be partly for business purposes and partly personal. This issue occurs across generative AI providers — the tools are readily available to employees who create accounts outside of corporate contracting procedures.
  • Confidentiality. Generative AI is "self-learning" technology. That means that what is input into the AI tool — along with outputs and follow-up responses — can be ingested into the tool's AI model to improve its operation. Employees' inputs into generative AI tools may thus become part of the tool itself, and may start reappearing in responses to other users outside your organization. This improvement-centered model creates confidentiality concerns. Companies should assume that everything that is input into a generative AI tool, and everything that the tool outputs in response, will be available to others outside the company.
    • ChatGPT may serve as an example. Per OpenAI's TOUs, both the inputs into ChatGPT and the outputs from ChatGPT may flow back into ChatGPT "to improve our models." Of course, the TOUs also indicate there is an "opt-out" option — organizations can "opt-out of having Content used for [model] improvement" by sending an "organization ID" to OpenAI's support email address. However, there appears as yet to be no publicly available information on the effect and extent of this opt-out, so placing organizational reliance on the opt-out may be premature.
    • Confidentiality concerns are already reportedly driving corporate decisions. Amazon reportedly prohibited its employees from inputting confidential information — including code — into a generative AI tool. Apparently, Amazon was beginning to receive output that resembled Amazon's internal, confidential code.
  • Intellectual Property. Generative AI intersects with three axes of intellectual property risk.
    • First, it is difficult to discern whether AI-generated output contains or resembles IP that belongs to third parties. The training data for generative AI tools has not been disclosed. It is assumed that a significant portion of the training data was available on the internet. Thus, it could be — as an example — that a generative AI tool's answer to a prompt contains unlicensed excerpts from a copyrighted work, or an image owned by someone else. Getty Images has announced it is suing Stability AI (the company behind Stable Diffusion) for allegedly unlawfully copying and using "millions" of Getty-owned images to train its AI — suggesting Getty believes Getty-owned works (or derivatives thereof) may be in Stable Diffusion outputs. If AI output resembles important IP of other companies, it is unlikely those companies will let your organization use it simply because a generative AI tool happened to ingest it during training.
    • Second, if AI tools are used to create code, their output may contain open source software (OSS). OSS can pose a host of IP challenges. Several OSS licenses require attribution to original authors. Other OSS licenses have more draconian consequences, particularly if OSS code is modified and integrated into other code; these OSS licenses can require publication of the new and modified open-source code — with the proprietary additions — to the world.
    • Lastly, it is not clear what IP rights users of generative AI tools can claim over creations of AI-powered tools. Per OpenAI's TOUs, as between companies that use ChatGPT and OpenAI, OpenAI "assigns to you all right, title, and interest" to ChatGPT-generated output. This is designed to settle ownership between OpenAI and ChatGPT users — but note, it does not mean ChatGPT users can necessarily claim copyright or patent rights to ChatGPT's output. Indeed, "authorship" and "inventorship" rules are still being worked out for AI-created works or inventions. Companies should not automatically assume they will be able to register IP protections for works or inventions that are created or assisted by AI.
  • Cybersecurity. Generative AI's "everything that users input can be ingested to improve the model" approach can present security risks.
    • As stated above, employees can readily create accounts with generative AI providers and start using their tools, all via a standard web browser. This potentially creates a new data loss risk vector. Existing data loss prevention tools may not detect if employees input restricted data into generative AI tools. These tools may thus potentially expose restricted data outside the organization.
    • Additionally, generative AI can be used to create code. But, there does not appear to be any published research on whether such code contains vulnerabilities or malicious elements. This risk can be mitigated by running AI-created code through a security scan — but that assumes that employees using generative AI coding tools are putting AI-created code through secure software development processes (a sketch of such a gate appears after this list). If AI-powered coding occurs outside of secure software development cycles, it could become a vector for introducing vulnerable code into the organization's source code.
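As one way to operationalize that mitigation, the following sketch gates AI-generated code behind a static security scan before it is merged. It assumes the open-source "bandit" scanner for Python code and a placeholder "ai_generated/" directory; other languages would require different scanners, and the CI integration is left out.

```python
# Sketch: block AI-generated code from merging until it passes a static
# security scan. Uses the open-source "bandit" Python security linter;
# "ai_generated/" is a placeholder for wherever such code lands.
import subprocess
import sys

result = subprocess.run(
    ["bandit", "-r", "ai_generated/"],  # -r scans the directory recursively
    capture_output=True,
    text=True,
)
print(result.stdout)

# bandit exits non-zero when it finds issues at or above its severity threshold.
if result.returncode != 0:
    sys.exit("Security scan flagged AI-generated code; review before merging.")
```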
  • Privacy and Data Protection. Most companies considering generative AI will likely be subject to U.S., EU, or UK privacy and data protection laws (or a combination of them). These require several considerations when using generative AI.
    • Personal data input into generative AI tools may become part of the AI model itself, and start appearing to other users. This can have privacy compliance impacts. It is unclear whether data can be deleted from generative AI models — which could impact individuals' rights to request deletion of personal data. In some cases, U.S. and EU/UK laws can require affirmative consent to process "sensitive" data, suggesting it should not be input freely into generative AI tools.
    • Privacy laws typically require companies to classify their vendors as processors, independent controllers, or joint controllers. Depending on which role the vendor plays, certain contractual terms are mandated by law. Which role is appropriate for generative AI tools? Many current generative AI tools are silent on the issue, while emerging enterprise solutions suggest a "data processor" model.
    • Higher-risk use cases of generative AI may trigger requirements to carry out a data protection impact assessment (DPIA) under the EU/UK GDPR, or to carry out a "data protection assessment" under U.S. state privacy laws.
    • U.S. and EU/UK data protection laws regulate "automated decision-making" that results in "legal or similarly significant effects." Any use of generative AI for high-impact decisions affecting individuals or small businesses could potentially implicate these rules.
  • Accuracy and Reliability. Generative AI, like any new and evolving technology, should not yet be considered reliable. None of the generative AI tools on the market have made their training data public, meaning there is no indication of whether the training data sets themselves display accuracy or reliability. It could be that training data included a broad set of data generally available from the internet, with the resulting wide swings in quality that one finds online. Further, ChatGPT's training data reportedly all predates the year 2021, meaning its answers will reflect only pre-2021 information. It cannot be determined how often generative AI tools will provide accurate or inaccurate answers. However, at present, it seems settled that generative AI tools will at times provide incorrect answers to queries. This suggests a layer of human review remains necessary at this stage.
    • As an example, OpenAI CEO Sam Altman tweeted: "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to rely on it for anything important right now." ChatGPT's website states: "ChatGPT sometimes writes plausible-sounding but incorrect … answers." Thus, OpenAI appears to encourage ChatGPT users to add a layer of human review to ChatGPT outputs.
    • As another example, Google recently introduced its "Bard" feature in search as a ChatGPT alternative. But, during its first public demonstration, Bard incorrectly stated that the James Webb Space Telescope was the first to photograph an exoplanet.
  • Unpredictability. Recent reports indicate that generative AI, if pushed by users, can take on a "persona" whose interactions with users become disturbing — such as declarations of love or insults. In fairness, both reports involved reporters intentionally attempting to coax unexpected responses from a ChatGPT-powered search engine. A company's internal users will typically not be doing that — instead, they simply want generative AI tools to create useful code, text, images, or documents. Still, caution is warranted in making generative AI output directly available to consumers or the public. Members of the public may attempt to "hack" AI-powered tools to make them say embarrassing or disturbing things (and indeed, there already appears to be a subreddit dedicated to doing this).
  • Impact and Bias. As a general matter, AI can take on biases inherent in training data sets. OpenAI publicly states that although efforts are made to make ChatGPT refuse inappropriate requests, ChatGPT will sometimes "exhibit biased behavior." Bias in AI is an increasing focus for regulators. In the U.S., for example, the FTC has issued advisories calling on companies to be accountable for ensuring that the AI they use is not biased or discriminatory. New York City requires AI used in recruiting or employment decisions to undergo a "bias audit." On the other side of the Atlantic, the UK ICO has GDPR-related guidance on addressing risks of bias and discrimination in AI systems. This may again counsel against making generative AI outputs directly available to the public, or using AI outputs in ways that impact consumers, before an enterprise-level agreement and validation are in place.
  • Compliance Programs. Companies in regulated industries often maintain compliance programs designed to adhere to — for example — financial services regulatory regimes, anti-bribery regimes (such as the U.S. Foreign Corrupt Practices Act), and similar. Generative AI tools may not be subject to the level of monitoring these programs require for auditability. For example, if an employee "discusses" potential securities trading strategies with a generative AI chatbot, these could be communications that would otherwise need to be monitored and retained in accordance with financial services regulatory requirements.
  • Lawyer-Client Privilege. Lawyers using generative AI tools run the risk of waiving legal privilege in some jurisdictions. If, as noted above, information input into generative AI tools may become available to others, privilege may be waived by using otherwise privileged information in a generative AI tool. Legal departments must weigh the risk of waiving privilege before using ChatGPT or similar tools in a manner that includes placing privileged information in a prompt.
  • Liability May Fall to the Corporate User. At present, generative AI providers generally limit their own liability, while requiring indemnity from users. As an example, OpenAI's TOUs limit OpenAI's liability to $100 USD in direct damages (or 12 months' fees – but ChatGPT is often used for free). Users indemnify OpenAI for claims arising from or relating to "your use of the Services," including from the "Content" output by generative AI tools. These terms are not necessarily unusual for a no-cost service. But, they may not be terms corporate counsel are used to accepting for enterprise solutions used by their business.

Part Two, next issue, looks at AI-specific laws and the path forward for firms wanting to use AI in practice.

