With AI becoming increasingly pervasive in our lives, will in-house counsel find themselves singing along to Styx’s classic “Domo Arigato, Mr. Roboto,” or screaming R.E.M.’s hit “It’s the End of the World as We Know It”? AI is ever-present in the news. The stock market loves AI companies, then hates them, then loves them again! There is a new story every day about someone using AI for good, or someone developing a voice-activated, AI-powered gun turret (yes, that really happened; yes, it is scary). The continued growth of, and attention on, artificial intelligence, driven by large language model platforms such as ChatGPT, Bard, DALL-E, and Midjourney, has created tremendous excitement (and some fear) about the possibilities for companies.
It feels as if there has been endless discussion about what AI could eventually do for us all, but far less about what it is doing for attorneys right now. This presentation aims to foster a collaborative discussion about AI and its role for in-house counsel in the past, present, and future. In-house counsel’s role in handling AI within a company is, much like the status of the law, ever-evolving and subject to change on a moment’s notice. While considering ways to leverage AI within their own group, in-house counsel must also consider what is best for the company and identify and manage the risks inherent in the use of AI.
At the same time, in-house and outside counsel must seek to harness AI’s benefits while protecting themselves and their clients (internal and external) from the risks inherent in the use – and misuse – of this rapidly developing tool, so that they can efficiently and effectively manage it and leverage it for the benefit of their corporate client and in-house legal department.
History of AI
AI as a concept has been part of public discussion since at least the 1950s. Early iterations focused on simple computer programs that could play games like checkers. In 1997, the chess supercomputer Deep Blue defeated Garry Kasparov in a televised match, the first time a computer had beaten a reigning world champion.[1] More recently, fueled by ever-increasing computing power, the public has become captivated by the latest generation of AI platforms.
Recent developments in AI have been as much about computing power as about new “technology.” Setting aside ever-increasing computing power, the other primary driver is the large language model (LLM). Large language models are algorithms intended to summarize, translate, predict, and generate text to convey ideas and concepts. These models rely on incredibly large data sets to “feed” the algorithm and allow it to “learn,” essentially predicting future outcomes based on past results.[2]
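The “learn from a large data set, then predict what comes next” idea can be illustrated with a deliberately simplified sketch. Real LLMs use neural networks with billions of parameters rather than simple word counts, and the training text below is purely hypothetical; this toy model only captures the spirit of predicting future output from past data.

```python
from collections import Counter, defaultdict

# Hypothetical "training data" for illustration only.
training_text = (
    "the court granted the motion the court denied the appeal "
    "the court granted the motion to dismiss"
)

# Count which word follows which word ("bigram" counts) - this is
# the toy stand-in for the model "learning" from its data set.
counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the training data."""
    return counts[word].most_common(1)[0][0]

print(predict_next("court"))  # "granted" (it followed "court" most often)
print(predict_next("the"))    # "court"
```

The sketch also illustrates why training data matters so much: the model can only predict patterns it has seen, which is one reason disputes over what material is used to train commercial models loom so large.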
In October 2022, the White House published a “Blueprint for an AI Bill of Rights.”[3] The Blueprint “is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.”[4] The five principles are: Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration, and Fallback.
The European Union reached a provisional agreement on a landmark act, the Artificial Intelligence Act, intended to govern the use of AI in EU member countries. “The accord requires foundation models such as ChatGPT and general-purpose AI systems (GPAI) to comply with transparency obligations before they are put on the market. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.”[5] The EU AI Act takes a risk-based approach that varies depending on whether a use case is considered minimal/no-risk, limited-risk, high-risk, or unacceptable-risk. Depending on the categorization, certain use cases face few obligations, while others must comply with stringent disclosure requirements or risk substantial fines and exposure to individual citizen complaints.
The seminar invites an open discussion between in-house counsel and outside counsel to discuss AI and the role of in-house counsel. Topics to be covered include AI generally, its potential benefits, potential risks, how it is currently being used, and how we envision its use in the future.
Types of AI – what they are, what they do, how they can be leveraged and the potential pitfalls
There are three (3) general categories of recognized AI – Narrow AI, General AI, and Super AI. Currently, the only AI that exists is what is known as Narrow AI and the other two (2) categories are theoretical concepts. Narrow AI includes several different types based on functionality. The two (2) primary types are briefly explained below:
Reactive Machine AI
Reactive Machine AI is designed to perform a single task and has no memory. This type of AI performs its task using an existing data set and does not store its previous decisions. Common examples that people experience (likely without realizing it) are Spotify or Netflix recommendations (i.e., analyzing what a user has watched and recommending other material based on that existing data set). The original AI programs designed to play games would also fall into this category, because their decision making is based only on the pieces currently on the board.
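The reactive pattern described above can be sketched in a few lines. The catalog and titles here are hypothetical; the point is that the program consults only the existing data set (titles, genres, and the viewing history passed in) and retains nothing between calls.

```python
# Hypothetical catalog mapping titles to genres, for illustration only.
CATALOG = {
    "Legal Eagles": {"drama", "legal"},
    "Space Race": {"documentary"},
    "Court of Appeals": {"drama", "legal"},
    "Cooking 101": {"lifestyle"},
}

def recommend(watched: set[str]) -> list[str]:
    """Recommend unwatched titles sharing a genre with the viewing history."""
    # Collect the genres of everything the user has already watched.
    liked_genres = set()
    for title in watched:
        liked_genres |= CATALOG.get(title, set())
    # Pick unwatched titles that overlap those genres. Nothing is stored:
    # the next call starts from scratch, which is what makes it "reactive."
    picks = [
        title for title, genres in CATALOG.items()
        if title not in watched and genres & liked_genres
    ]
    return sorted(picks)

print(recommend({"Legal Eagles"}))  # ['Court of Appeals']
```

Because the function keeps no state, it cannot "improve" over time; that ability to accumulate experience is what distinguishes the Limited Memory category discussed next.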
Limited Memory AI
Limited Memory AI is “AI [that] can recall past events and outcomes and monitor specific objects or situations over time. Limited Memory AI can use past- and present-moment data to decide on a course of action most likely to help achieve a desired outcome.”[6]
Generative AI
Within Limited Memory AI sits Generative AI, which is what you currently see most in the news. Generally speaking, generative AI is a type of artificial intelligence that can produce outputs (text, audio, video, code, images, etc.) based on prompts. The most common examples are ChatGPT, DALL-E, and Microsoft Copilot.
In order to address the legal issues that arise from AI, we need to first consider the possible uses within a company or in-house legal department in order to identify and discuss the potential risks. With the growth of AI technologies, the applications have become innumerable and are constantly changing.
Business and Legal In-House Uses for AI
The evolution and proliferation of AI products has spurred countless new companies that are focusing on the newest AI product to sell to businesses. The potential applications for AI seem limitless but there appear to be dominant categories that have embraced early adoption within corporate environments and in-house legal groups.
Document Review and Management and E-Discovery
AI significantly enhances document and email summarization, saving time and improving efficiency. It quickly distills large volumes of text into concise summaries, helping professionals focus on key details without information overload.
These tools boost productivity by extracting insights, aiding decision-making in business, legal reviews, and project management. AI-generated summaries ensure clear communication and alignment across teams, enhancing collaboration.
In legal e-discovery, AI streamlines document review by rapidly analyzing and categorizing data, reducing manual effort. Machine learning improves accuracy over time, minimizing errors and ensuring thorough, reliable results. AI integration accelerates legal processes while enhancing precision and defensibility.
Contract Drafting and Contract Life Cycle Management
Generative AI drafts contracts based on user prompts, with many companies specializing in this function. AI offers a powerful tool for contract drafting, accelerating the process and improving accuracy by analyzing large datasets, identifying patterns, and generating consistent, legally compliant drafts.
Beyond automation, AI’s natural language processing can interpret complex legal language, detect nuanced clauses, and identify risks. It also learns from past contracts, enhancing negotiation strategies and contract terms. By streamlining drafting and reducing errors, AI helps organizations create stronger, legally sound agreements with greater efficiency and confidence.[7] AI platforms can also track numerous contracts and their interrelated terms, and monitor contract provisions for future issues, whether required notices, renewal deadlines, or the number of licenses in use within the company and the ability to increase or decrease them.
Legal Research
Another area that has seen early adoption is the use of AI for legal research. Specifically, the well-known and dominant legal research service companies have created platforms (e.g., Westlaw Edge, Westlaw Precision, and Lexis+ AI) that claim to leverage AI technology to further assist users with legal research. This includes natural language search (more akin to searching in Google, as opposed to these companies’ previous “terms and connectors” search features used in traditional research programs). Additionally, these platforms are designed to help users find pertinent case law efficiently. They also cite-check briefs and summarize the cited law for both relevance and accuracy, and even suggest alternative citations when the platform believes there is stronger case law to support certain propositions.
One of the issues lawyers and firms face is the additional cost these features add beyond the already expensive subscription fees firms pay for legal research platforms. Moreover, many clients will not pay for portions of a firm’s electronic research subscription fees. Beyond cost, attorneys must still review the cited case law to ensure that the AI platform’s suggestions are accurate and comply with their ethical obligations related to court filings. This could increase the time required to prepare a brief, but it is something lawyers also had to do with previous iterations of these platforms, for example to ensure that a citation was not made to a “headnote.”[8] These issues lead to an obvious cost/benefit analysis for firms and clients to determine whether this added technology truly increases efficiency.
Customer Interactions (Chat Bots/Virtual Assistants)
Gone are the days of calling a call center staffed with hundreds of people. Generative AI now allows for the complete automation of customer interactions. AI platforms give companies the ability to interact directly with customers and answer their questions in real time, not only via written prompts but also through video interactions. This technology has advanced to the point where it is virtually indiscernible from interacting with a human being.
In fact, the technology has advanced so far that it can mimic individuals convincingly enough to be used for cyber scams. Most recently, a finance worker was tricked into transferring $25 million after a video conference call with what appeared to be several of the worker’s co-workers and the CFO of a different branch of the company.[9]
Use of AI in Employment Decision Making
AI is transforming HR by streamlining hiring, engagement, and employee development. Many AI tools claim to replace HR teams by processing large volumes of applicant data, enhancing decision-making, and improving efficiency.
In talent acquisition, AI analyzes resumes, screens candidates, and identifies top talent, reducing bias and improving selection quality. It also boosts employee engagement by analyzing performance data and sentiment to predict issues, enabling proactive HR interventions. AI-driven chatbots handle routine inquiries, freeing HR for strategic tasks.
For learning and development, AI personalizes training based on employee skills and preferences, fostering continuous growth. By leveraging AI, companies can enhance efficiency, make data-driven decisions, and improve workforce management.[10]
Generating Company Intellectual Property
The most widely publicized recent consumer use of AI is the generative platforms that create outputs based on a prompt entered by the user. Photographs, short stories, company logos, and a multitude of other outputs are created quickly and simply: type a prompt into the platform and the AI will produce a result within a matter of seconds. But when businesses use this technology for commercial purposes, it raises legal concerns that can create significant headaches for in-house counsel down the road, particularly regarding intellectual property (IP) rights.
Many AI platforms are facing lawsuits over how they train their algorithms, leading to agency guidance and potential legal changes. IP law, rooted in Article I, Section 8, Clause 8 of the U.S. Constitution, grants authors and inventors exclusive rights to their creations. This foundation led to the establishment of the USPTO and the U.S. Copyright Office, which now play key roles in addressing AI-related IP challenges.[11]
Legal Risks Related to the Potential Uses of AI and How to Limit Risk
Employment Discrimination
Several states and cities, including Illinois and New York City, have restricted AI in job applicant screening due to concerns over bias.[12] These laws aim to prevent AI from perpetuating existing workforce biases, which could lead to discriminatory hiring decisions based on protected characteristics like race or gender.
Another key issue is AI transparency. Many models, especially deep learning systems, function as “black boxes,” making it difficult to explain hiring decisions. This lack of clarity raises legal risks, as employees have the right to understand employment decisions affecting them. If an employer cannot justify an AI-based hiring decision, they may face discrimination claims.
In the EU, AI cannot make final hiring or firing decisions under Art. 22 GDPR. AI may assist in HR decisions, but a human must have the final say to ensure fairness and compliance with employment laws.
Data Privacy and Cybersecurity issues
There are significant data privacy risks that can arise from using AI. These risks vary but primarily arise from two things: what information is being shared with AI and how that AI is being used with that information. To further complicate matters, these concerns vary significantly depending on where in the world your data arises and how you are using the AI platform.
In the US, there is no singular, universally applicable data privacy law (yet), but certain industries have statutes in place or are starting to see regulatory frameworks that govern individual privacy and companies’ obligations related to privacy. Nearly every state now has some form of data privacy statute; they vary widely, with some imposing robust requirements and protections and others very few. In the EU, the Artificial Intelligence Act (“AI Act”) has been celebrated as being as innovative as the technology it attempts to regulate, though it has shortcomings that existing law must fill. Recognizing the potential threat to people’s safety, livelihoods, citizens’ rights, and democracy posed by certain applications of AI, the co-legislators agreed to prohibit certain uses of AI technology, including biometric categorization systems that use sensitive characteristics; untargeted scraping of facial images from the internet or CCTV footage; emotion recognition in the workplace and educational institutions; and other potentially sensitive uses.
There are exceptions to these prohibitions. Like the United States, the rest of the world is quickly adapting to this emerging technology and enacting laws to address perceived potential issues.
Copyright Issues
The rise of generative AI has prompted rapid changes at the Copyright Office, leading to new guidance. The office has ruled that AI cannot be an “author” under copyright law, citing the Constitution and the Copyright Act.
It assesses whether a work is primarily human-created, with AI as an assisting tool, or if AI-generated elements lack human authorship.[13] If AI independently produces complex works from simple prompts, the technology—not the human—is deemed the author.[14]
This legal landscape has spurred copyright infringement lawsuits against AI platforms for allegedly using copyrighted works to train their models.[15] Copyright holders argue AI platforms benefit from their work and, in some cases, reproduce it in responses to user prompts.
Ultimately, these cases will hinge on what constitutes “fair use” for copyright purposes. “Under the fair use doctrine of the U.S. copyright statute, it is permissible to use limited portions of a work including quotes, for purposes such as commentary, criticism, news reporting, and scholarly reports. There are no legal rules permitting the use of a specific number of words, a certain number of musical notes, or a percentage of a work. Whether a particular use qualifies as fair use depends on all the circumstances.”[16]
Patent Issues
It is worth briefly noting that in 2020 the USPTO determined that the number of AI-related patent applications had increased from fewer than 10,000 annually in 2005 to almost 80,000.[17] This explosion in AI patent applications has resulted in an increasing number of granted AI patents (estimated by the USPTO at approximately 450,000 in total as of 2020[18]), and it has also raised interesting issues that the USPTO has had to review and address.
Similar to how the Copyright Office has treated AI-generated works for purposes of copyright registration, the USPTO has reached a similar conclusion and provided guidance that AI cannot be considered an “inventor” for purposes of patent protection. The Patent Office’s conclusion is bolstered by the holding of the United States Court of Appeals for the Federal Circuit in the 2022 case of Thaler v. Vidal.[19] That case centered on “the question of who, or what, can be an inventor” and whether an AI program can be listed as an inventor on a patent application filed with the USPTO.[20] The plaintiff was an individual who created an AI system. He claimed that the AI system created two (2) new inventions, and he filed two (2) patent applications for the AI-created inventions.[21] Each application listed the AI system as the sole inventor. The USPTO concluded that the applications lacked an inventor and requested that the plaintiff identify the valid inventor(s). The plaintiff contested the notice and then sought judicial review.[22]
The federal district court sided with the USPTO and entered summary judgment in its favor, finding that an inventor must be an “individual” under the Patent Act and that the plain meaning of that term as used in the statute is a natural person.[23] The United States Court of Appeals for the Federal Circuit agreed with the district court and concluded that, for purposes of patent protection, an inventor must be an individual and that “Congress has determined that only a natural person can be an inventor, so AI cannot be.”[24]
How to Effectively Manage the Use and Risks of AI in the Workplace
The first, and most obvious, step for any in-house counsel is to ensure that they understand how their business client is using (or wants to use) AI in business operations.
Addressing Through Contract
Another way to address potential concerns regarding privacy and data/cybersecurity head on is by negotiating directly with the AI providers and considering their “enterprise” solutions. Admittedly, the current focus of many companies is solely on subscriptions, which are not subject to contract negotiations. Several AI providers have started to recognize the concerns businesses may face and have created off-shoot products targeted towards corporations. Specifically, ChatGPT now offers an “Enterprise” product where customer prompts and data are not stored or used for their training models and include increased levels of data encryption and SOC 2 cybersecurity compliance.
AI Policy Drafting
For companies using and working with AI, implementing an internal AI guideline may be advisable, covering the following points:
- permitted and unpermitted technologies;
- which employees within a company have access to use the technology (and who does not);
- copyrights and licenses (it has proven helpful to explicitly name the AI providers employees are permitted to use);
- data protection;
- handling of business secrets and confidential information;
- labelling obligations and transparency;
- training and awareness-raising; and
- monitoring and enforcement.
Collaboration with Stakeholders
The most obvious, and hopefully the most utilized, way to head off issues before they occur is early and frequent collaboration with business stakeholders. This is an absolute necessity when weighing the potential benefits and detriments of using AI in the workplace, and it becomes more difficult for in-house counsel and their outside counterparts as the scale of a business increases. Finding ways to stay top-of-mind with business users, so that there is open and frank communication about which emerging technologies are being used and how they will ultimately be used, is essential.
Adapting Employment Contracts
The employer can prohibit employees from using AI by virtue of its right to issue instructions, which includes determining which work equipment employees may or must use. To avoid ambiguity, the employer should establish clear rules on whether AI use is permitted and, if so, which AI tools may be used. Furthermore, any work done by AI should be labelled as such.
Bringing the Technology In-House
Companies have begun building their own AI platforms and hosting them “on premise,” meaning the company hosts the AI platform on its own assets (local servers) that it owns and manages, with access limited to individuals within the company. This gives companies greater control over privacy concerns, but at significant monetary and administrative cost. The build-out costs, in both time and assets, are significantly higher than for a cloud-based platform. Additionally, once an on-premises solution is created, it must be maintained, which incurs even more time and cost.
Conclusion
Ultimately, the benefits that AI offers are industry changing, but come with a slew of potential downsides and legal headaches for in-house counsel and management to analyze and mitigate. In-house counsel will have to act as a collaborative partner to identify, assess and mitigate risk while also balancing the real-life practical benefits that AI can offer to companies.
[1] Rockwell Anyoha, The History of Artificial Intelligence (August 28, 2017), https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
[2] For a digestible explanation, consider: Timothy B. Lee and Sean Trott, A jargon-free explanation of how AI large language models work (July 31, 2023), https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/.
[3] Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, whitehouse.gov (October 2022), https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
[4] Id.
[5] Foo Yun Chee, Martin Coulter and Supantha Mukherjee, Europe agrees landmark AI regulation deal (December 11, 2023, 10:29 AM CST), https://www.reuters.com/technology/stalled-eu-ai-act-talks-set-resume-2023-12-08/
[6] IBM Data and AI Team, Understanding the different types of artificial intelligence (October 12, 2023), https://www.ibm.com/blog/understanding-the-different-types-of-artificial-intelligence/.
[7] Portions of this section were drafted by OpenAI’s ChatGPT platform and modified by the authors of this paper.
[8] Headnotes are short statements of law provided at the beginning of opinions on legal research service websites. They are written by staff attorneys at the legal research service companies and are sometimes not accurate summaries of a court’s opinion or of the law in a jurisdiction.
[9] Heather Chen and Kathleen Magramo, Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’ (Feb. 4, 2024, 2:31 AM EST), https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html.
[10] Portions of this section were drafted by OpenAI’s ChatGPT platform and modified by the paper’s authors.
[11] Milestones in U.S. Patenting, United States Patent and Trademark Office website – USPTO.gov. https://www.uspto.gov/patents/milestones (last visited February 12, 2024).
[12] Alonzo Martinez, Balancing Innovation And Compliance: Navigating The Legal Landscape Of AI In Employment Decisions (October 31, 2023, 6:54 EDT), https://www.forbes.com/sites/alonzomartinez/2023/10/31/balancing-innovation-and-compliance-navigating-the-legal-landscape-of-ai-in-employment-decisions/?sh=75311382da2f.
[13] Copyright Registration Guidance: Works Containing Materials Generated by Artificial Intelligence, United States Copyright Office, p.4, https://www.copyright.gov/ai/ai_policy_guidance.pdf (quotations omitted) (last visited February 12, 2024).
[14] Id.
[15] Matt O’Brien, ChatGPT-maker braces for fight with New York Times and authors on ‘fair use’ of copyrighted works (January 10, 2024, 3:05 PM CST), https://apnews.com/article/openai-new-york-times-chatgpt-lawsuit-grisham-nyt-69f78c404ace42c0070fdfb9dd4caeb7; Matt O’Brien, Sarah Silverman and novelists sue ChatGPT-maker OpenAI for ingesting their books (July 12, 2023, 1:56 PM CST), https://apnews.com/article/sarah-silverman-suing-chatgpt-openai-ai-8927025139a8151e26053249d1aeec20; Jocelyn Noveck and Matt O’Brien, Visual artists fight back against AI companies for repurposing their work (August 31, 2023, 1:55 PM CST), https://apnews.com/article/artists-ai-image-generators-stable-diffusion-midjourney-7ebcb6e6ddca3f165a3065c70ce85904
[16] Can I Use Someone Else’s Work? Can Someone Else Use Mine?, United States Copyright Office, https://www.copyright.gov/help/faq/faq-fairuse.html#:~:text=Under%20the%20fair%20use%20doctrine,news%20reporting%2C%20and%20scholarly%20reports (last visited February 12, 2024).
[17] Artificial Intelligence (AI) Trends in U.S. Patents, United States Patent and Trademark Office website – USPTO.gov, https://www.uspto.gov/sites/default/files/documents/Artificial-Intelligence-trends-in-U.S.-patents.pdf (last visited February 12, 2024).
[18] Please note this number includes the total number of all granted AI U.S. patents from 1976-2020.
[19] Thaler v. Vidal, 43 F.4th 1207, 1209 (Fed. Cir. 2022), cert. denied, Thaler v. Vidal, 143 S. Ct. 1783, 215 L. Ed. 2d 671 (2023).
[20] Id. at 1209.
[21] Id. at 1210.
[22] Id.
[23] Id.
[24] Id. at 1213.