
The Increase in Artificial Intelligence-Related Securities Class Actions

By Jay Dubow, Joanna Cline and Milica “Millie” Krnjaja
December 01, 2024


With the increasing prominence of artificial intelligence (AI) technology, the potential for legal disputes regarding its use has grown. While the full scope of AI-related legal risks is still developing, both the Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC) have identified the kinds of AI-related corporate behavior they consider problematic. The behavior the agencies have emphasized most is “AI Washing” — the practice of making unfounded claims about AI capabilities. Earlier this year, the FTC signaled its readiness to use its authority under Section 5 of the FTC Act to bring enforcement actions against companies that overstate their AI technology. Similarly, the SEC announced that AI is one of its examination priorities, making clear that the federal securities laws apply in the AI context and that misrepresentations about AI are no different, from a liability standpoint, than any other misrepresentations.
In addition to regulatory scrutiny, companies are increasingly facing private litigation risks relating to AI disclosures. Recent reports show that AI-related securities class action lawsuits are on the rise this year. These lawsuits typically target companies that develop AI models or use them for business purposes, with allegations relating to AI technologies including machine learning and autonomous driving, among others. The earliest AI-related securities class action lawsuit was filed on March 24, 2020. Since then, there have been a total of 38 AI-related filings: five in 2020, eight in 2021, six in 2022, six in 2023, and 13 so far in 2024.
Notable Private Class Action Lawsuits
AI-related lawsuits often target companies heavily invested in AI, including those developing or offering autonomous driving technologies. Recently, several major car companies have faced AI-related complaints due to their work on autonomous vehicles.
But car companies are not the only targets of AI-related securities class actions. For instance, the real-estate company Zillow recently became the subject of a class action complaint. The complaint alleges that Zillow misled shareholders with optimistic claims about its house-pricing tool, which was supposed to enable quick house transactions by leveraging AI to map millions of data points. According to the complaint, the tool’s forecasting capabilities were unreliable, leading to its shutdown. This wind-down allegedly resulted in significant losses for Zillow and a decrease in its stock price.
Additionally, numerous securities complaints have been filed against data and technology companies. For instance, on Feb. 21, 2024, shareholders brought a complaint against Innodata Inc., a global data engineering company. The complaint alleges that Innodata, which presented itself as an AI pioneer, falsely claimed to use AI-powered operations for data preparation while actually relying on offshore manual labor and underfunding its AI research and development. The complaint’s allegations stem from the publication of a short seller research report, which was followed by a more than 30% decline in the company’s stock price.
Similarly, on July 19, 2024, a securities class action was filed against Oddity Tech Ltd., a consumer technology company offering an AI-driven online platform for beauty and wellness products. The complaint alleges that Oddity Tech made materially false and misleading statements, overstating its AI technology and capabilities and the extent to which the technology drove its sales.
More recently, on Sept. 4, 2024, a securities class action was filed against GitLab Inc., a global software company that designs and develops software solutions. The complaint alleges that GitLab and its executives misled investors about the company’s ability to develop and incorporate AI features into its platform, which they claimed would optimize code generation, increase market demand, and make software development more affordable. Despite these positive statements, GitLab allegedly faced weak market demand for its AI features and incurred increasing expenses. According to the complaint, the reality of the situation became apparent on March 4, 2024, when GitLab revised its full-year guidance for 2024 downward, leading to a significant drop in its stock price.
Recently, we have also seen a securities class action complaint that involves both AI and SPAC-related allegations. While SPAC-related litigation has been one of the most significant securities litigation trends in recent years, SPAC-related lawsuits are beginning to slow down. However, several recently filed class actions combine the two trends, including the lawsuit against Evolv Technologies Holdings.
Evolv Technologies, which describes itself as a leader in AI-based weapons detection for security screenings, became a public company in 2021 after it merged with a SPAC called Newhold Investment Corp. According to the complaint, Evolv stated in various filings that, unlike conventional metal detectors, its products used advanced sensors, AI software, and cloud services to detect weapons. The company also claimed its products were independently and reliably tested. In November 2022, however, news reports accused Evolv of colluding with independent testing services to hide test results. In May 2023, the BBC reported that Evolv’s systems, which are used in hundreds of U.S. schools, allegedly failed to detect a knife that was later used in an attack on a student. Consequently, the plaintiffs filed a securities class action alleging that Evolv overstated the effectiveness of its products.
Because these lawsuits were recently filed, their outcomes remain to be seen. Potential weaknesses in these cases include the challenge of proving that companies intentionally misled investors, as it is plausible that companies’ statements about AI capabilities were based on reasonable projections and that any failures could not have been foreseen. Additionally, the plaintiffs will have difficulty establishing a link between the alleged misrepresentations and financial losses. But even if ultimately unsuccessful, these lawsuits can still require companies to incur large costs if they survive a motion to dismiss.
Recent Regulatory Actions
The SEC has brought a number of AI-related enforcement actions this year. Recently, for example, the SEC filed a lawsuit against Destiny Robotics Corp. and its founder, accusing them of misleading investors about the company’s ability to develop an AI-infused hologram and a robot for household use. According to the complaint, the company assured investors that its products would be capable of forming deep and meaningful relationships with humans and assisting with complex tasks such as crisis management, psychological therapy, and childcare. The complaint claims that none of these promises were true: the hologram was limited to basic functions, and Destiny’s first robot prototype was far from the socially intelligent humanoid robot represented to investors. Ultimately, the company abandoned the projects and ceased operations, causing investors to suffer a total loss.
Another recent example involves Rimar Capital, a New York-based investment advisory firm. On Oct. 10, 2024, the SEC announced that it settled charges against the firm, its related entity, its CEO, and a board member, for allegedly misleading investors about the firm’s use of artificial intelligence to perform automated trading for advisory client accounts in a range of products, including equities, futures, and crypto assets. Without admitting or denying the charges, the defendants agreed to the entry of an order finding antifraud violations and to cease and desist from violating the charged provisions. The firm’s CEO was ordered to pay disgorgement totaling around $213,000, and a $250,000 civil money penalty, while the firm’s board member was ordered to pay a $60,000 civil money penalty. In its press release announcing the settlement, the SEC noted that the firm’s CEO “lured investors and clients with multiple fabrications, including with buzzwords about the latest AI technology.” The press release also made clear that “as AI becomes more popular in the investing space,” the SEC “will continue to be vigilant and pursue those who lie about their firms’ technological capabilities and engage in AI washing.”
Notably, these regulatory actions frequently target not only the companies, but also their officers and directors. For instance, the SEC recently brought fraud charges against the CEO, CFO, and audit committee chair of Kubient, a digital advertising technology company. According to the complaint, Kubient’s CEO fabricated reports that Kubient had successfully tested an AI software program that detects real-time fraud during digital advertising auctions, allowing Kubient to recognize $1.3 million in revenue leading up to its IPO. The complaint alleges that Kubient should not have recognized the revenue because it did not actually perform the tests. This case is significant because while the alleged fraud scheme was initiated by the CEO, the SEC also brought charges against the company’s CFO and the audit committee chair, noting that they were required to take more affirmative steps, especially because the concerns about the alleged scheme were raised internally.

Takeaways

There is no doubt that AI technology presents significant opportunities for companies. Yet with the swift rise of AI come various liability risks. Like with other emerging technologies, companies are under increasing pressure to integrate AI into their operations and launch AI-related strategies. But while many companies recognize the opportunities that AI presents, they also must stay informed about the potential risks.
As highlighted above, the pressure to adopt AI quickly creates the risk that companies might not accurately represent their AI capabilities. Regulatory agencies have repeatedly expressed concern that companies will exaggerate their AI capabilities to capitalize on current market enthusiasm — a phenomenon SEC Chair Gary Gensler has called AI Washing. We have already seen multiple AI Washing-related SEC enforcement actions. Beyond regulatory concerns, recent filings show that AI Washing allegations are now increasingly present in securities class actions as well. Without ever explicitly using the term, recently filed AI-related securities class actions accuse companies of AI Washing; that is, the lawsuits allege that companies misrepresented their AI capabilities or failed to disclose risks associated with the use of AI.
This increasing vigilance of regulatory agencies and private litigants makes it imperative for companies to maintain transparency and integrity in their AI-related disclosures. While there is nothing that companies can do to avoid these issues altogether, they can take steps to minimize potential liability.
Companies should first consider implementing AI-related board and management training and education. Directors sit above management, and it is their job to take an independent view of company oversight. Now that AI is at the forefront, boards should consider establishing rules and guidelines for its use. Because these rules could significantly influence the trajectory of this powerful technology, directors should educate themselves on both the risks and opportunities that come with AI. Occasional consultation with AI experts can also provide valuable insights.
Companies should also consider board structure and reporting processes with regard to AI. For instance, companies may establish committees focusing on their use of AI, including AI-related privacy, confidentiality, and disclosure issues. Additionally, companies should consider the impact of AI on their controls and, if appropriate, adopt mechanisms for reporting on AI activities to senior management and the board. By adopting these strategies, boards can position themselves to better manage AI-related risks and defend against potential legal challenges.

