United States | Publication | January 2025
As artificial intelligence (AI) continues to drive business innovation, more companies are not only developing AI tools but also increasingly applying these technologies in their operations, including by relying on third-party vendors or through acquisitions.
The increasing prominence of AI in corporate strategies demonstrates its potential to shape the future of various industries. The public’s fascination with AI has led to a rise in AI-related disclosures as businesses seek to highlight their advancements to attract investor attention. This dynamic heightens the importance of making meaningful corporate disclosures regarding the use and impact of AI and underscores the need to revisit the control system surrounding these disclosures.
Indeed, United States regulatory authorities, such as the Securities and Exchange Commission (SEC), have expressed concern about the rise of “AI washing,” or exaggerating and misrepresenting the use of AI. In 2024, the SEC took several enforcement actions against companies that allegedly made false or misleading statements regarding their AI capabilities. For example, in 2024 the SEC charged two investment advisory firms for misrepresenting the role of AI in their investment decision-making processes.1 In January 2025, the SEC reached a non-monetary settlement with a restaurant technology company to resolve claims that it made false and misleading statements about critical aspects of its AI product. Among other things, the company allegedly failed to disclose that, for a period of time, the AI technology powering its product was owned and operated by a third party, creating a false impression that the technology was proprietary to the company, and later falsely claimed that its own technology eliminated the need for human order-taking when in fact the vast majority of drive-thru orders required human intervention. The SEC also alleged that the company failed to maintain required disclosure controls and procedures.2 These cases reflect a similar pattern of allegedly false or overstated AI-related claims being used to attract funding and gain the trust of the investor community.
The SEC’s enforcement actions send a strong investor protection message: any claims related to AI must be accurate and backed by evidence. In his public statements, former SEC Chair Gary Gensler emphasized that false claims about AI hurt investors and undermine market integrity. In connection with the SEC’s actions, then-SEC Enforcement Director Gurbir Grewal noted, at a compliance and enforcement conference in April 2024, that the agency will approach individual liability in AI cases similarly to how it handles cybersecurity disclosure failures. He explained that the SEC will examine whether individuals knew or should have known about misrepresentations and what actions they did or did not take to prevent misleading disclosures. While Grewal stated that management who operate in good faith and take reasonable steps to ensure accurate reporting are unlikely to face personal liability, the rapidly expanding focus on AI by reporting companies and companies that raise capital through securities offerings means that corporate leaders will face heightened risks of individual liability absent the adoption of “proactive compliance” strategies to prevent AI washing.3
The SEC’s recent actions and statements suggest that while AI represents an innovative and rapidly evolving field, the principles underlying regulatory enforcement remain unchanged. The SEC does not appear to have developed novel theories to address AI washing; instead, it is applying classic enforcement frameworks – prohibiting false or misleading statements – to new technologies, systems and their associated arrangements and prospects. These actions serve as a reminder that longstanding standards of accuracy, transparency and robust internal controls remain fully applicable, even as the context evolves with complex innovations like AI. Whether a startup or a Fortune 500 enterprise, companies must carefully review their representations about AI to ensure compliance with these established principles. Below are some key takeaways and practical tips for businesses:
Before AI is featured in SEC reports, press releases, investor presentations, earnings calls or other investor communications, companies should proactively ensure the accuracy of these statements. For public companies, this process should naturally fall within their disclosure controls and procedures. Generalized claims or hypothetical examples, in particular, need to be carefully reviewed to ensure they are relevant, accurate and appropriate for the given context.
To achieve these goals, companies should consider the practices discussed below.
As companies highlight their AI capabilities and projections, a critical related question to consider is whether they should also enhance their disclosures regarding the technology’s limitations and associated risks, such as biases in algorithms, data privacy and security risks, product safety concerns and technological immaturity.4 Transparency about challenges faced by the company is important to give investors a full understanding of both the benefits and potential pitfalls of AI adoption. Clearly communicating these limitations not only helps manage investor expectations but also reduces the risk of shareholder litigation and enforcement actions if the AI fails to meet performance expectations or boost the company’s value as anticipated. Tailored and thoughtful risk factor disclosures serve as both a protective measure and a demonstration of the company’s commitment to responsible governance and transparency.
As with any AI-related claim, risk disclosures should be specific to a company’s own situation. Companies should assess whether general disclosures are sufficient or if more specifically drafted disclosures are necessary to address unique circumstances or vulnerabilities. Examining the practices of industry peers can also provide valuable insights into effective approaches to risk disclosures.
When companies engage third parties for AI technologies or acquire assets or businesses involving AI and feature these capabilities in public disclosures or investor communications, conducting thorough AI-specific diligence in advance is critical. This includes verifying the accuracy of third-party claims, understanding the technology’s functionality and limitations and identifying associated risks – similar to the considerations companies should apply to their own AI-related disclosures, as discussed above.
The importance of adopting relevant internal controls in advance of making AI-related statements and disclosures extends beyond SEC filings to include investor presentations, earnings calls, consumer marketing materials, websites, press releases, press interviews, social media posts, podcasts and other public statements. This expansive scope necessitates a coordinated and governance-driven approach, integrating disclosure controls across multiple teams – management, corporate communications, marketing, investor relations and technology departments – to ensure consistency and compliance. Given the technical complexity of AI, effective governance and disclosure require a deep understanding of the technology. Legal teams with technical expertise should play an active role in reviewing AI-related claims, ensuring they are accurate and verifiable with respect to the company’s actual capabilities and prospects. High-level involvement from management and the board in designing and implementing a relevant internal control framework is equally crucial to mitigate the risk of AI washing and potential individual liability, as well as to identify broader risks and policy needs related to AI use.
To achieve these objectives, individuals responsible for public communications need to know the right questions to ask about the technology. Vague or exaggerated claims in investor communications, such as simply describing products or services as “AI-powered,” signal a need to better understand the details regarding the tools, training models and data used, and the distinctions from standard rule-based analysis. Companies may also need to understand how their AI systems address inadequate or inaccurate outputs (e.g., hallucinations), evaluate data and results for bias or discriminatory outcomes, explain model functionality (e.g., addressing the “black box” problem), deploy robust data security measures and track performance metrics, including model-specific outcomes.
Similarly, when engaging with vendors or integrating third-party AI solutions, understanding the technology requires informed questioning to validate claims. Companies should require such third parties to provide clear details about their tools, their expertise in data science and AI and whether the AI is proprietary or relies on third-party models like large language models (LLMs). Companies should also inquire about any use of a “human in the loop” for quality control, review the vendor’s responsible AI policies and ethical guidelines and conduct proof-of-concept testing as needed to verify the AI tools’ legitimacy, accuracy and performance.
Effective AI governance and disclosure ultimately depend on a company’s ability to ask and comprehend the right technical questions. While companies may not be required to disclose all the underlying details, understanding them is critical for establishing robust governance policies for AI use and crafting accurate and specific disclosures and investor communications. Where internal expertise is insufficient, engaging external legal counsel with the requisite AI and technology expertise to advise on these matters can provide the necessary guidance to ensure compliance and maintain investor trust.
1 See “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence."
2 See “SEC Charges Restaurant-Technology Company Presto Automation for Misleading Statements About AI Product."
3 See “Remarks at Program on Corporate Compliance and Enforcement Spring Conference 2024."
© Norton Rose Fulbright LLP 2025