United States | Publication | February 16, 2018
“Artificial Intelligence” is defined as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”[1]
When I first signed on to write this article, I (with human ignorance) employed all my human cognitive brain cells to author and shape its form. However, knowing what I know now, I ought in hindsight to have deployed artificial intelligence (such as an IBM Watson system) so that I could simply input the title of the article and flick the switch, and the masterpiece of an article would have been written for me with my name at the end. Soon we will be replaced: our brains will atrophy; and the administration of justice will be achieved by remote robotic pieces of machinery. Until then, however, I am burdened with the task of having to write this article and, unless you have already been replaced by artificial intelligence, you will likewise be burdened by the reading of it.
According to Stephen Hawking, “[t]he development of full artificial intelligence could spell the end of the human race.” If so, we may have only a limited time before we are replaced by machines and relegated to tending our gardens and clipping our rhododendron bushes, though hopefully that will not come in our lifetime. For insurers and insureds, delving into speculation or spheromancy as to our existential purpose once we have been displaced by machines will not pay dividends. Nevertheless, the deployment of artificial intelligence in underwriting and claims handling operations might yield significant benefits.
For example, in December 2016, a Japanese insurer reported that it was planning to reduce its payment assessment department’s human staff by 30% after introducing an artificial intelligence system to improve efficiency. It was reported that the “cognitive technology can think like a human” and “can analyze and interpret all of your data, including unstructured text, images, audio and video.” It was proposed that the artificial intelligence system would read medical certificates written by doctors, together with other documents and information necessary for paying claims, and check coverage clauses in the insurance contracts issued, so as to prevent overpayments.[2]
As novel, avant-garde, artificial human simulation systems are developed and manufactured for sale, insurers and insureds are increasingly considering their deployment in underwriting and claims handling processes in order to streamline costs and improve efficiency.
In the underwriting process:
In the claims handling process:
In the event that artificial intelligence is deployed by an insured or an insurer in underwriting and claims handling operations, a number of insurance coverage issues might arise in light of the fact that artificial cognitive technology is making decisions which are typically made by human beings (e.g., the executive officer, risk manager of an insured, or, in the case of an insurer, an underwriter and/or claims professional). Therefore, facts and matters that are typically within the subjective knowledge of a particular individual are within the “knowledge” of artificial cognitive technology.
This article explores: (i) the potential insurance coverage issues that might arise under excess casualty insurance policies such as the Bermuda Form Policy[5] in the event that artificial intelligence is deployed in claims handling or underwriting activities by an insured or insurer, (ii) the difficulties involved in dealing with those issues, and (iii) a practical analysis as to how those issues might be dealt with.
I should note, by way of caveat, that this article is not intended to provide an exhaustive list or answer to the insurance coverage issues that might be, or potentially are, implicated. Rather, it is intended to provide an overview of some of the potentially challenging issues that might arise and how they might be approached by insurers and insureds in what is still very much an innovative and developing area. Indeed, the ultimate impact of artificial intelligence upon the insurance world remains to be seen in relation to matters such as: (i) the insurance coverage issues that might arise, (ii) new insurance products and/or policies that are written, and (iii) more generally, the landscape of the insurance industry and whether human involvement will be replaced by artificial intelligence machinery, as Dr. Stephen Hawking predicts.
Some of the key insurance coverage issues that might arise may be summarized as follows:
Each of these issues is considered, in turn, below. It is assumed, for the purposes of this article, that the governing law of the policy is (as is typically the case in Bermuda Form policies) that of New York.
The intrinsic problem underpinning the issues identified above is that each involves either a subjective inquiry as to facts and matters that are within the actual knowledge and understanding of the insured or the insurer, or alternatively, an objective inquiry as to matters of which the insured or insurer ought to have been aware and how a prudent insured or insurer ought, therefore, to have acted. The fait accompli presented to any Tribunal is the fact that it is the artificial intelligence machinery which has those facts and matters within its “knowledge” and/or “understanding.” The question is how can one prove the subjective knowledge of an artificial human simulation system? Unless technology advances significantly in the very near future, it is doubtful that artificial intelligence can be called upon to give testimony to explain the facts and matters which were within its knowledge, how it acted and why. Does this mean that an insured or insurer can, therefore, never be responsible for facts and matters that are within the knowledge of a machine? Or will the software of the artificial intelligence machine provide the answers by the results of simulated operations (the accuracy of which might have to be checked and evaluated by another artificial intelligence machine employed by the opposite party)?
Moreover, the question arguably becomes even more difficult if one were to apply an objective test: how can an objective test be applied to artificial intelligence? Against what objective standard would one judge the artificial intelligence?
The starting point in determining each of the issues identified above (i.e., issues which involve an element of subjectivity or objectivity) is likely to be to personify the artificial intelligence and treat it as though it were a human being or the individual person of the insured or the insurer. The questions one has to ask are: (i) what facts and matters were within the “knowledge” of the artificial intelligence system at the time that it made the relevant decisions; (ii) what processes did it undertake in order to arrive at its ultimate decision (e.g., was a computer programming sequence or cognitive process of analysis deployed); and (iii) what caused it to ultimately act in the manner that it did?
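By way of illustration only, those three questions are, in substance, questions about the audit trail of the system. The following minimal sketch (in Python, with entirely invented field names and structure, not drawn from any actual insurer’s system) shows the kind of decision record that would allow each question to be answered after the fact:

```python
# Illustrative only: a hypothetical audit record for an AI underwriting or
# claims decision. The field names are invented; the point is simply that
# each of questions (i)-(iii) above maps onto something the system could
# have recorded at the moment it acted.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    decision_id: str
    made_at: datetime           # when the decision was taken
    inputs_considered: dict     # (i) the facts within the system's "knowledge"
    model_version: str          # (ii) the program that processed those facts
    rules_applied: list[str]    # (ii) the analytical steps actually run
    decision: str               # (iii) what the system ultimately did
    stated_basis: str           # (iii) the basis recorded for acting as it did


def record_decision(decision_id, inputs, model_version, rules, decision, basis):
    """Create the record at the moment the system acts, so that the
    questions of knowledge, process and causation can be answered later."""
    return DecisionRecord(
        decision_id=decision_id,
        made_at=datetime.now(timezone.utc),
        inputs_considered=dict(inputs),
        model_version=model_version,
        rules_applied=list(rules),
        decision=decision,
        stated_basis=basis,
    )
```

A record of this kind does not make the system’s “knowledge” subjective in any human sense, but it would at least give a tribunal something concrete to examine.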
Against this background, I proceed to address each of the issues mentioned above, in turn.
New York law on misrepresentation is governed by Insurance Law § 3105 which provides, in pertinent part, as follows:
A non-disclosure (or partial disclosure) by the insured in response to an inquiry by the insurer constitutes a misrepresentation as well as a non-disclosure. See Chicago Ins. Co. v. Kreitzer & Vogelman, 265 F. Supp. 2d 335, 343 (S.D.N.Y. 2003) (citing Mutual Benefit Life Ins. Co. v. Morley, 722 F. Supp. 1048, 1051 (S.D.N.Y. 1989) (“Morley”) (“[t]he failure to disclose is as much a misrepresentation as a false affirmative statement”)).
New York law entitles an insurer to rescind an insurance policy (which is then void ab initio) “if it was issued in reliance on material misrepresentations.” See Fid. & Guar. Ins. Underwriters, Inc. v. Jasam Realty Corp., 540 F.3d 133, 139 (2d Cir. 2008) (emphasis added); see also Interboro Ins. Co. v. Fatmir, 89 A.D.3d 993 (N.Y. App. Div. 2011).
The burden of establishing the existence of a material misrepresentation is on the insurer. In order to demonstrate materiality:
In order to conceptualize the issues that might arise, let us assume the following hypothetical:
As a matter of New York law, the critical question will be whether Bright Light can satisfy the test of materiality (i.e., inducement). In other words, can Bright Light prove that it would have acted differently (e.g., by not writing the policy at all or by imposing different terms or charging a higher premium) had the true figures been disclosed and the misrepresentation had not been made in circumstances where artificial intelligence evaluated and made the decision to underwrite the risk?
In a typical Bermuda Form arbitration, the actual underwriter who underwrote the risk would give evidence as to: the documentation and information that was provided upon placement of the policy; his or her evaluation of the risk that was being written, including the pricing of the risk; and whether he or she would have written the risk and, if so, on what terms. For the reasons set out above, this is likely to be impossible where artificial intelligence underwrote the risk and there was no specific human involvement.
In this event, in order to determine materiality/inducement, the following factors (which are similar to those one would consider if one were dealing with an actual underwriter) should be considered:
In light of the above, although the deployment of artificial intelligence to underwrite risks may make it difficult for insurers to prove materiality/inducement, these challenges may be overcome, and for the most part New York law is helpful in this regard. This is because, under New York law, evidence of an underwriter’s other similar underwriting practices is often required for the purposes of proving materiality/inducement pursuant to § 3105(c) of the New York Insurance Law. Instead of involving a subjective inquiry as to what the actual underwriter would have done, the impact of artificial intelligence would principally involve a factual inquiry into what the artificial intelligence, programmed as it was, had done in the past and would have done according to that program in relation to the specific risk in question.
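Purely by way of illustration, and assuming a deterministic, rule-based underwriting program (the rule, thresholds and figures below are all invented and are not taken from any actual system), that factual inquiry could amount to little more than re-running the same program on the figures as represented and on the true figures, and comparing its outputs:

```python
# Hypothetical sketch: testing materiality/inducement by re-running the same
# (assumed deterministic) underwriting program on the represented figures and
# on the true figures. The rule, thresholds and numbers are invented.

def underwrite(submission: dict) -> dict:
    """Toy underwriting rule: decline above a loss-ratio threshold,
    otherwise rate the risk off revenues and reported loss history."""
    loss_ratio = submission["incurred_losses"] / submission["earned_premium"]
    if loss_ratio > 0.80:
        return {"decision": "decline", "premium": None}
    rate = 0.001 + 0.002 * loss_ratio               # invented rating formula
    return {"decision": "write",
            "premium": round(rate * submission["revenues"], 2)}


as_represented = {"revenues": 500_000_000,
                  "earned_premium": 10_000_000, "incurred_losses": 4_000_000}
as_true = {"revenues": 500_000_000,
           "earned_premium": 10_000_000, "incurred_losses": 9_000_000}

# If the two outcomes differ, the program itself shows that the decision
# actually made was induced by the figures as represented.
print(underwrite(as_represented))   # e.g. written at a calculable premium
print(underwrite(as_true))          # e.g. declined, or priced differently
```

If the two outcomes differ, the program itself arguably supplies the evidence of inducement that the underwriter would otherwise have given orally.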
In this regard, insurers may face greater challenges in proving rescission under English law, pursuant to which, in addition to proving that the misrepresentation or non-disclosure was material (in the same sense as under New York law), the insurer has to prove that the notional prudent insurer would have been influenced in its decision-making process by the misrepresentation/non-disclosure. This is unlikely to be capable of proof other than through expert evidence given by a human.
The real problem would arise if artificial intelligence had replaced all human underwriters, so that no human could give cogent evidence as to how a prudent (non-human) insurer would have reacted to the misrepresentation/non-disclosure. Will it be possible to say how a prudent artificial intelligence system should have acted? What standard should apply to artificial intelligence? Can an objective standard be applied if each artificial intelligence underwriting system is unique to each insurer?
It follows that the analysis under English law might in some respects be much more challenging.
That said, the analysis might in fact be more challenging whichever law, New York or English, is to be applied. In considering materiality, it might be asked:
If these issues (mentioned above) had arisen at the underwriting stage, would they have required human intervention for the ultimate underwriting decision?
Let us assume that, in hypothetical 1 above, an artificial intelligence system filled out the Application Form on behalf of Rainbow but inadvertently (e.g., due to a software glitch or an error in the algorithm) omitted to report something which, while not asked about on the Application Form, an executive officer of Rainbow knew Bright Light needed or wanted to be told if it existed.
Under New York law, rescission for pure non-disclosure requires the insurer to prove, by clear and convincing evidence, fraudulent concealment of the material facts or bad faith with intent to mislead the insurer. See Home Ins. Co. of Illinois (New Hampshire) v. Spectrum Info. Techs., Inc., 930 F. Supp. 825, 840 (E.D.N.Y. 1996).
In these circumstances, where artificial intelligence is deployed by insureds in completing Application Forms, it will be exceedingly difficult to prove actionable non-disclosure on the part of the insured and thereby rescind an insurance policy. Of course, it is conceptually possible that an insured designed an artificial intelligence system with the specific intent to deceive insurers and be selective in the information submitted to insurers as a means of concealing material information. In that event, there is no reason to suppose that a case for non-disclosure would not be made out.
The Bermuda Form Policy makes it a condition precedent to an insured’s rights under the policy that, if any of its managers or equivalent-level employees of its risk management, insurance or law departments, or any of its executive officers, becomes aware of any occurrence “likely to involve” the policy, then the insured should “as soon as practicable” thereafter give notice in writing, directed to the insurers’ Claims Department at its specified address (Articles V(A) and V(D)).
It is established as a matter of New York law that:
However, it has been argued that, under New York law, the inquiry is a purely subjective one: when did the relevant notifying individual of the insured in fact become aware of an occurrence, and aware that the occurrence was one which (it believed) was likely to involve the policy?
Under the Bermuda Form Policy, the notice must be given “as soon as practicable” which means a reasonable time in all the circumstances. Under New York law, a reasonable time has been held to have been a matter of days in some cases and, in other cases, a matter of months. See Am. Ins. Co. v. Fairchild Indus., Inc., 56 F.3d 435, 440 (2d Cir. 1995) (holding that “delays for two months are routinely held unreasonable” and violated the requirement that notice be given as soon as practicable).
Since the notice provision is a condition precedent to coverage, non-compliance by the insured would result in a forfeiture of its rights to coverage under the policy. This is a reflection of the strict approach which the New York courts have taken to “notice of occurrence” provisions in insurance contracts. See Olin Corp. v. Ins. Co. of N. Am., 743 F. Supp. 1044, 1053 (S.D.N.Y. 1990) (“Under New York law, compliance with a notice-of-occurrence provision in an insurance contract is a condition precedent to an insurer’s liability under the policy. . . . Compliance with notice-of-occurrence requirements promotes important policy goals.”).
Let us assume that the facts of hypothetical 1 apply. In addition:
In this scenario, can Bright Light establish a late notice defense against Rainbow? Rainbow might argue that the artificial intelligence system was not aware in 2000 of both an occurrence and that it was likely to involve the policy. It only became so aware in 2017, at the time that it gave notice of the occurrence. Thus, it will say, notice was promptly and timely given and a defense based upon late notice cannot be made out.
However, this is unlikely to be right. Otherwise, an insured would be given a license to excuse its failure to give proper and timely notice based upon the “incompetence” or errors of the artificial intelligence system which it deploys.
The first question one has to ask is: what facts and information did the artificial intelligence have access to, and thus what was it deemed to know? If the artificial intelligence had, within its system, a repository (and was thus deemed to have knowledge) of all relevant documents and information relating to the DCM Claims, including the fact that $45 million in defense costs and damages had been incurred, then it might be argued that the artificial intelligence system was subjectively aware of both the occurrence and, if it had been programmed to consider the matter, the fact that it was an occurrence which was likely to involve the policy. Perhaps one has to assume that, if artificial intelligence has replaced humans, it is deemed to have the subjective awareness of a reasonable human being and that the human insured cannot hide behind the fact that the artificial intelligence system has not been designed to have all the same cognitive characteristics as humans.
However, let us assume the issue of late notice is determined by reference to a subjective-objective standard, i.e., when did the artificial intelligence system become aware of an occurrence which, objectively viewed, was one which was likely to involve the policy?
Based on the facts above, it might be argued that it is sufficient that the artificial intelligence system became aware of the occurrence in 2000 because it was aware, at that time, that $45 million had been incurred in defense costs and damages (even though it might not have been programmed to have worked out that the occurrence was one which was likely to involve the policy). The reason why it would be sufficient to prove the actual knowledge of the system is because, based upon an objective analysis of the facts, one could show through expert evidence that the occurrence was one which was likely to involve the policy in the future without reference to what the system thought or did not think. Therefore, Rainbow ought to have given notice of the occurrence in 2000. The objective analysis would not depend upon any artificial intelligence system – unless, of course, human expertise is also to be replaced by machines in the administration of justice.
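To illustrate the kind of objective analysis contemplated above (the attachment point, the loss projection and the notice trigger below are assumed purely for the purposes of the sketch and do not appear in the hypothetical), the question whether an occurrence was “likely to involve the policy” can, at its crudest, be reduced to a comparison of incurred and projected losses against the layer’s attachment point, a check the system could have been programmed to run in 2000:

```python
# Hypothetical sketch: an objective "likely to involve the policy" check the
# system could have been programmed to run in 2000. The attachment point,
# the projection and the 50% trigger are assumed purely for illustration.

def likely_to_involve_policy(incurred: float, projected: float,
                             attachment_point: float,
                             trigger_fraction: float = 0.5) -> bool:
    """Flag for notice once incurred plus projected losses reach a set
    fraction of the layer's attachment point."""
    return (incurred + projected) >= trigger_fraction * attachment_point


# $45 million already incurred (per the hypothetical), measured against a
# layer assumed, for illustration only, to attach at $100 million.
if likely_to_involve_policy(incurred=45_000_000, projected=30_000_000,
                            attachment_point=100_000_000):
    print("Flagged: written notice should be given as soon as practicable.")
```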
The occurrence definition of the Bermuda Form Policy contains the proviso that, “any actual or alleged Personal Injury or Property Damage or Advertising Liability which is expected or intended by any Insured shall not be included in any Occurrence.” (Article III(V)(2)).
Article III(L)(1) further defines the nature of expectation or intent as follows:
The requirement in (b) above (Articles III(L)(1)(b) and (c) of the Bermuda Form Policy) is subject to the further proviso that, “if actual or alleged personal injury or property damage fundamentally different in nature or at a level or rate vastly greater in order of magnitude occurs, all such actual or alleged fundamentally different in nature or vastly greater Personal injury or Property Damage shall not be deemed ‘expected or intended.’”
In order to determine whether the defense of expected or intended might apply, one must first ask whether the insured had the relevant expectation or intent.
Key inquiries that emerge are: (i) what injury or damages were expected or intended by the insured? (ii) what historical level or rate of actual or alleged injuries or damages was experienced by the insured? (iii) what level or rate of actual or alleged injuries does the insured expect or intend? (iv) what injuries or damages are “fundamentally different in nature or at a level or rate vastly greater in order of magnitude”?
The New York courts previously held that the question of expectation and intention required the application of both an objective test and a subjective test. However, the more recent trend is arguably towards a purely subjective test:
A subjective construction is arguably supported by the language of the occurrence definition and the nature of expectation or intent which is further defined in the Bermuda Form Policy as personal injury or damage which is “expected or intended by the insured” (Article III(L)(1)(a)), the focus arguably being on what “the insured” intended as opposed to that which it ought to have intended.
Let us assume the following facts:
The issue that arises is whether the artificial intelligence system can be deemed to have expected or intended actual or alleged injuries such that it can be argued that the level or rate of injuries actually experienced was expected or intended by the insured and thus not within the scope of coverage.
The obvious starting point is: what would the artificial intelligence be deemed to know? Unless and until it can be established what the artificial intelligence was deemed to know, one cannot proceed to ask whether it expected or intended the injuries and if so, at what level or rate.
In order to answer this question, one must ascertain what information the artificial intelligence had access to. If, as in hypothetical 3 above, the artificial intelligence system was collating and/or evaluating all relevant underwriting materials as part of the insured’s submissions, then presumably it would have had access to all documentation and information in respect of the Cherry clinical trials, post-sales reports and claims. This is especially so if the artificial intelligence system acted as a repository for all relevant pharmaceutical and medical material, historical underwriting documentation and loss information, including that relating to historical claims, the severity of the claims, the costs that had been incurred and the potential future costs that might be incurred. In this event, it is arguable that the artificial intelligence system is deemed to have all of this information within its knowledge.
The next question thus presented is, assuming that the artificial intelligence was deemed to have the relevant knowledge, did it expect or intend the injuries and if so, at what level or rate?
This might be more difficult to answer and will be contingent upon whether the nature of expectation or intention is an objective or subjective test.
It might be thought that a subjective inquiry will be more difficult to satisfy, for reasons similar to those set out above in relation to satisfying the test of materiality for misrepresentation. In other words, how can an artificial intelligence system give evidence as to what injuries it did expect or intend? However, one can assume that the artificial intelligence is (as its name suggests) “intelligent” in that it has a program which is designed to perform some level of cognitive analysis (as an individual would). Thus, it is conceivable that evidence could be given as to the program deployed and the analyses performed, or at least capable of having been performed, by the artificial intelligence system in order to establish what injuries it would or could be said to have expected or intended in respect of the Cherry Claims.
Notably, the term “level or rate” is not defined in the Bermuda Form Policy, and there is debate as to what those words mean. For example, does one solely take into account injuries, or does one also take into account the existence and number of claims, their severity, the liabilities that might flow from the injuries, and the potential damages that have been and might be incurred? If one were to take into account these other factors, then presumably they would all be deemed to be within the knowledge of the artificial intelligence system. To this end, it might be easier to perform an analysis as to: what the system is deemed to have known in terms of past injuries and the risk of existing and future injuries, and what it may be deemed, as an intelligent system, to have expected in terms of a level or rate of injuries to which Cherry would (or could) give rise.
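Again purely by way of illustration (the metric, the data and the tenfold threshold below are invented, and, as noted, the Bermuda Form does not define “level or rate”), the kind of analysis described above could be expressed as a comparison of the claim rate the system is deemed, from its repository, to have expected against the rate actually experienced:

```python
# Hypothetical sketch: comparing the injury/claim rate the system is deemed to
# have expected (from the historical data in its repository) with the rate
# actually experienced. The data, the per-million metric and the tenfold
# threshold are all invented; the Bermuda Form does not define "level or rate".

def claims_per_million(claims: int, units_sold: int) -> float:
    """Claims per million units sold: one invented way of expressing a 'rate'."""
    return claims / (units_sold / 1_000_000)


historical = {"claims": 120, "units_sold": 300_000_000}    # within the repository
experienced = {"claims": 950, "units_sold": 80_000_000}    # claims actually arising

expected_rate = claims_per_million(**historical)     # ~0.4 claims per million units
actual_rate = claims_per_million(**experienced)      # ~11.9 claims per million units

# Invented proxy for "vastly greater in order of magnitude": ten times the
# rate the system is deemed to have expected.
vastly_greater = actual_rate >= 10 * expected_rate
print(f"expected {expected_rate:.2f}, actual {actual_rate:.2f}, "
      f"vastly greater: {vastly_greater}")
```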
By contrast, let us assume that the nature of expectation or intention is to be determined by reference to an objective standard, i.e., whether the injury ought to have been expected or intended by the artificial intelligence system as a substantial certainty. The issue that arises is: what objective standard should apply to the artificial intelligence system? Can one apply the test of a reasonably prudent individual, given that one cannot readily anticipate what injuries a particular artificial intelligence system ought to have expected or intended, since each system presumably has its own design and program which dictate what information it has access to and is thus deemed to know? Or is one bound to apply a test by reference to a reasonably designed artificial intelligence system? The objective standard should be that of a reasonable insured in the position of the actual insured: the de-personification of the insured by its deployment of an artificial intelligence system should not alter the basic principle or modify the application of the objective test either in its favor or against it.
It is likely that the deployment of artificial intelligence in the insurance industry will be riddled with complexities, in particular in relation to the insurance coverage issues that are, or might be, implicated. The synopsis above is just a glimpse of some of the issues that might arise.
In the sixth century BC, Parmenides viewed the world as being divided into polar opposites, e.g., light/darkness, being/non-being, warmth/cold: one half of the opposition being positive, the other negative. For example, he viewed light as positive and darkness as negative.
In a similar vein, the questions for the insurance industry in the twenty-first century will be: whether the world will be divided by the opposition of artificial intelligence versus human intelligence; and if so, which part of that dichotomy will be positive, and which negative.
However, as Pope Benedict XVI pointed out, “[a]rtificial intelligence, in fact, is obviously an intelligence placed in equipment. It has a clear origin, in fact, in the human creators of such equipment.” Thus, perhaps after all, human intelligence will conquer and, more importantly in the insurance context, will ultimately be determinative.
[1] Artificial Intelligence, Oxford English Dictionary (2d ed. 1991).
[2] Insurance Firm to Replace Human Workers with AI System, Mainichi, December 30, 2016, https://mainichi.jp/english/articles/20161230/p2a/00m/0na/005000c (emphasis added).
[3] “Automating the Underwriting of Insurance Applications,” Kareem S. Aggour, William Cheetham (General Electric Global Research), American Association for Artificial Intelligence (2005), https://www.aaai.org/Papers/IAAI/2005/IAAI05-001.pdf.
[4] “XL Catlin considers how Artificial Intelligence can assist Risk Managers,” http://youtalk-insurance.com/news/xl-catlin/xl-catlin-considers-how-artificial-intelligence-can-assist-risk-managers?da7bfc41=618d105d.
[5] Any references to provisions of the Bermuda Form Policy are to the current XL-004 Policy.
[6] McKinney 2011.
[7] McKinney 2011.
[8] McKinney 2011.
[9] McKinney 2011.