Global | Publication | June 2018
The insurance industry is exploring new business models that rely on mining large data sets to identify customers, price risk and analyse claims. Not only does the application of artificial intelligence (AI) have the potential to reduce costs by reducing headcount, it also has the potential to improve the accuracy and speed of decision-making and to transform business processes. Ultimately, the benefits of AI should be significant for the insurance industry’s customers.
What do we mean by AI? In broad terms, AI is the field of computer science that includes machine learning, natural language processing, speech processing, robotics and similar automated decision-making. AI enables machines to carry out tasks that would otherwise depend on grey matter. These processes span a spectrum of sophistication, including making decisions of varying degrees of complexity.
Applications of AI might include the use of ‘chatbots’ to assist customers with insurance applications online and guide them towards tailored products and services. Over time, chatbots can learn from each interaction to enable them to provide better and more sophisticated products as they develop in a real-world environment. Machines can hold far more information about the product and its suitability for the customer than a sales agent can. Customer on-boarding, claims documentation and customer records can be easily stored, searched and analysed with minimal human interaction. This means existing insurance approaches to customer information can be transformed to deliver a much faster and more streamlined experience, often bringing new insights into the customer’s risk profile. Through Big Data analytics, data about the customer can be sourced from a far greater number of sources and analysed with limited or no human intervention.
Connected devices, or the ‘Internet of Things’, give insurers accurate risk information that helps underwriters understand the risks their customers face in far greater depth, as the use of the insured asset (whether a life, vehicle or property) can be monitored over the period of cover. Such devices allow insurers to develop a ‘scorecard’ of customers’ risk profiles, thereby providing more accurate pricing at inception and renewal. Claims handling can be undertaken without human intervention, as evidence can be scanned, assessed and paid out (or denied) by machines. Trends in claims can be identified by AI in conjunction with Big Data analytics, leading to far more accurate risk management information and more effective fraud prevention.
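As an illustration of the ‘scorecard’ idea, the short Python sketch below combines connected-device signals into a single risk score that adjusts a base premium. The feature names, weights and pricing rule are hypothetical and chosen purely for illustration; they do not reflect any insurer’s actual rating model.

# Illustrative only: a hypothetical telematics "scorecard" combining
# connected-device signals into a risk score used to adjust a base premium.
# Feature names, weights and the pricing rule are invented for this sketch.
from dataclasses import dataclass

@dataclass
class TelematicsSummary:
    miles_per_year: float         # annual mileage recorded by the device
    hard_brakes_per_100mi: float  # harsh-braking events per 100 miles
    night_driving_share: float    # proportion of miles driven at night (0-1)

def risk_score(t: TelematicsSummary) -> float:
    """Weighted sum of (hypothetical) risk indicators; higher means riskier."""
    return (
        0.4 * min(t.miles_per_year / 12_000, 2.0) +
        0.4 * min(t.hard_brakes_per_100mi / 5.0, 2.0) +
        0.2 * t.night_driving_share
    )

def adjusted_premium(base_premium: float, t: TelematicsSummary) -> float:
    """Scale the base premium up or down around a 'neutral' score of 1.0."""
    return round(base_premium * (0.8 + 0.4 * risk_score(t)), 2)

careful = TelematicsSummary(6_000, 1.0, 0.05)
risky = TelematicsSummary(18_000, 8.0, 0.40)
print(adjusted_premium(500.0, careful))  # 458.0 - below the base premium
print(adjusted_premium(500.0, risky))    # 664.0 - above the base premium

The point of the sketch is simply that a continuously updated score, fed by device data over the period of cover, can replace the static proxies (such as age or postcode) used at inception.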
All of these new applications will rely on access to data. This might be customer data of varying degrees of sensitivity and might also be data gleaned from third party sources, such as social media or public records. In themselves, such data access and analysis would seem to be a good thing, as in theory they lead to better products for customers which can be purchased at a keener price.
Why, therefore, has Tesla and SpaceX founder Elon Musk called work on AI applications “summoning the demon”?1 Musk believes that, without regulation, some AI applications will be harmful to humanity. So, what are the risks, and how might they be managed by firms in advance of prescriptive regulatory requirements?
In 2016 the UK’s Financial Conduct Authority (FCA) launched a Call for Inputs on Big Data in retail general insurance to better understand how large data sets and Big Data analytics were being used in the general retail insurance market, and to understand the risks such applications might pose to consumers. By Big Data, the FCA means the use of new and expanded datasets and data; the adoption of new technologies to generate, collect and store data; advanced data processing techniques; sophisticated analytical techniques; and the application of this data in business decisions and activities.
The feedback from the Call for Inputs identified a concern that increasing risk segmentation could be a problem alongside greater price differentiation between customers. In addition, the increasing use of third party data sources could be a cause for concern, especially in the light of recent data privacy scandals linked to social media companies.2
Risk segmentation occurs where customers who would currently be in the same risk categories are split into an increasing number of risk groups (as access to greater information enables more nuanced risk profiles to be drawn). The result is that, rather than customers sharing risks in a large pool of people, they share risks among a much smaller cohort. Customers who are identified to pose higher risks will pay a much higher premium as the burden of their risk is not shared among a sufficiently large number of people.
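A simple numerical sketch of this pooling effect follows. The loss figures and group sizes are invented purely to show how splitting one pool into finer risk segments shifts premiums; each premium here is just the expected annual loss averaged over the people who share the pool, ignoring expenses and profit margin.

# Illustrative only: hypothetical figures showing how finer risk
# segmentation moves premiums away from a shared, pooled price.

# One large pool: 900 low-risk people (expected loss £100 each) and
# 100 high-risk people (expected loss £1,000 each) share the cost.
low_n, low_loss = 900, 100.0
high_n, high_loss = 100, 1_000.0

pooled_premium = (low_n * low_loss + high_n * high_loss) / (low_n + high_n)
print(f"Single pool, everyone pays:      £{pooled_premium:.2f}")  # £190.00

# Segmented pools: each group now pays only for its own expected losses.
print(f"Segmented, low-risk group pays:  £{low_loss:.2f}")        # £100.00
print(f"Segmented, high-risk group pays: £{high_loss:.2f}")       # £1000.00

On these invented numbers, segmentation cuts the low-risk premium roughly in half while the high-risk premium rises more than fivefold, which is the redistribution of cost that the concern about ever-smaller cohorts describes.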
Furthermore, as risks are increasingly segmented, more people may be priced out of the market. This could happen for a variety of reasons: there is insufficient data available about a person or asset to enable them to benefit from pricing based on large data sets; they refuse to share certain types of personal information; or information about them (whether accurate or not) identifies them as a high risk. The opposite is also the case where those who currently struggle to find cover may benefit from the opportunity to provide more information about their risk profile (for example, young drivers who are willing to use telematics devices to demonstrate that they are careful drivers).
A real-world example of risk segmentation is flood risk, where postcode underwriting has left some people unable to obtain insurance cover for their homes; the UK Government responded by establishing Flood Re to ensure that no-one was without access to cover.
Increasing price differentiation occurs where firms are able to charge customers different premiums because of factors other than risk. Access to greater sources of data increases firms’ ability to find price sensitivities. For example, dual pricing exists where long-standing customers who do not switch are charged more than more price-sensitive customers. There is also the potential to look at customers’ data sets and infer the price a customer is willing to pay, which may differ between two customers with identical risk profiles. Customers who enter a number of different variables into a price comparison website may be offered cheaper premiums than customers who do not. AI in conjunction with Big Data analytics enables firms to get a much better understanding of how price-sensitive a customer is and to use this and other data sets to their advantage.
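The sketch below illustrates the dual-pricing point: two customers with the same risk-based price receive different quotes once a hypothetical price-sensitivity signal (tenure and shopping-around behaviour in this example) is layered on top of the risk-based price. The signals and loadings are invented for illustration and are not any firm’s actual pricing logic.

# Illustrative only: two customers with the same risk-based price end up
# with different quotes once a hypothetical "price sensitivity" signal
# (tenure, shopping-around behaviour) is used to set a margin.

def quote(risk_based_price: float, years_as_customer: int,
          used_comparison_site: bool) -> float:
    """Apply a loading that falls for price-sensitive shoppers and rises
    for long-standing customers who rarely switch (the 'dual pricing'
    pattern described above). Loadings are invented for this sketch."""
    loading = 1.0
    if used_comparison_site:
        loading -= 0.10                                # shopped around: keener price
    loading += min(years_as_customer, 10) * 0.02       # loyalty 'penalty'
    return round(risk_based_price * loading, 2)

# Identical risk, different inferred price sensitivity.
print(quote(400.0, years_as_customer=0, used_comparison_site=True))   # 360.0
print(quote(400.0, years_as_customer=8, used_comparison_site=False))  # 464.0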
There are also a number of ethical and legal issues that the insurance industry will need to work through as AI is used in products and services that the insurance industry’s customers themselves offer for use by consumers or businesses. This will include liability considerations in relation to AI-enabled products (for example, hardware or software, or AI-enabled professional services such as legal services, in addition to more obvious cases such as autonomous vehicles).
Should the application of AI in insurance be regulated? If so, would such regulation be sector-specific in relation to the use of the technology in undertaking particular activities, or would there be generally applicable regulation of the technology itself, regardless of sector or specific application? There is currently no consensus on the issue globally.
From a UK perspective, in October 2017 the UK Government published recommendations from an independent review written by Professor Dame Wendy Hall and Jérôme Pesenti, Growing the Artificial Intelligence Industry in the UK.3 The recommendations did not include the creation of a national regulator for the application of AI, instead preferring that a number of organisations take responsibility for different aspects of the challenges posed by the use of AI. Among the recommendations were:
In April 2018 the House of Lords Select Committee on AI published a report, AI in the UK: Ready, Willing and Able?, in which the Committee proposed five principles that could become the basis for a shared “ethical AI framework”, and concluded that “while AI-specific regulation is not appropriate at this stage, such a framework provides clarity in the short term, and could underpin regulation, should it prove to be necessary, in the future. Existing regulators are best placed to regulate AI in their respective sectors.”
Clearly, how data is processed and questions of consent are matters for the UK Information Commissioner’s Office, which the UK Government recognises will be tasked with monitoring how personal data is used, in line with its role as the body responsible for the protection of information rights in the UK.
The FCA has statutory authority to ensure that markets work well, with specific focus on the protection of consumers, market integrity and the promotion of competition. Where the application of AI impacts upon consumers (for example if AI makes it harder for vulnerable customers to get cover) the FCA can act to ensure that firms’ use of AI does not conflict with the obligation to treat customers fairly. Similarly, the FCA can take action against the application of AI where it results in competition concerns, for example should the concentration of data sources result in monopolies.
The UK’s Prudential Regulation Authority, which is responsible for regulating insurers from a solvency perspective, has largely been silent on how it views the risks AI might pose to the wider UK economy. It is clear that AI, if deployed in strategic thinking at board level and for capital structuring and solvency purposes (such as reinsurance purchasing), could give rise to systemic issues if it makes poor decisions that humans do not understand.
What is evident is that AI will be applied in many areas of insurance for different purposes. Until guidance is produced by the various organisations with the remit of regulating how AI is applied, firms would be well advised to develop their own internal governance structures for the use of AI in their business.
The following are some possible elements of an internal governance policy for the application of AI within insurance:
There is limited available guidance on how best to ensure that customers are treated fairly by the application of AI to insurance business models. The FCA is yet to make any recommendations. Until official guidance is published, firms should determine their own internal policies to ensure that their usage of AI is appropriate for their target market and delivers fair outcomes for customers.
1. Elon Musk was interviewed in October 2014 at the MIT AeroAstro Centennial Symposium.
2. ‘Facebook’s Zuckerberg responds to Cambridge Analytica scandal’ (Financial Times, March 21, 2018).
3. Growing the artificial intelligence industry in the UK, Professor Dame Wendy Hall and Jérôme Pesenti (October 15, 2017).
4. An idea proposed by data scientist Cathy O’Neil in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Penguin, 2016).
5. Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, p. 14.