Ready or Not, 80% of P/C Insurers Aim to Use AI for Biz Decisions This Year

March 26, 2024

Two-thirds of property/casualty insurers plan to start using AI in operational decisions this year, according to a survey, which also reveals that nearly all insurers already using AI have been tripped up by bias challenges.

In a media statement and survey report, the Ethical AI in Insurance Consortium, which commissioned the survey of 250 P/C insurance professionals involved in actuarial, data science, underwriting, claims and AI/transformation functions, highlighted the rapid jump from just 14 percent of carriers using AI today to the 66 percent planning to use AI for “inline business decisions” in 2024.

The term “inline business decisions” refers to the strategic choices made within an insurance company’s operations to effectively manage risks, optimize profitability, enhance customer satisfaction and ensure regulatory compliance, a Consortium representative said. These decisions are typically made as part of the day-to-day operations including underwriting, pricing, claims, risk management and compliance.
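
As a concrete (and purely hypothetical) illustration of such an inline decision, the sketch below shows an AI risk score being consulted in the middle of a day-to-day underwriting workflow, with a referral path to a human; the function name, thresholds and outcomes are invented, not drawn from the survey or webinar.

```python
# Hypothetical sketch of an "inline" business decision: an AI risk score
# consulted during day-to-day underwriting rather than in a back-office
# report. Thresholds and outcomes are invented for illustration.

def inline_underwriting_decision(risk_score: float) -> str:
    """Return 'accept', 'refer' or 'decline' for a submission, given a
    real-time model score (assumed to be in [0, 1])."""
    if risk_score < 0.30:
        return "accept"   # low predicted risk: straight-through processing
    if risk_score < 0.70:
        return "refer"    # ambiguous: route to a human underwriter
    return "decline"      # high predicted risk: decline or re-rate


if __name__ == "__main__":
    print(inline_underwriting_decision(0.42))  # -> refer
```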

The jump in the potential use of AI for these decisions will bring the percentage either using or planning to use AI up to 80 percent, said the Consortium, a 17-member group that aims to drive positive change in the insurance industry by advocating for the ethical use of data and for the responsible and transparent adoption of AI for decision management.

According to the survey, carrier IT, sales and marketing departments are those most likely to be using AI today, but straight-through processing is on deck for 2024, setting the stage for automated underwriting and claims processing.

“The majority of the insurance companies that have implemented AI have already seen bias in their algorithms,” noted Rob Clark, founder and chief executive officer of Cloverleaf Analytics, a founding member of the Consortium. “It’s not something that we’re pontificating could possibly happen. It’s something that is happening, and it’s happening on a regular basis,” he said during a recent Consortium webinar. Clark was referring to the fact that 97 percent of the survey respondents who said they were already using AI have encountered challenges related to bias.

While the survey report focuses on the go-forward plans of insurers, efficiencies gained so far and concerns about bias, Clark and other Consortium members participating in the webinar went beyond the survey results to present more of a how-to guide, offering P/C carriers tips on how to begin an AI journey and opinions on best practices, including the need to continually monitor the data and business biases going into AI algorithms along with the model results coming out.

Getting Ready

“Many of you are not feeling prepared to properly implement AI,” said Jen Linton, CEO of Fenris Digital and moderator of the webinar, addressing carriers in the audience and taking a cue from the finding that only 22 percent of survey respondents said they felt prepared to do what most said they are set to do this year—implement AI for making inline business decisions. She asked panelists to offer their assessments of carrier readiness.

[sidebar]

About the Survey

To get more insight into the state of AI in insurance, the Ethical AI in Insurance Consortium commissioned a survey of 250 insurance professionals to shed light on their companies’ plans, priorities and most pressing challenges.

In addition to the questions discussed in the accompanying article, carriers were asked about the types of improvements they have seen from AI adoption (efficiency, growth, profitability, customer satisfaction, etc.), who should regulate for bias (independent bodies, state insurance departments, federal government), who should govern ethics internally, and their levels of concern about costs, data and stakeholder resistance.

The survey was administered online during December 2023 by Global Survey Research, an independent global research firm, and included responses from actuarial, data science, underwriting, claims and AI/transformation professionals in P/C insurance companies with 100 or more employees and direct written premium of more than $100 million.

Although the moderator of a webinar conducted in conjunction with the release of the survey described the respondents as mid-to-large insurers, only a small percentage were giant insurers—with 4 percent having more than 10,000 people and just 6 percent writing more than $5 billion in premiums. Forty-two percent write between $501 million and $1 billion, and another 26 percent write between $1 billion and $5 billion.

[/sidebar]

Doug Benalan, chief information officer of CURE Insurance, identified organizational culture, data quality and talent as areas where carriers may not be fully ready for rapid AI transformation. “The AI transformation is a very collaborative approach, across the organization, of various domains and disciplines. It is not just technology alone,” he said, referring to the cultural component.

“It’s not something you just put in. There has to be a culture within the organization,” Cloverleaf’s Clark agreed, adding that data quality, management and oversight are also important. “It’s not a set-it-and-forget-it [exercise]. You really need to stay on top of your AI models once you’ve put them in place. It’s a constant evolution over time.”

Speaking from the experience of implementing AI for claims management at CURE, Benalan noted that carrier data may be segmented among multiple systems across an organization. “Data quality, cleanup and mapping between the system[s] would be additional complexities,” he said.

To address a lack of talent, CURE supplemented its in-house development team with the Cloverleaf Analytics team, he reported.

Abby Hosseini, chief digital officer of Exavalu, which offers digital advisory and technology services to insurers, suggested that carriers are ready for experimentation. AI has been around the insurance industry for 20-25 years, he said, referring to AI point solutions. “Recent innovations have made it possible to combine multiple AI disciplines to really leverage [the] abundance of data and the opportunity to change the operating models that we apply to underwriting and claims.”

“AI is not complicated. You’ve been using it in everyday life in many areas—from spam control and speech recognition to ICR/OCR scanning of documents. The difference this time is the power of combining multiple disciplines of AI to create new possibilities. And just like a drone combines mechanical engineering, aeronautics, industrial IoT knowledge and machine vision together to create this powerful device, new advancements in AI can significantly change the way you operate—the way you adjudicate claims or underwrite. These advancements can combine to fundamentally change the operating model,” stated Hosseini, who is also a former CIO for Mercury Insurance Group.

However, just as there are limitations on the use of drones, “using AI for your business ethically and responsibly will require guardrails that are designed to make the progress go faster rather than to put brakes on it,” he said. “I would focus on coaching your teams [as] to what they shouldn’t be doing, and let them experiment a little bit,” he advised. Also, “keep track of what you are doing and document all your decisions,” he said, referring to guardrails being set by insurance regulators, which were discussed by another panelist, Paige Waters, a partner at law firm Locke Lord.

Waters noted that even in states that haven’t adopted the NAIC’s Model Bulletin on AI or haven’t put in place their own specific regulations (as Colorado has for life insurers), regulators will be looking to the insurance companies to maintain the appropriate AI governance, auditing and vendor management.

“There are existing statutes out there already. So, to the extent that insurance regulators come in and they review something in a market conduct exam or some type of data call, there are existing statutes out there that govern unfair and deceptive trade practices, model governance disclosures, unfair and deceptive claim practices. All of those existing statutes and regulations still apply,” she said. That means regulators “are going to want to see the written policies and procedures surrounding AI governance. They’re going to want to see that insurance companies are keeping written inventory of all of the AI that’s being used. [And] insurance regulators are also going to hold the insurance companies responsible for the AI use of their third-party vendors and third-party contractors,” she said.

“Document, document, document,” she advised, referring to carrier AI models, adding that third-party vendor contracts need to incorporate certain requirements for auditing and documentation as well.
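
A written AI inventory of the kind Waters describes could be as simple as a structured record per model. The fields below are an assumption about what a regulator might ask to see, not a published standard, and the sample entry is invented.

```python
# Minimal sketch of a written AI inventory record; field names are
# assumptions, not a regulatory standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIModelRecord:
    name: str
    business_use: str               # e.g., "claims triage", "pricing support"
    owner: str                      # accountable business owner
    vendor: str | None              # third-party vendor, if any
    data_sources: list[str]
    last_bias_audit: date | None
    audit_findings: list[str] = field(default_factory=list)

inventory: list[AIModelRecord] = [
    AIModelRecord(
        name="claims-triage-v2",
        business_use="claims triage",
        owner="VP Claims",
        vendor="ExampleVendor Inc.",   # hypothetical vendor name
        data_sources=["FNOL notes", "policy system"],
        last_bias_audit=date(2024, 1, 15),
    )
]
```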

Clark recommended that carriers use any guidance from regulators as a starting point for AI governance and risk management. “That is a minimum,” he said, advising carriers to take those requirements and build on top of them to continually monitor, review, audit and compare outcomes.


Squashing Bias

Pointing to the diverse perspectives of the panelists, Linton turned to the problem of bias. “This is usually a very key element to try to squash or eliminate or not introduce bias—to start from a lot of different perspectives, and do that from the top down,” she said.

Later in the webinar, Benalan talked about different types of diversity to include in planning and review teams, at one point noting that having PIP, collision and other coverage experts involved in an AI auto claims management effort is a type of diversity. At another point, he spoke about having teams of data scientists and engineers supplemented with business team partners, available to monitor and audit results and correct models as needed.

Hosseini stressed the need to continuously monitor AI programs for bias creeping in over time. “Model drift and the external world will change the data, the underlying assumptions in your algorithms. You have to constantly be on top of reviewing these algorithms,” he said.
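
One generic way to stay on top of drift, though not a technique the panel named, is a distribution check such as the population stability index (PSI) on model scores. The sketch below uses simulated data and a commonly cited rule of thumb for the threshold.

```python
# Generic drift check via the population stability index (PSI); the data
# is simulated and the 0.2 threshold is a common rule of thumb, not a
# technique prescribed in the webinar.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a recent sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)  # score distribution at deployment
recent = rng.normal(0.5, 1.0, 5_000)    # scores today: distribution shifted
print(f"PSI = {psi(baseline, recent):.3f}")  # above ~0.2 is often read as drift
```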

Benalan identified three sources of bias: data, algorithms and business bias. The algorithms are “programmed by humans, right? As humans, our bias is shaped by how we perceive based on our environments and experiences. AI adapts these experiences in the form of data selected or provided by humans,” he said.

He explained the reference to “business bias,” saying that he sees this as “how the business acts upon the results of the AI” to make business decisions. “The AI might be designed to set higher premiums for certain customers based on factors that are not directly related to their driving risk, such as their education, occupation, credit score, etc. This could result in higher premiums for certain groups of people even though they are very safe drivers [and] lower premiums for certain groups of people even though they are high-risk drivers,” he said.
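
One way to surface the pattern Benalan describes would be to compare modeled premiums across groups after controlling for driving risk. The columns, occupations and numbers below are invented for illustration, not CURE's data or method.

```python
# Hypothetical check for "business bias": among drivers with similar
# driving risk, does the modeled premium vary with a non-driving
# attribute such as occupation? All data here is invented.
import pandas as pd

drivers = pd.DataFrame({
    "claims_5yr":    [0, 0, 0, 0, 2, 2, 2, 2],   # proxy for driving risk
    "occupation":    ["A", "A", "B", "B", "A", "A", "B", "B"],
    "model_premium": [900, 910, 1050, 1040, 1400, 1390, 1600, 1610],
})

# Within each driving-risk stratum, compare mean premium by occupation.
by_group = drivers.groupby(["claims_5yr", "occupation"])["model_premium"].mean()
print(by_group)
# If equally safe drivers (claims_5yr == 0) pay materially different
# premiums by occupation alone, the model may be acting on a factor
# unrelated to driving risk.
```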

Clark offered Google’s AI image engine as an example of the human impact on AI, noting that Google recently decided to take it down because it was heavily biased to produce the types of images deemed appropriate “in the eyes of the developers.” In this case, according to Google’s account, human developers were attempting to make sure the images reflected the diversity of the world’s people and would not reveal any gender, race or ethnic bias. But the result was the creation of historical images that placed people of color and women in scenes that historically involved only white men, according to various news reports and critical posts on the X platform.

“As you build [AI models], you have your own bias. So, you can actually inadvertently put that bias into the technology. And then the result will be biased based on whether it’s your opinion or your business’ perspective—and not necessarily what’s going on in the marketplace, or [for] your insureds and how they’re perceiving information,” Clark said.

Benefits and Best Practices

The survey results and webinar presenters also revealed information on the benefits of AI adoption, with Benalan, for example, reporting that CURE’s AI model drove a 60-70 percent increase in productivity for the claims management team. In addition, according to the Consortium’s carrier survey, 68 percent of the carriers already using AI for inline business decisions are satisfied with the return on investment.

[sidebar]

About EAIC

The Ethical AI in Insurance Consortium aims to foster responsible and transparent adoption of artificial intelligence for decision management in the insurance sector, bringing together insurance carriers, technology and solutions companies, regulators, and other key influencers.

Core objectives include the creation of ethical technology development and operations guidelines; advocacy for insurers and the insured; collaboration and knowledge-sharing; and standardization.

In addition to releasing the results of its carrier survey earlier this month, the EAIC published a Code of Ethics, a set of 12 principles addressing transparency, bias minimization, third-party audits and standardized disclosure of AI use, intended “to serve as a navigational chart for insurers and InsurTechs.”

[/sidebar]

Coaxed by Linton to share some tips on how to start an AI journey and best practices, Benalan reiterated the need for an organizational culture with a “very positive and proactive attitude toward AI adoption.” He continued: “That means you know there is a continuous improvement or continuous learning. You have the ecosystem for all your resources to learn and benefit out of that… And your business and operation teams always have to collaborate…The technology alone cannot pull it through.”

Benalan also described a brainstorming process at CURE that segmented use cases into two streams: AI making decisions delivered directly to end customers, or AI providing data to the business, where business teams make decisions based on AI output. CURE ultimately implemented an AI model that informs the claims team. “Before we run, we want to make some baby steps to build the AI framework and the necessary ecosystems so that our team can learn and get mature before driving toward the complex solutions.”

Similar to Benalan, Hosseini distinguished between use cases where AI supervises humans, and those where humans supervise AI. “In insurance, for the foreseeable future, we are more in [the situation] of augmenting human decisions with AI, and empowering employees to lower their cognitive load to accelerate accurate decision making,” he said, describing copilots that lower burdens and increase confidence in using AI.

“Whether it’s underwriting questions or claims determinations or ratemaking, [AI] doesn’t stand alone,” Clark agreed later.

Hosseini also suggested that carriers can use AI to retrospectively review, say, the last 60 days of claims or last 60 days of underwriting. If AI finds exceptions or lack of adherence to guidelines, insurers can leverage that knowledge to improve operational efficiency. “This will provide huge opportunities to lower risk and allow management to monitor front office, middle office and back office operations using AI,” he said.
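
A minimal sketch of that lookback follows, with an invented placeholder guideline (initial reserve set within five days) standing in for a carrier's real adherence rules; the article's publication date is used as the review date so the example is reproducible.

```python
# Sketch of Hosseini's lookback idea: pull the last 60 days of claims and
# flag ones that appear not to follow guidelines. The adherence rule is a
# placeholder; a real review would apply the carrier's own guidelines.
from datetime import date, timedelta

claims = [
    {"id": "C-1", "closed": date(2024, 3, 20), "reserve_set_days": 1},
    {"id": "C-2", "closed": date(2024, 3, 1),  "reserve_set_days": 9},
    {"id": "C-3", "closed": date(2023, 12, 5), "reserve_set_days": 2},
]

cutoff = date(2024, 3, 26) - timedelta(days=60)
recent = [c for c in claims if c["closed"] >= cutoff]

# Placeholder guideline: initial reserve should be set within 5 days.
exceptions = [c["id"] for c in recent if c["reserve_set_days"] > 5]
print("Guideline exceptions in last 60 days:", exceptions)  # ['C-2']
```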

Offering an example of a way to gain economic efficiencies, Clark described an AI model that would review “terabytes of data” for personal auto and flag a driver as a person for whom a motor vehicle record (MVR) should be ordered.
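
In code, Clark's example reduces to a threshold rule on a model score; the score field, cutoff and driver data below are all hypothetical.

```python
# Hypothetical version of Clark's example: flag drivers whose model score
# suggests an MVR is worth ordering, so the carrier pays MVR fees only
# where the data suggests it matters.
drivers = [
    {"driver_id": "D-01", "risk_score": 0.12},
    {"driver_id": "D-02", "risk_score": 0.81},
    {"driver_id": "D-03", "risk_score": 0.64},
]

MVR_THRESHOLD = 0.60  # made-up cutoff for illustration

to_order = [d["driver_id"] for d in drivers if d["risk_score"] >= MVR_THRESHOLD]
print("Order MVRs for:", to_order)  # ['D-02', 'D-03']
```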

Turning to best-practice tips, Benalan said CURE is not using any socioeconomic, geographic or personally identifiable information in datasets for AI models because that could potentially result in unfair discrimination.

Clark agreed with the approach. “Even the simple idea of a name—AI could start looking at names and start saying, OK, this particular name is higher risk, so I’m going to charge more.” Carriers may think, “it’s just a name,” but if the AI model “sees a trend with a certain name historically in the data, it may be making recommendations purely on names—and that could be a bias.”

“You want to pull that PII out, or at least mask it,” he said, also suggesting that this might be needed for street addresses in many (but not all) situations.
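
A sketch of what pulling or masking PII might look like on a training extract appears below. The one-way hashing choice is one common option rather than anything Clark specified, and the column names and records are invented.

```python
# Sketch of Clark's "pull that PII out, or at least mask it" advice on a
# model training extract. Dropping vs. hashing is a design choice; hashing
# keeps a join key without exposing the raw value. Columns are invented.
import hashlib
import pandas as pd

policies = pd.DataFrame({
    "name": ["Jane Roe", "John Doe"],
    "street_address": ["12 Oak St", "9 Elm Ave"],
    "vehicle_age": [4, 11],
    "prior_claims": [0, 2],
})

def mask(value: str) -> str:
    """One-way hash so raw names never reach the model's feature set."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

model_input = policies.copy()
model_input["record_key"] = model_input["name"].map(mask)  # join key, not a feature
model_input = model_input.drop(columns=["name", "street_address"])
print(model_input)
```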

“Building transparency is very important,” Benalan stressed, concluding his remarks on best practices. “All the people on the team, meaning all the stakeholders, including business and operations, should be able to understand what information is behind this prediction in a non-technical way. That way, we all can agree with the underlying parameters used in this prediction before moving forward,” he said.

In conjunction with the webinar, the Consortium distributed a graphic of a strategic framework for using AI without bias, created by the CURE CIO. The graphic depicts a cyclical process that starts with planning, “like any other project,” and moves to data preparation and cleanup, and then modeling. “But it does not end in the deployment. It has to have a continuous monitoring and auditing process attached to it,” Benalan said, walking through the key items displayed on the circular flowchart.

When Linton asked him to describe ways to ensure that AI is fair, Benalan discussed a type of audit process: reviewing claims denied based on an AI model and crosschecking those denials against demographic groups. If certain groups are more likely to have claims denied, then there is likely bias in the model, he said.
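
That crosscheck can be sketched as a simple denial-rate comparison across demographic groups held out for auditing (not used as model inputs). The four-fifths-style disparity threshold is a common fairness heuristic, not one cited in the webinar, and the data is invented.

```python
# Sketch of the audit Benalan describes: compare AI-driven denial rates
# across demographic groups reserved for auditing. Threshold and data
# are illustrative only.
import pandas as pd

audited = pd.DataFrame({
    "group":  ["X", "X", "X", "X", "Y", "Y", "Y", "Y"],
    "denied": [0,   1,   0,   0,   1,   1,   0,   1],
})

rates = audited.groupby("group")["denied"].mean()
ratio = rates.min() / rates.max()
print(rates)
print(f"denial-rate ratio = {ratio:.2f}")
if ratio < 0.80:  # "four-fifths rule" style heuristic
    print("Disparity flagged: review the model for bias.")
```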

Hosseini offered additional recommendations, including setting up new roles like “Ethical Technology Analyst.”

Clark commended the CURE team for a best practice of getting the management on board in the beginning before even implementing AI. “This is not a thing that we’re just going to set and forget. It’s something that we have to monitor. There’s going to be resources that have to be dedicated to this, that have to work with it,” he said, explaining the need to get everyone on the same page on this from the start.

Benalan cautioned carriers about a tendency to put a lot of energy into development and implementation, and then to reduce the bandwidth of resources and people in the final auditing and monitoring phases. Carriers should expect there will be outcomes from testing that need to be addressed on the AI journey. “You should have that energy” throughout, he advised.

A survey finding along these lines was disappointing to the report’s authors, who noted that 42 percent of carriers surveyed believe that auditing for bias is only “slightly important.” Equally concerning, the survey revealed that 43 percent said they didn’t think educating employees about AI biases and ethical considerations was important.