Underwriters’ Dilemma: Is AI a Cyber or Tech E&O Risk?
When it comes to artificial intelligence, underwriters face more questions than answers, and that lack of clarity is keeping many from venturing into the space, according to panelists at The Professional Liability Underwriting Society’s 2024 cyber symposium in New York City.
“I mean, the thing with AI is it’s not new. It’s been around since the ’50s,” said Garrett Droege, director of innovation and digital risk practice leader at IMA. “Generative AI is pretty new, but it’s still a subset of deep learning and a subset of machine learning. It’s all been with us for a long time. I think we as an industry have done a bad job at keeping pace with that, because we saw this coming 20 years ago.”
Droege, who was speaking on a panel about the intersection of cyber and tech E&O coverages, said insurers should have built models around AI and how it would likely impact the risk profile of companies when they first saw the risk coming.
“I haven’t seen, in the wild, any AI exclusions on policies,” he said. “However, I talked to a lot of people in this room—chief underwriting officers—and I know they exist. They’re sitting at a desk waiting to be deployed” on cyber and tech E&O policies.
He said one element keeping insurers from moving forward is a lack of clarity about AI risks and whether they fall under a cyber or a tech E&O policy. “I mean, we do work with a lot of technology companies—a lot of emerging technology companies—and the risk is not that clear between the two,” he said. “Sometimes it’s both. What happens if the tech fails and allows unauthorized access? Well, then that’s a tech E&O claim. What happens if there’s just a brute force entry? Then it’s a cyber claim.”
The challenge for underwriters is that these differences are often subtle, said Jeff Kulikowski, executive vice president at Westfield Pro.
“Just because a company uses technology doesn’t make it a tech E&O risk,” he said. “So, we get a lot of requests—and I’m sure it comes from the client most times—to just add every coverage they can, and that’s fine. But there’s a distinct difference between a company that uses technology to perform their professional service versus a company that’s providing a technology service for others for a fee.”
He said strategic consultants, doctors, auto salespeople and accountants, for example, all use technology to perform a professional service and should have their own E&O policies. However, they’re not providing a technological service to others like a software or telecommunications company.
“It sounds easy to distinguish, but it’s really not,” he said. “You can get into some very confusing arguments around third-party administrators, whether it’s an entirely related software or it’s a service model. There are hybrid models, but I think we all can do a better job at determining what’s technology versus a technology service. It’s very, very subtle, and it takes a little digging into.”
Underwriters should be analyzing clients’ operational models to determine what their losses could be and the types of exposures that should be covered, Kulikowski said.
“While it’s certainly the client’s responsibility to analyze as much as possible, it’s also the underwriter that needs to tell the client what information is needed on their contracts, what questions around their business model differentiate them from A, B, C, or D, and to really dive in. It’s just a different level of underwriting than your standard cyber policy. We have to understand the operational risk, not from a business interruption standpoint but from an actual financial loss standpoint.”
This is especially important as these financial losses can add up, Droege said.
“If and when the shoe drops and there’s a major event that results in AI containment that people are worried about, you want to know if this would be an exclusion on a policy around the cyber or tech E&O or both,” he said. (Editor’s note: AI containment refers to limitations being placed on AI that prevent it from advancing too far.)
“I’m sure there are people in this room that are aware of recent claims involving generative AI and social engineering claims. They’re very, very costly. That’s going to be where we’re headed,” Droege said. “We’ve got to figure out what does that mean for cyber? How do we underwrite against that? Can we underwrite against that?”
Meghan McEvoy, vice president and cyber and E&O broker at Aon, said she is constantly working with clients as this risk evolves. She added it’s important to identify the key stakeholder in a client’s organization, make sure the client is entering the AI space ethically, and find out what its ethics committee or privacy team is doing to keep up with the regulatory environment.
“It’s about making sure you’re keeping up with that AI regulatory environment that’s starting to expand and what you’re doing to protect data and protect your network,” she said.
Kulikowski agreed, adding that “AI has to be the most overused phrase in the industry right now,” and with good reason.
“Ten years ago, if you said AI to an underwriter, the first thing that they would think of is Skynet. So, now we’re sitting here thinking, ‘How can we underwrite to AI?’” he said. “When you dig into it, it’s not simple in any way. But I really think the big question [for clients] is not just do you use AI but how do you use it? Is it within your marketing platform? Do you use it to scrape data? Do you use it for running a trading platform or making investment decisions? Do you utilize it within your manufacturing process?” (Editor’s note: Skynet is the fictional AI system depicted in “Terminator” films.)
While determining how AI is being used is a step toward increasing clarity around cyber and tech E&O coverage, those aren’t the only questions on insurers’ minds when it comes to AI risks.
“There are some issues with generative AI and software development where you can use ChatGPT to code for you, and most developers do that all day every day,” Droege said. “Where’s the liability line for that if the software results in a data breach? If the code was not secure and it allowed unauthorized access, but the code was written by the AI and not the company, who’s responsible? Is it the developer of the AI model? Is it the software developer?”
Another problem Droege sees with AI is an inability to determine the truth with certainty.
“For hundreds of years, we’ve been able to use our eyes and ears to verify, ‘Yeah, I saw that, I heard that, so that is true.’ Now, that’s not the case,” he said. “There are cyber events happening right now where people are using generative AI to mask live video streams and to mask audio calls.”
He said in these cases, phone or video calls can come from cyber criminals using AI to pose as a company executive and gain information or money fraudulently.
“I think you’re going to see blockchain as a bit of a savior for the AI challenges that we have, as a general ledger that is a single source of truth that can back up, and we can verify identities,” he said. “That’s going to be the way in my opinion.”
While underwriters are figuring out how to address AI, Kulikowski said one thing is certain: He doesn’t feel that it’s ready to be its own class of business quite yet.
“I feel like everybody panics about AI, and it’s something that you need to take your time and really look through the basics of what it is—specifically generative AI—and how it’s utilized within a company,” he said. “I don’t know that it’s yet its own class of business. It’s certainly an exposure class, but it’s really up to us as the industry to define the risk as it sits in every industry, and again, how it’s utilized and how we can address that.”
This means that at some point, hesitant underwriters will simply need to take the plunge despite unanswered questions, Droege said.
“There aren’t a lot of carriers raising their hands and saying, ‘Yeah, we love new risks,'” he said. “That’s the one thing we don’t do well as an industry. We want to understand it. We want to sit on the sidelines, let someone jump in the water first, and see what plans are going to come out of it. But a lot of these emerging technologies are moving so fast. Insurance is the currency of business. These companies have to have insurance to get the contracts to build the models that we’re all relying on and build things safely.”