The Claims Industry’s AI Problem Isn’t Adoption. It’s Design.

March 31, 2026

Ask most insurers what their AI strategy is, and they’ll tell you about efficiency: faster document processing, reduced manual review time, lower cost per claim. Those are real and worthy goals. But I think the industry is solving for the wrong thing, and the consequences show up in a number that doesn’t get talked about enough.

Only 16% of claims professionals report medium or high trust in AI-generated outputs. Just 2% say high trust.

That’s not a technology adoption problem. It’s a design problem. And fixing it requires the industry to be honest about what claims work actually is.

The work that gets buried under the work

Claims professionals spend 40–60% of their time organizing, reading, and searching through documents before they make a single decision on a claim. As volumes climb and cases grow more complex, that ratio gets harder to sustain. But the time loss isn’t really the problem. The problem is what happens to the people doing this work.

The best claims professionals I’ve met are remarkably skilled at reading people, understanding context, negotiating fairly, and building trust with claimants in some of the most stressful moments of their lives. That’s the actual job. What they’re not supposed to be is document librarians. The more time they spend buried in files, the less time they have for the work they’re actually trained for.

We started calling this the empathy gap. It’s not a technology problem. It’s a human one. And until AI is designed to close it, rather than just speed up the document work, the industry will keep missing the point.

Why most claims AI hasn’t earned trust

Most claims professionals don’t fully trust AI-generated outputs, and for good reason.

Our own research showed that only 16% report medium or high trust in AI outputs, with just 2% reporting high trust. When I share that number with people outside the industry, they’re often surprised. When I share it with claims professionals, they nod.

The distrust is grounded in real experience. Many organizations have tried to wrap a generic large language model around their claims workflows and gotten inconsistent, indefensible results. In claims, that’s not a minor inconvenience. These decisions affect real people: claimants waiting for resolution, carriers managing reserves, legal teams preparing for litigation. Accuracy isn’t optional.

What makes this worse is that medical record reviews are genuinely sensitive. Clinical language, treatment standards, regulatory context: these aren’t things a generalist model handles well. The industry needs AI that was trained on this domain, not adapted to it after the fact. There’s a meaningful difference, and claims professionals feel it immediately.

The question the industry keeps getting wrong

Here’s where I think a fundamental error gets made: too many organizations frame AI adoption as a question of whether to automate, rather than what to automate.

My belief, shaped by years of watching how claims teams actually work, is that the right division of labor is clear. AI is excellent at the things humans find tedious and error-prone: ingesting and classifying large document volumes, deduplicating records, organizing medical timelines, flagging inconsistencies, surfacing risk signals. We don’t need people to do that anymore.

But we absolutely need humans to make decisions with that structured data. The AI should surface what matters. The human decides what it means, what action to take, and how to treat the person on the other side of the claim. That’s not a limitation of AI; it’s the right design. Human oversight isn’t a fallback. It’s the point.

This distinction matters enormously for trust. When a claims professional can click through every recommendation and understand exactly how the system arrived at that conclusion (when everything is auditable and traceable), they engage with the output differently. They verify instead of dismiss. That’s how AI earns its place in a claims workflow.

What the industry actually needs to get there

The shift the industry needs isn’t faster document processing. It’s AI that gets claims professionals to a decision: with confidence, with context, and with a defensible audit trail.

That means moving beyond AI medical record summarization and organization, as valuable as those are, toward systems that surface the right risk signals at the right moment: treatment inconsistencies, standard-of-care deviations, litigation indicators, coverage questions. Not as a replacement for the adjuster’s judgment, but as the structured foundation that judgment needs to work from.

It also means the industry needs to stop treating AI as a bolt-on. The organizations seeing real results aren’t deploying AI as a layer on top of existing workflows. They’re rethinking the workflow itself, asking not just “how do we make this faster?” but “how do we make sure the right outcome is reached as quickly as possible?” That framing produces a very different product.

We’re at the early stages of what insurance-specific AI can actually do. The capabilities available today look nothing like what existed three years ago, and the gap will continue to widen. The organizations that invest now in AI built for this domain, trained on clinical language, legal standards, and the edge cases that define complex claims, will be in a fundamentally different position than those still wrapping general-purpose models around a specialized problem.

A note on what we’re building

At Wisedocs, this thinking shapes everything. Our new Claims Decision Intelligence platform is built around exactly these principles: get claims professionals to the decision earlier, give them full visibility into how conclusions were reached, and keep the human doing the work only humans should do. It’s the product we wish had existed when our customers first started asking us not just for faster documents, but for better decisions.