Adapting Claim Investigations for AI-Driven Fraud
Insurance fraud is not a new problem. What has changed is the ease with which a fraudulent claim can be made to look legitimate. Tools that once required technical skill are now free and usable by anyone in minutes. Artificial intelligence now allows claimants and organized rings to easily produce fraudulent photographs, invoices, medical records and entire identities that can pass an initial review without difficulty.
The numbers reflect how quickly this has developed. The Coalition Against Insurance Fraud estimates insurance fraud costs the U.S. economy $308.6 billion annually. Synthetic identity fraud in the financial sector has grown from roughly $8 billion in 2020 to more than $30 billion as of June 2025, according to the Reinsurance Group of America. Admiral, one of the United Kingdom’s largest motor and home insurers, reported a 71% year-over-year increase in fraud in 2025, driven in part by AI-generated evidence.
AI fraud trends have emerged rapidly within the past few years. Deepfakes were the phenomenon of 2024, but the synthetic claim has quickly become the defining fraud trend. A synthetic claim is not just a single fabricated document. Rather, it is an entire claim assembled from believable yet fraudulent components. Fraudsters combine real data with fabricated information attached to a fictitious persona. Some go so far as to create valid Social Security numbers to attach to these personas.
This is a major shift from even recent years, when fraud typically meant a single fraudulent document, and that shift matters. Investigations historically began with the premise that most of the file was genuine, but that can no longer be a safe starting point. At the same time, assuming the opposite runs the risk of improperly denying a claim because something “feels off,” which creates bad faith exposure for carriers. Professionals must completely change how they approach claim investigations and decisions.
Enhancing the Document Review Process
That is not to say professionals should abandon all traditional approaches. Document review still matters. However, it cannot end there. Every investigation should begin with original files, not screenshots or forwarded attachments. By the time a document reaches the claim file in that form, someone has already controlled what the carrier sees. Native files preserve the information that answers the important questions: creation dates, device data, GPS coordinates and edit history.
Once the original file is received, professionals must closely examine all documents. AI-generated images frequently contain elements that do not hold up: AI notoriously produces shadows that fall in the wrong direction, reflections that do not match the environment, and background text that breaks down into unreadable characters. Those issues are easy to miss when review stays at the surface.
Even upon close examination, professionals must verify each document against its source. If an invoice or estimate is submitted, confirm it directly with the vendor. AI-generated documents frequently pair real company names with fabricated details: employee names, phone numbers and invoice specifics are filled in with information that appears correct but is not. A quick telephone call to the vendor to verify the document’s legitimacy can save carriers from paying out fraudulent claims.
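When a vendor can supply its own copy of a submitted document, one quick supplementary check is comparing cryptographic hashes of the two files: any alteration, however small, changes the digest. The sketch below is a minimal illustration, not a substitute for the verification call described above, and the function names and file paths are hypothetical.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def files_match(submitted: str, vendor_copy: str) -> bool:
    """True only if the submitted document is byte-for-byte identical
    to the copy obtained directly from the vendor."""
    return sha256_of(submitted) == sha256_of(vendor_copy)
```

A mismatch does not itself prove fraud; it only establishes that the submitted file differs from the vendor's copy, which is a concrete fact an investigator can then run down.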
The value of native files and verification can be best illustrated by example. In a recent property claim, a contractor submitted photographs supporting an alleged hail loss. The images appeared consistent with the reported damage, and nothing raised concern on initial review. When the carrier requested original photographs, the photographs’ timestamps showed they were taken months before the alleged date of loss. After pressing the issue, the contractor ultimately confessed that he was attempting to attribute old damage to a new date of loss. This fraud would not have been uncovered without the carrier insisting on the native photographs.
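The timestamp comparison that unraveled the hail claim above can be sketched in a few lines. This is a simplified illustration only: it uses the filesystem modification time as a stand-in for the EXIF capture date an investigator would actually pull from the native photograph with a tool such as ExifTool, and the function name is hypothetical.

```python
from datetime import datetime, timezone
from pathlib import Path

def predates_loss(photo_path: str, date_of_loss: datetime) -> bool:
    """Flag a photograph whose timestamp is earlier than the alleged
    date of loss. The file's modification time stands in here for the
    EXIF capture date read from the native file."""
    mtime = datetime.fromtimestamp(
        Path(photo_path).stat().st_mtime, tz=timezone.utc
    )
    return mtime < date_of_loss
```

A photograph "taken" before the alleged loss is a red flag, not proof: as the following paragraphs note, timestamps can themselves be manipulated, so a hit from a check like this is a starting point for further verification rather than a conclusion.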
That example is just one of many showing how important the original file and metadata can be. Even so, professionals must remember that metadata itself can be manipulated. Programs that alter timestamps, strip GPS data and rewrite device information are widely available and require minimal technical skill to use. Accordingly, professionals should be using additional tools such as reverse image searches, which can trace a claim photograph to prior losses, stock image libraries, or social media posts from before the alleged date of loss. A single match is often enough to show the claim is fraudulent.
Inspections, Interviews and Examinations
With the ease of creating fraudulent AI documents, professionals can no longer take at face value what the documents depict. They have to do more, and that is where the investigation must move away from the desk. Site inspections and in-person interviews carry more weight now than ever. Professionals should also seek out witnesses: neighbors, tenants and nearby businesses often have pertinent information about an alleged loss and are willing to talk when approached in person, even though they will not come forward on their own or respond to calls or emails.
If, after thorough review, verification and inspection of all claim materials, a professional believes questionable documentation or AI-generated materials may be at play, the next step should be an examination under oath. During an examination under oath, it is imperative to begin broadly, allowing the claimant to give a full account. Then narrow the scope and lock in the small details. The broad account establishes the claimant's narrative; inconsistencies then surface when that narrative is questioned in great detail.
An AI-fabricated claim can sustain a general narrative, but it struggles with specificity. Ask about dates, times, locations, devices used and the identities of everyone involved. Ask who took the photographs, who prepared the supporting documents, and when that occurred. These answers will not only start to paint the real picture but also begin destroying the claim when paired with metadata.
Good Faith Remains Essential
While fighting fraud in the AI landscape certainly requires great attention to detail, investigative professionals must also keep legal and ethical considerations at the forefront. Adjusters and carriers alike operate under good faith obligations, and those obligations do not disappear just because fraud is suspected. A denial must be supported by a reasonable and thorough investigation, not just a suspicion. When denying a claim, professionals must point to real evidence of fraud, not merely a software flag.
The Bottom Line
With AI, there is exposure on both sides of this issue. Some professionals are still adjusting claims as though the documents can be trusted. Others are moving too quickly in the opposite direction and treating AI detection tools as dispositive. Both approaches carry significant risk: the first results in payment of fraudulent claims, while the second can result in litigation and bad faith exposure. Thus, professionals must combine traditional investigative methods with targeted use of technology, pairing computer analysis and thorough verification with boots on the ground.
Fraud is not slowing down, and neither is the technology behind it. It is imperative for professionals to stay vigilant as each new trend emerges. Continue to test and verify documents and take the claims investigation from behind the desk and into the field.
McCallum, an associate attorney at Swift Currie in Atlanta, focuses her practice on automobile litigation, first-party property insurance, insurance coverage and premises liability. Email: kayla.mccallum@swiftcurrie.com.