Working at a commercial insurance carrier, you probably have to walk a tightrope when gathering the risk data you need to make underwriting decisions. Do you stay in-house and get high-quality data but spend a lot of time and money, leading to missed opportunities and a higher expense ratio? Or do you outsource to get data faster, only to insure customers you wish you hadn't because the BPO gave you poor-quality risk data? Or do you pursue something else entirely?
Groundspeed takes you off that tightrope and puts you on stable ground. In this blog post, you'll learn how our Artificial Intelligence with Human in the Loop system helps commercial insurance carriers get the data they need at 98% accuracy, quickly, and at low cost.
What do underwriters need from their document processing system?
- Net total incurred
- Insured value for an exposure
- The street addresses for insured locations
- Accurate classification of the policyholder
- COPE data for property submissions
Those are just a few data points that commercial insurance carriers need to consider when judging whether to do business with a potential insured and how to price that potential insured’s policies.
What makes or breaks a carrier's ability to win as much business as possible while maintaining a low combined ratio is how that carrier gets those data points. Carriers must balance a set of competing needs while poring over hundreds or thousands of submissions. The most important needs are:
- Capturing as many relevant data points about the prospective insured as possible
- Maximizing data capture accuracy
- Minimizing time from when the submission is received to when underwriters can use data points from the submission in their models
- Minimizing cost spent capturing data
Balancing data coverage, accuracy, speed, and cost
Capturing more data points, ensuring high accuracy, minimizing time, and minimizing costs can easily be at odds with one another. For example, optimizing for higher speed and accuracy can lead to higher costs or fewer data points available for each prospective insured. So let’s explore a few ways commercial insurance carriers can extract data from documents in a submission and see which tradeoffs carriers are forced to make.
Processing documents in-house
With this option, insurance carriers have their own team members read and enter data from the documents they receive. While this keeps data capture accurate, it takes a long time – especially when backlogs of unprocessed documents build up – and gives underwriters fewer data points than they need to make well-informed decisions. It is also costly, even before you factor in the IT budget required to transform the captured raw text into usable data for carriers' rating systems.
Contracting with a Business Process Outsourcing firm
Carriers hire a Business Process Outsourcing (BPO) firm to read documents and send the extracted data to the carrier's underwriting teams. BPO staff have less insurance experience than underwriters and often work under heavy time pressure because of their business model, so they tend to produce lower-quality data and may miss data points that underwriters want. On the upside, turnaround time and costs will likely be lower than with in-house processing, though the carrier still needs to shoulder the IT cost of putting BPO-provided data into a usable format.
Introducing some automation
An insurance carrier can also automate document processing by using a program that reads data points from some documents and sends them to underwriting staff. Automation speeds up processing and lowers costs for the documents it covers, but it often comes with downsides. The most salient are:
- Lower accuracy
- Missed data points
- Coverage limited to a narrow set of document types
- The specialized expertise needed to improve and expand automation coverage
Artificial Intelligence with a Human in the Loop
Artificial Intelligence with a Human in the Loop is a powerful tool that commercial insurance carriers can use to get the high-accuracy data they need in minimal time, at costs comparable to legacy systems. Let's explore this idea further.
What is Human in the Loop?
Human in the Loop (HITL) is a method that helps Artificial Intelligence systems function more accurately and quickly and cover a broader range of data. In commercial insurance terms, HITL enables more accurate, complete, and faster extraction of key data points from submission documents. It also means that data extraction gets ever more automated, fast, and accurate over time.
It starts with Artificial Intelligence making predictions about what it sees. One example prediction could be the Nature of Injury value of a given claim in a commercial insurance document. That prediction comes with a confidence level: how sure the AI is that it read the correct value. Think of what you ate for breakfast today compared to what you ate for breakfast exactly one month ago; you'll likely have more confidence that your memory of today's breakfast is correct than your memory of that long-ago meal.
When Artificial Intelligence makes a prediction with low confidence and sends that prediction to a human for review, that’s a Human in the Loop approach. Those low-confidence predictions happen even with the best Artificial Intelligence tools. Having a human cover those low-confidence points is the best way to avoid either using a data point that could be incorrect or failing to capture that data point altogether.
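The routing logic described above can be sketched in a few lines of Python. This is a conceptual illustration, not Groundspeed's actual implementation; the field names, threshold value, and `route` function are all assumptions made up for this example.

```python
from dataclasses import dataclass

# Assumed cutoff for automatic acceptance; real systems tune this per field.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Prediction:
    field: str         # e.g. "nature_of_injury"
    value: str         # the value the model extracted
    confidence: float  # the model's confidence, between 0 and 1

def route(prediction: Prediction) -> str:
    """Accept high-confidence predictions; send the rest to a human reviewer."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return "auto_accept"
    return "human_review"

# A high-confidence extraction flows straight through...
print(route(Prediction("nature_of_injury", "Strain", 0.97)))   # auto_accept
# ...while a low-confidence one is queued for a human operator.
print(route(Prediction("nature_of_injury", "Sprain?", 0.42)))  # human_review
```

The key design choice is the threshold: set it too low and bad data slips through; set it too high and humans review predictions the AI already got right, eroding the speed and cost benefits of automation.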
Overall, Human in the Loop enables organizations to get the best of both worlds when it comes to document processing: the speed, reliability, and cost savings of automation with the high data quality that comes from the human ability to discern details and read between the lines which Artificial Intelligence, in many circumstances, doesn’t yet have.
How Groundspeed uses HITL to ensure high data coverage, accuracy, and speed at a low cost
Human in the Loop processes are the core of how Groundspeed extracts and delivers data to its customers with 98% accuracy. We use Human in the Loop at multiple stages of our data extraction pipeline to capture all relevant data points from submitted documents. We've also built a second layer of data quality assurance into our pipeline: every day, Groundspeed audits a sample of Artificial Intelligence predictions and Human in the Loop corrections.
We use those internal audits as one input for the constant improvements we make to our software tools and HITL processes. The audit findings help us improve our Artificial Intelligence tools and fine-tune our HITL standard operating procedures. Corrections made by HITL operators provide training data for new and existing Artificial Intelligence programs, and automated metrics about HITL operations help us refine our HITL software. Data captured by both the AI and the Human in the Loop is also mapped and transformed to insurance data standards, reducing the systems integration and IT investment required of carriers.
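To make the daily audit idea concrete, here is a minimal sketch of sampling a fraction of a day's records for manual review. The 5% rate, the record shape, and the `sample_for_audit` helper are illustrative assumptions, not a description of Groundspeed's internal tooling.

```python
import random

def sample_for_audit(records, rate=0.05, seed=None):
    """Randomly select a fraction of the day's records for manual QA review."""
    rng = random.Random(seed)  # seeded here only so the example is repeatable
    k = max(1, int(len(records) * rate))  # always audit at least one record
    return rng.sample(records, k)

# A day's worth of records: a mix of raw AI predictions and HITL corrections.
day_records = [{"id": i, "source": "ai" if i % 3 else "hitl"} for i in range(200)]
audit_batch = sample_for_audit(day_records, rate=0.05, seed=42)
print(len(audit_batch))  # 10
```

Auditing a random sample rather than every record keeps QA cost bounded while still surfacing systematic errors, which can then feed back into model training and operator procedures.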
All of this means that Groundspeed's Artificial Intelligence with Human in the Loop system is ready to go and doesn't require extensive training or setup for new customers. With Groundspeed, underwriters get high-quality, extensive data in little time, at a price comparable to legacy systems – no tradeoffs required.
How Groundspeed can help you
Are you interested in learning more about how Groundspeed's Human in the Loop AI platform pulls data from loss run documents, ACORD applications, exposure schedules, and policy forms? Then schedule a call with our team today or request a demo here. Let us partner with your company and help you unlock the value in your unstructured data.
This blog was written by:
Bryan Quandt – Bryan is the Product Manager for Groundspeed’s internal software applications. He plans the development of automations and Human in the Loop tools that get Groundspeed’s customers their data as accurately, quickly, and inexpensively as possible.