Recent years have seen a steady march forward for artificial intelligence (AI), enabled by a proliferation of data and rapidly decreasing compute costs. This wave of innovation has occurred across all forms of AI, but it is perhaps most pronounced within the recent phenomenon of generative AI.
Generative AI and large language models (LLMs) are among the most accessible applications of AI – anyone who has dabbled with ChatGPT knows how easy it is to sign up and put it to work, even if only on simple tasks. It is a transformational technology with use cases that will continue to expand across industries.
We believe that AI presents significant upside for carriers and MGAs seeking to improve their approach to evaluating and pricing risks. We also think it can fuel tremendous benefits for underwriters and their teams, including improved quote-to-bind ratios, right-sizing of their books of business and overall growth.
But there are also significant risks that go beyond the potential for hallucination. We believe generative AI may present significant unintended consequences for how people behave. How do you balance allowing employees to embrace the efficiencies of AI with incentivizing them to remain engaged and add their own expertise to the equation? Generative AI may have the power to automate certain tasks, but when people themselves go on “autopilot,” that’s where you can run into serious trouble.
The major generative AI platforms are based on an extremely broad set of training data, with little to no grounding in specific sectors or industries. As a result, we’ve identified limitations in the ability of these models to interpret insurance documents, and finding ways to manage and mitigate the associated risks is a major focus of our research.
Over time, we believe that smaller, more purpose-built generative AI models will become prevalent in the insurance space, particularly in more specialized areas. This reflects a dynamic we’ve seen play out for a host of other emerging technologies. A new technological leap occurs, followed by a period of intense innovation, followed by widespread adoption in more focused applications (as opposed to only general use cases). Purpose-built models are typically more accurate and less prone to hallucination than more generic LLMs, making them a natural focus for the next phase of innovation.
Because critical underwriting decisions require high accuracy, we at Insurance Quantified have no plans to transition our technology stack to rely entirely on generative AI and LLMs, especially at a time when these purpose-built offerings remain limited. Instead, we will continue to embrace their most reliable and impactful aspects while incorporating other technologies and, as always, prizing human-in-the-loop processes that enable AI to augment human decision-making, rather than supplant it.
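To make the human-in-the-loop idea concrete, here is a minimal sketch of one common pattern: a model extracts fields from a submission, and any field below a confidence threshold is routed to an underwriter instead of being accepted automatically. This is an illustration, not our production pipeline; the `Extraction` class, `route` function, and the 0.90 threshold are hypothetical names and values chosen for the example.

```python
from dataclasses import dataclass

# Hypothetical confidence threshold: below this, a field goes to a
# human underwriter rather than being accepted automatically.
REVIEW_THRESHOLD = 0.90

@dataclass
class Extraction:
    field: str         # e.g., "total_insured_value"
    value: str         # value the model pulled from the submission
    confidence: float  # model-reported confidence, 0.0 to 1.0

def route(extractions: list[Extraction]) -> dict[str, list[Extraction]]:
    """Split model output into auto-accepted fields and fields queued
    for human review, so AI augments rather than replaces judgment."""
    triaged: dict[str, list[Extraction]] = {"auto": [], "review": []}
    for e in extractions:
        bucket = "auto" if e.confidence >= REVIEW_THRESHOLD else "review"
        triaged[bucket].append(e)
    return triaged

# Example: two fields clear the bar; one is flagged for an underwriter.
results = route([
    Extraction("insured_name", "Acme Logistics LLC", 0.98),
    Extraction("total_insured_value", "$4,200,000", 0.95),
    Extraction("prior_losses", "none reported", 0.62),
])
for e in results["review"]:
    print(f"Needs underwriter review: {e.field} = {e.value!r} ({e.confidence:.0%})")
```

The design point is the gate itself, not the threshold: wherever the line is drawn, the lowest-confidence output is surfaced to a person with the expertise to catch what the model missed.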
In our opinion, this approach reflects a balanced perspective on AI – something we believe is absolutely vital to maintain amid the flurry of innovation. AI is an important element of our technology toolkit, but it’s not our solution to everything. It shouldn’t be viewed as something that will immediately change the insurance business model, but rather as something that will augment current capabilities and advance the space over time. This is no doubt an exciting arena, but there’s no reason to become starry-eyed about it. Headlines along the lines of “If you’re not using generative AI, you’re already behind” deserve to be challenged.
In conclusion, companies seeking to take this innovation into their own hands must tread carefully. There are only a handful of organizations with access to the data and computing infrastructure necessary to build the most sophisticated models today, and even with the right precautions, the potential for bias or abuse is significant. More research and rigor are needed to evaluate and measure these effects. We believe that transparency and openness in how LLMs generate their responses will be critical to long-term adoption, as well as to addressing the safety concerns that are most relevant to the insurance industry.
By maintaining this balanced perspective and remaining clear-eyed about both the benefits and risks of generative AI, insurance firms can be responsible actors in the face of change. Ultimately, these risks will be mitigated as we move closer to AI alignment – the more confidence we have that systems are serving their intended purpose, doing what they are asked and accounting for industry-specific realities and risks, the more effectively we can use them. In future articles, we’ll dive into the AI alignment problem and how the insurance industry can move toward solving it.
Want to learn more about Insurance Quantified’s approach to AI? Drop us a line.