Exploring the Future of GenAI in Insurance: Insights from Industry Expert Alejandro Zarate

August 13, 2024

3 min read

By Priya Chakravarthi

In June 2024, Hyperscience announced Hypercell for GenAI, a product that enables enterprises to educate Large Language Models (LLMs) in the language of their business. This innovation grounds language models with precise, machine-readable data, enabling enterprises to access intricate, mission-critical documents at their core. In a recent SiliconAngle CUBE interview, our CEO Andrew Joiner and VP of HyperAI, Mayur Pillay, discussed the value that Hypercell for GenAI unlocks by automating a wide array of human-readable documents currently managed manually by BPOs. 

As I listened to this recording, I found myself asking: in which industries will this technology find a good fit? Given our recent insurance-focused webinar co-hosted with AWS, the insurance industry came to mind as a prime candidate. I reached out to Alejandro Zarate, a 20-year veteran of the industry, for his perspective.

Alejandro Zarate is Managing Director, Global Head of Data Strategy for Specialties at Marsh Insurance and Lecturer on Data, Technology and AI at Columbia University’s School of Professional Studies. Last year, he was nominated as one of The Top 25 Artificial Intelligence Consultants and Leaders of 2023 by The Consulting Report. We spoke with Alejandro about the trends he sees in the insurance industry, where traditional models outperform GenAI models, and how he envisions the evolution of verticalized models. Specifically, we asked him about the future of fine-tuning LLMs and current enterprise approaches to it.

Continue reading for key highlights from the interview, or watch the full interview in the video below.

Question (Priya Chakravarthi): Hello, Alejandro, and thank you for taking the time to speak with me today! My first question to you is: What are the major use cases that the insurance industry is automating with GenAI? What criteria are used to prioritize these use cases? In your experience, how should we evaluate whether a use case can be addressed with GenAI? I know that’s three questions in one, but they all relate to how the insurance industry perceives this new solution as applicable to its traditional challenges.

Alejandro Zarate: Most of my students at Columbia are from the insurance industry, and based on my own experience as well, these are the four use cases I see repeatedly.

  1. The first is the typical use case of summarizing claims and other insurance documents.
  2. Another very important use case is the comparison of policies, which ensures that every policy reflects all the agreements stated in the contract.
  3. In 2025, I see another application, which is ironically traditional ML but enabled now because we process documents, extract insights, and structure all this unstructured data: recommendation engines. I think recommendation engines will make a stronger appearance after the first wave of applications, in which we process and aggregate a lot of data. For example, if a broker is capable of analyzing all the verbiage in policies and structuring this data correctly, we should be able to recommend the best wording for specific risks in the future. We already do that based on our experience, but I think we are beginning to automate these tasks to help brokers and professionals become more efficient.
  4. A fourth and final point I would mention is sentiment analysis. Claims classification, for example, is another excellent application where you can leverage the capabilities of GenAI models for sentiment analysis and apply them to text classification (see the sketch after this list).
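
To make that fourth use case concrete, here is a minimal sketch of sentiment-based claim classification with an LLM. It assumes the official `openai` Python client and an `OPENAI_API_KEY` in the environment; the model name, label set, and sample note are illustrative choices, not part of the interview.

```python
# Minimal sketch: zero-shot sentiment classification of a claim note with an LLM.
# Assumes the official `openai` Python client; model and labels are illustrative.
from openai import OpenAI

client = OpenAI()

def classify_claim_sentiment(claim_note: str) -> str:
    """Return one of: negative, neutral, positive."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the sentiment of the insurance claim note. "
                    "Answer with exactly one word: negative, neutral, or positive."
                ),
            },
            {"role": "user", "content": claim_note},
        ],
        temperature=0,  # deterministic output suits classification
    )
    return response.choices[0].message.content.strip().lower()

print(classify_claim_sentiment(
    "Third call this week and still no update on my water damage claim."
))
```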

Question (Priya Chakravarthi): It looks like you are suggesting that the first wave of use cases would be somewhat simpler, and then you would build on top of it with insights and a recommendation engine, eventually leading to decision-making capabilities. Is that correct?

Alejandro Zarate: Yes, of course. As a first step, we will be able to start structuring data that is currently unstructured in documents, policies, and contracts. This presents a great opportunity for the insurance industry to process policies and contracts using natural language processing or GenAI tools. I believe 2024 will be a pivotal year. Once we have curated structured data from these documents and ensured the quality of this data, the next step is leveraging it. One of the biggest challenges for the insurance industry has been aggregating this data, which traditionally required significant manual effort. The reliance on emails and basic systems makes data capture difficult. However, using generative AI can save a lot of time and money by facilitating data extraction and aggregation from communication processes.
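
As an illustration of that "structure the unstructured" first step, here is a hedged sketch of pulling a few policy fields out of free text into machine-readable JSON, again assuming the `openai` client. The field names and schema are invented for the example, not a reference design.

```python
# Hedged sketch: extract policy fields from free text into JSON.
import json
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "Extract the following fields from the policy excerpt and return JSON with "
    "keys policy_number, insured_name, coverage_limit_usd, effective_date. "
    "Use null for anything not present."
)

def extract_policy_fields(policy_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # ask the API for strict JSON
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": policy_text},
        ],
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)
```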

Question (Priya Chakravarthi): There was also a question there about how you prioritize these GenAI use cases…

Alejandro Zarate: Yeah, I think this is not very different from any other technology, right? Prioritization must align with our business strategy, but there’s nuance here. Generally, technology can be used in two ways: to generate operational efficiencies or to create new revenue streams. While we can save time and money through efficiencies like document processing and sentiment analysis, the challenge is figuring out how to make money with generative AI in insurance, which we haven’t yet cracked.

Evaluating these technologies should focus on value creation, whether through savings or additional revenue. The industry is conservative, with people carefully analyzing the benefits rather than jumping in due to fear of missing out. The primary criterion is business strategy, but it’s also crucial to consider whether your company has the right skills and competencies to execute projects. AI talent is scarce, and implementing sophisticated solutions requires a deep understanding of technology and change management. Acceptance from teams is also vital, as fears about AI replacing jobs persist. Communicating that AI is a complement to human work, not a substitute, is essential for successful adoption. These are important considerations from a high-level perspective.

Question (Priya Chakravarthi): You’ve been a strong advocate for using the right AI tool for the job, whether Classic AI or Generative AI. At Hyperscience, we use proprietary AI models to read data, whereas GenAI uses models to generate data. Both technologies have their place under the sun. Hyperscience also acknowledges the relevance of Generative AI by supporting integration with popular publicly deployed models and customer-deployed on-premises models, even in air-gapped environments. In your experience, where have you observed traditional AI technologies falter where Generative AI can step in to save the day, and vice versa?

Alejandro Zarate: This is a great question because this is where having the right talent in your organization is crucial to identify the best solutions. While tools like ChatGPT and other generative AI models are impressive, they excel in generating new content and text analysis. For tasks like text classification, traditional AI techniques, such as logistic regression, can be effective and cost-efficient if you have the right dataset. Generative AI works well in niches like text extraction and sentiment analysis, but recommendation engines, which rely on structured data and traditional models, often outperform generative AI. It’s important to have a skilled team or advisor to help determine the best approach for your specific needs, balancing performance and cost.
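For contrast with the LLM sketches above, here is what the traditional baseline Alejandro mentions can look like: TF-IDF features with logistic regression in scikit-learn. The tiny inline dataset exists only to make the example runnable.

```python
# Sketch of a "traditional AI" text classifier: TF-IDF + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Rear-ended at a stop light, bumper damage",
    "Basement flooded after the storm",
    "Hail dented the roof and hood",
    "Pipe burst and soaked the carpet",
]
labels = ["auto", "property", "auto", "property"]

# One pipeline handles featurization and classification together
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Water leaked through the ceiling overnight"]))
```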

Question (Priya Chakravarthi): How are enterprises measuring the ROI of their investments in GenAI? One way that Hyperscience customers do this is by measuring the difference in end-to-end processing time between manual processes and automating those same processes using Hyperscience. However, platforms like Hyperscience provide metrics and reports that may not be readily available for GenAI-only applications yet. What methods are pure GenAI applications using to measure the accuracy and automation of their tasks?

Alejandro Zarate: Today, we spend a lot of time processing documents, so it’s natural to seek operational efficiencies with these tools. The focus should be on how much time and money can be saved, especially when dealing with expensive resources like lawyers or brokers. The real gain is in reducing processing time, such as making decisions on claims in half an hour instead of a day, which saves money and enables new business models.
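The efficiency math here is simple enough to sketch. Every number below is an illustrative assumption, not a benchmark from the interview; it just shows how a day-to-half-an-hour reduction translates into savings.

```python
# Back-of-the-envelope sketch of the processing-time ROI comparison.
manual_hours_per_claim = 8.0       # a decision that takes a working day
automated_hours_per_claim = 0.5    # the half-hour turnaround Alejandro cites
claims_per_month = 1_000
loaded_hourly_cost_usd = 75.0      # assumed fully loaded cost of a reviewer

hours_saved = (manual_hours_per_claim - automated_hours_per_claim) * claims_per_month
monthly_savings = hours_saved * loaded_hourly_cost_usd

print(f"Hours saved per month: {hours_saved:,.0f}")           # 7,500
print(f"Estimated monthly savings: ${monthly_savings:,.0f}")  # $562,500
```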

Opportunities should be evaluated like any other business case. A major risk with many Generative AI startups is that they focus more on the technology than the problem they aim to solve. It’s crucial to ensure that the problem being addressed is relevant. I would evaluate any new business or revenue opportunity involving AI the same way as any other business case presented for funding.

Question (Priya Chakravarthi): In a recent interview with the SiliconAngle CUBE, Mayur Pillay, the VP of HyperAI, discussed how, among the compute stack, model stack, and data stack, the data stack is the holy grail and a prerequisite for model performance. I also frequently hear the term ‘Data-Centric AI’ in the industry. If there exists a source of machine-readable and accurate enterprise data, would it facilitate insurance companies in deploying verticalized LLMs? Do you foresee the insurance industry moving towards verticalized LLMs as the financial industry is?

Alejandro Zarate: The term “data-centric AI” has gained support from institutions like Stanford and MIT. This approach emphasizes the importance of focusing on data quality rather than solely on models. Many AI systems struggle with poor data quality, which can introduce biases and discrimination, especially in sensitive fields like insurance. Data-centric AI aims to improve datasets systematically, which is crucial for training effective models. Organizations should be cautious about training models with incomplete or biased data, as this can lead to flawed outcomes.

In the context of insurance, leveraging structured data can lead to more specialized language models. However, it’s unlikely that insurers will invest in creating large-scale language models due to high costs. Instead, the focus should be on using existing general-purpose models for specific tasks. Companies that can help structure and curate data effectively will play a key role in improving data quality for machine learning applications.

Question (Priya Chakravarthi): What other methods are being used to evaluate and curate LLM output to determine the success of prompt engineering or fine-tuning efforts? The hypothesis here is that these techniques will help refine your prompts and gather high-quality data for fine-tuning, thereby progressively improving the model.

Alejandro Zarate: Prompting remains a key tool for many working with AI, often compared to a “screwdriver” for its utility. While prompting is widely used, the future lies in fine-tuning models to make them more specialized. Many applications currently rely on prompting, but there’s a growing focus on improving it through research.

You have two options: use general tools like ChatGPT for broad tasks or develop specialized tools by leveraging APIs from providers like OpenAI or Google. Creating specialized tools requires more effort and technical talent but can deliver better quality and efficiency. Going narrow and deep with specialized tools can create more value, especially in industries like insurance. This approach requires competency with APIs and technical expertise, but it leads to better standardization, risk management, and output quality.
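Since the conversation keeps returning to fine-tuning as the "narrow and deep" path, here is a hypothetical sketch of preparing training examples in the chat-style JSONL format that OpenAI's fine-tuning endpoint accepts. The examples and labels are invented; a real set would come from curated, validated data.

```python
# Hypothetical sketch: write a tiny fine-tuning file in chat-format JSONL.
import json

examples = [
    ("Hail dented the roof and hood", "auto"),
    ("Pipe burst and soaked the carpet", "property"),
]

with open("claims_finetune.jsonl", "w") as f:
    for text, label in examples:
        record = {
            "messages": [
                {"role": "system", "content": "Classify the claim as auto or property."},
                {"role": "user", "content": text},
                {"role": "assistant", "content": label},
            ]
        }
        f.write(json.dumps(record) + "\n")
```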

Question (Priya Chakravarthi): What do you see as the role of the SME in the world of GenAI? With Hyperscience models, if the model lacks confidence, it seeks human assistance. How significant is Human-in-the-Loop (HiTL) in a world of supervised and semi-supervised learning?

Alejandro Zarate: In the insurance industry, there are many nuances and complexities that make it difficult for data scientists or disruptors to make significant changes without deep domain knowledge. Success in applying AI requires a solid team that includes both technical experts and subject matter experts to ensure the solutions align with industry needs. The involvement of industry experts is crucial during development and product management, especially given the industry’s evolving regulations. Understanding the subtle nuances and human aspects of the industry is essential, particularly in areas like commercial insurance, which relies heavily on relationships and knowing customer needs. It’s not just about understanding the products but also the markets and their expectations.

Question (Priya Chakravarthi): What is the future of GenAI in the insurance industry? Do you anticipate more mission-critical workloads in air gapped environments migrating faster to the cloud to leverage foundational models hosted by the hyperscalers? What other trends is this AI wave inadvertently triggering?

Alejandro Zarate: Across the industry, major insurers, brokers, and TPAs are beginning to experiment with generative AI and AI technologies. We are in a period of experimentation and learning, with companies setting up access to models, establishing infrastructure, and moving to the cloud. This is a pivotal year for adopting technology, and there’s optimism that generative AI will enable the insurance industry to advance to more sophisticated technological uses. AI is becoming an everyday tool, with applications in document processing, content creation, sentiment analysis, and claims classification. The key is for industry leadership to commit to using technology to improve operational efficiency and create value and revenue. Overall, there’s a positive outlook on the role of AI in the industry’s future.

Question (Priya Chakravarthi): And finally, the question that most interviews usually end with: What are some of the key challenges and ethical considerations associated with using GenAI in mission-critical applications? How are insurers mitigating these challenges?

Alejandro Zarate: Well, I think the most important challenge is bias, especially for consumer-facing applications and consumer-facing insurance. It’s crucial that teams responsible for preparing, working on, and developing these models keep this as a top priority. That’s why data-centric AI is relevant, going back to that topic. I insist that we need to be very careful in preparing our datasets. Any dataset used for model training needs to be validated, particularly for bias. Depending on the final application, you need to ensure that it’s well-balanced to minimize the risk of potential bias in terms of discrimination based on race, age, sex, or national origin. I think this is one of the biggest risks. For the industry, one of the biggest challenges is preparing by creating programs and the right organizational tools to ensure that bias is a top consideration in developing any AI solution, especially if it will impact people’s lives.
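
A minimal, hypothetical sketch of one such data-centric check: inspecting how an outcome label distributes across a protected attribute before training. The column names and values are invented for illustration.

```python
# Hypothetical sketch: check label balance across a protected attribute.
import pandas as pd

df = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "claim_approved": [1, 0, 1, 1, 0, 0],
})

# Approval rate per age band; large gaps here warrant investigation before
# this data trains any decision-making model.
print(df.groupby("age_band")["claim_approved"].mean())
```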

Priya Chakravarthi: Thanks Alejandro, this has been a conversation to record and remember. There is nothing like hearing it from a practitioner and an industry veteran! 

Alejandro Zarate: It’s a pleasure, and it’s really important to keep in mind how long the insurance industry has been around. This concept of how we can transfer risk and syndicate that transfer – those principles will remain because there’s something enduring about them. Our role in technology is to find the right way to support and enable that business.

Anyone trying to use technology in this industry needs to remember that we are here to support a business model that has worked for many years. If we can reinvent it and help these organizations become more efficient and create value for customers, that’s what really makes sense. 

Not technology for technology’s sake, but falling in love with the problems!