adesso Blog

Approaches to integrating modern AI applications with existing insurance landscapes in a stable and secure manner

How can existing insurance applications be connected to chatbots and other AI solutions in a stable and future-oriented manner? I will show you how new applications for the insurance processes of the future can exploit the full potential of existing data and knowledge within the company. The goal: an improved customer experience and more efficient processes.

With open, API-driven integration patterns and standardized protocols, existing IT resources can be combined with new agents in a flexible, secure, and compliant manner. A structured target architecture with AI-based components connected via the Model Context Protocol (MCP), the targeted use of agents and Retrieval-Augmented Generation (RAG), and a consistent service orientation (REST/events) for accessing structured data ensure decoupling, reuse, and end-to-end traceability.

Non-functional requirements for traceability, verifiability, and security are met by:

  • consistent governance for models and data,
  • complete observability of all resources,
  • access control through the use of Identity and Access Management (IAM),
  • and role-based access control (RBAC).

Every access is authorized (“zero trust”). Classification of all data, structured or unstructured, and the principle of data minimization ensure compliance with data protection regulations.

Audit trails (logging and control of AI usage through prompts and outputs) and policy enforcement are used for governance.

This results in robust, release-ready AI solutions that securely extend existing core systems and deliver measurable, rapid added value.

How can today's insurance IT become fully AI-capable?

Insurers are under pressure to improve service quality, reduce costs, and comply with regulatory requirements at the same time. Businesses and customers expect modern user journeys, up-to-date use cases, and a short time-to-market for new AI features. AI-powered chatbots, assistants for clerks, and intelligent automation in processes address these goals. Typical AI applications that draw on existing data include:

  • Customer self-service: combines general terms and conditions, rate information, and individual contract data
  • Internal expert assistance: context- and prompt-based compilation of internal guidelines, emails, expert opinions, or court decisions
  • Sales support: comparison of customer requirements with product descriptions, consulting guidelines, and internal specifications
  • Claims & benefits: summarization of large case files

How can these AI applications be cleanly integrated into the existing application landscape? Insurers' systems are often monolithic and have grown historically. They are increasingly being modernized through the introduction of API-oriented platforms.

A decoupled, API-driven insurance platform

The core idea of a future-proof, AI-enabled insurance landscape is decoupling in the target architecture, so that existing business logic and data storage are provided via services and AI applications are clearly separated and linked via standardized APIs.

The AI front-end components and the integration of the LLM are arranged in the conversation layer. These include customer chatbots, assistants, recommendation services, tools for data analysis and extraction, and decision engines. They are connected to the LLM via backend and support components.

MCP as an abstraction layer

The Model Context Protocol (MCP) is a standardized protocol that can be used to connect to the insurance application landscape. Business functions are exposed as clearly defined MCP tools that are used by the AI models. An MCP gateway acts as an abstraction layer between the LLM/chatbot and the backend capabilities. When changing models (different provider, on-premise model), the integration of data and tools can be retained and “only” the “model adapter” needs to be changed. This reduces dependence on the specific model and its provider (no “model lock-in”).
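The decoupling described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the MCP specification itself: the tool registry and the `ModelAdapter` class are hypothetical names chosen for the example, and in a real system the tools would be exposed via an MCP server SDK rather than a local dictionary. The point it shows is that business functions are registered once, while only the thin adapter changes when the model provider changes.

```python
from dataclasses import dataclass
from typing import Callable

# Tool registry: business functions are exposed once, independent of any model.
@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., dict]

TOOLS: dict[str, Tool] = {}

def register_tool(name: str, description: str):
    """Decorator that publishes a business function as a tool."""
    def wrap(fn: Callable[..., dict]) -> Callable[..., dict]:
        TOOLS[name] = Tool(name, description, fn)
        return fn
    return wrap

@register_tool("get_contract", "Returns core data of an insurance contract")
def get_contract(contract_id: str) -> dict:
    # In production this would call the policy system's REST API behind the gateway.
    return {"contract_id": contract_id, "product": "household", "status": "active"}

# Model adapter: the only component touched when the LLM provider changes.
class ModelAdapter:
    def call_tool(self, name: str, **kwargs) -> dict:
        return TOOLS[name].handler(**kwargs)

adapter = ModelAdapter()
result = adapter.call_tool("get_contract", contract_id="HH-4711")
```

Swapping the model (different provider, on-premise) means replacing `ModelAdapter`; the registered tools and their contracts stay untouched.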

Technically correct, up-to-date, and audit-proof responses can be achieved via agents in the knowledge layer that are connected to the models.

Access to structured data (via REST APIs or events) takes place “behind” MCP. Technical services provide stable APIs for the “systems of record,” while AI services act as loosely coupled consumers of these APIs and can scale independently.

Security and compliance requirements are consistently implemented: MCP tool calls run via IAM, API gateway, and policy engine, data is classified and minimized before model use, and all interactions are auditable. At the same time, the tools remain independent of the specific model or channel and can be reused by customer chatbots, internal assistants, and other AI applications.

Knowledge base: Retrieval-Augmented Generation (RAG)

Agents in a Retrieval-Augmented Generation (RAG) setup provide answers based on indexed documents and information without “baking” data into the model. A RAG agent supplies the Large Language Model with the necessary information or text passages and uses

  • product/tariff documents
  • general terms and conditions, guidelines, process manuals
  • knowledge databases, FAQs, training materials
  • contextual contract/claims documents

When documents are indexed and made available via vector and metadata search, each answer can be traced back to its sources. The indexed documents form the “single source of truth,” which can be edited and made available in an editorial process.
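The traceability idea can be made concrete with a toy retrieval sketch. To keep it self-contained, simple keyword overlap stands in for vector similarity, and the index entries and their metadata fields (`source`, `section`, `classification`) are invented for the example; a production system would use an embedding model and a vector database.

```python
# Minimal retrieval sketch: each indexed chunk carries metadata so that
# every answer can be traced back to its source document.
INDEX = [
    {"text": "Household insurance covers damage caused by tap water.",
     "source": "GTC_household_2024.pdf", "section": "§4", "classification": "public"},
    {"text": "Claims must be reported within seven days.",
     "source": "process_manual.pdf", "section": "3.2", "classification": "internal"},
]

def score(query: str, text: str) -> int:
    # Keyword overlap as a stand-in for cosine similarity over embeddings.
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str, allowed: set[str], top_k: int = 1) -> list[dict]:
    # Metadata filter first (classification), then similarity ranking.
    candidates = [c for c in INDEX if c["classification"] in allowed]
    ranked = sorted(candidates, key=lambda c: score(query, c["text"]), reverse=True)
    return ranked[:top_k]

hits = retrieve("What does household insurance cover for tap water damage?", {"public"})
context = "\n".join(f'{h["text"]} [{h["source"]}, {h["section"]}]' for h in hits)
# `context` is handed to the LLM; the bracketed citation enables source tracing.
```

The bracketed source reference in each passage is what makes the generated answer auditable against the editorial “single source of truth.”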

Microservices, APIs, and events

AI applications and chatbots often require specific responses within a dialogue via synchronous service calls. Modern insurance applications and platforms offer specialized microservices that encapsulate the business logic and expose it in a domain-oriented manner. The API gateway provides routing, authentication, and transformation when accessing structured data. RESTful APIs are also wired into the MCP tools. Stable, versioned MCP interfaces decouple the AI applications from the platform. Error handling stabilizes the use of the services; in the event of a malfunction or failure, end users are informed (“We will resubmit the request and contact you by email”).
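The error-handling pattern described above can be sketched as a retry loop with a user-facing fallback. The service and transport names are hypothetical, and the injected `transport` callable stands in for a real HTTP client so the sketch runs without a network.

```python
import time

class BackendError(Exception):
    """Raised by the (stubbed) transport when the backend is unreachable."""

def call_policy_service(contract_id: str, transport) -> dict:
    # Synchronous call with a bounded retry; `transport` is injected
    # so the sketch works without a real HTTP client.
    last_error = None
    for attempt in range(3):
        try:
            return transport(contract_id)
        except BackendError as err:
            last_error = err
            time.sleep(0)  # placeholder for exponential backoff
    # Graceful degradation: the dialogue continues with a user-facing message.
    return {"error": str(last_error),
            "user_message": "We will resubmit the request and contact you by email."}

def broken_transport(contract_id: str) -> dict:
    # Simulated permanent failure of the policy system.
    raise BackendError("policy system unavailable")

response = call_policy_service("HH-4711", broken_transport)
```

Returning a structured fallback instead of raising keeps the chatbot dialogue intact even when the backend is down.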

Event-driven integration (events/streams) as an asynchronous communication channel is used, for example, to trigger follow-up processing (application submission, contract changes).
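The asynchronous channel can be illustrated with an in-memory publish/subscribe sketch. The topic name and payload fields are invented for the example; in production a message broker such as Kafka or a comparable event stream would take this role.

```python
from collections import defaultdict

# In-memory stand-in for a message broker; topic names are hypothetical.
subscribers = defaultdict(list)

def subscribe(topic: str, handler) -> None:
    subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    # A real broker would deliver asynchronously and persistently;
    # here delivery is immediate to keep the sketch runnable.
    for handler in subscribers[topic]:
        handler(event)

processed = []
subscribe("application.submitted", lambda e: processed.append(e["applicant"]))

# The chatbot finished a dialogue; follow-up processing is triggered as an event.
publish("application.submitted", {"applicant": "A-1001", "product": "household"})
```

The chatbot only emits the event and returns to the user; the downstream process (application submission, contract change) is picked up independently.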

Governance

Governance in the AI environment has become more important compared to previous application systems. It ensures that AI initiatives remain manageable in the long term—technically, professionally, and regulatorily—by regulating the handling of the models used, the use of the necessary data, the evaluation of the prompts used, and the control of the outputs generated.

In addition to the model catalog (which model is used for which use cases?), model governance also includes lifecycle management of AI solutions with quality gates, release processes, and continuous improvement, as well as observability for monitoring use and explainability for documentation and traceability of statements.

Data governance regulates the handling of data in the context of data protection and regulatory requirements. Data protection is ideally pursued “by design & by default”:

Data classification
  • How is the data classified (public, internal, confidential, particularly sensitive)?
  • Who is responsible (ownership)?
  • Who is allowed to see it (access rights)?
Data lineage
  • Which data is used in which models with which outputs?
Data minimization
  • Which data is absolutely necessary for the specific use case?
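Classification and minimization can be combined into one check before any model call. This is a simplified sketch under assumed classifications; the field names and levels are illustrative, not a real insurer's data model.

```python
# Before a model call, each field is checked against its classification and
# against the minimal field set the specific use case actually needs.
CLASSIFICATION = {
    "contract_id": "internal",
    "tariff": "internal",
    "health_data": "particularly_sensitive",
    "iban": "confidential",
}

# Only these levels may be passed to the model at all.
ALLOWED_FOR_MODEL = {"public", "internal"}

def minimize(record: dict, needed: set[str]) -> dict:
    # Unclassified fields default to "confidential" (fail closed).
    return {
        k: v for k, v in record.items()
        if k in needed and CLASSIFICATION.get(k, "confidential") in ALLOWED_FOR_MODEL
    }

record = {"contract_id": "HH-4711", "tariff": "comfort",
          "health_data": "...", "iban": "DE00..."}
safe = minimize(record, needed={"contract_id", "tariff", "iban"})
# `iban` is requested but confidential, `health_data` is not needed: both dropped.
```

Failing closed on unclassified fields is the “by design & by default” stance: data reaches the model only when both necessity and classification permit it.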

Prompt governance reduces unwanted responses, data leaks, and misinformation in generative AI systems by controlling the prompts used. This includes, for example, the use of predefined prompt templates from a prompt catalog. An audit trail is created by logging the prompts with the user context and the responses generated.
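A prompt catalog with an audit trail can be sketched briefly. The template id and log fields are invented for illustration; a real setup would persist the audit log and attach the generated response to the same entry.

```python
import time

# Approved templates only; free-form prompts never reach the model directly.
PROMPT_CATALOG = {
    "summarize_claim": "Summarize the following claim file for a clerk:\n{document}",
}

audit_log = []

def build_prompt(template_id: str, user: str, **fields) -> str:
    prompt = PROMPT_CATALOG[template_id].format(**fields)
    # Audit trail: who used which template with which rendered prompt.
    audit_log.append({
        "ts": time.time(),
        "user": user,
        "template": template_id,
        "prompt": prompt,
    })
    return prompt

prompt = build_prompt("summarize_claim", user="clerk-42",
                      document="Water damage in the kitchen ...")
```

An unknown `template_id` raises a `KeyError`, so only catalogued prompts can be sent; the log entry later receives the generated response to complete the trail.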

Output governance checks the responses in a post-processing step. Sensitive or confidential data is only passed on with authorization. Legally sensitive statements require review and approval. In cases of uncertainty or complex queries, the response is forwarded to a human clerk.
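A minimal post-processing filter can look like this. The redaction pattern and escalation terms are illustrative assumptions; a production filter would combine proper PII detection with policy rules.

```python
import re

# Redact IBAN-like strings and flag legally sensitive phrasing for human review.
IBAN_PATTERN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b")
ESCALATION_TERMS = ("liability", "legal claim")

def review_output(text: str) -> dict:
    redacted = IBAN_PATTERN.sub("[REDACTED-IBAN]", text)
    needs_human = any(term in redacted.lower() for term in ESCALATION_TERMS)
    return {"text": redacted, "escalate": needs_human}

result = review_output("Please transfer to DE89370400440532013000. "
                       "This may give rise to a legal claim.")
```

The `escalate` flag routes the answer to a human reviewer instead of the end user, implementing the review-and-approval step for legally sensitive statements.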

Security and zero trust

An AI-enabled insurance architecture must take zero trust seriously: no system and no user is trusted by default; every connection and every access is explicitly checked.

Token-based identity and access management handles authentication and authorization for employees and partners and, if necessary, also for end customers. Permissions (RBAC and, where needed, attribute-based access control, ABAC) restrict access to data and resources in the MCP tools and REST APIs. The MCP gateway, RAG agents, and the API gateway help control what the AI and users are allowed to do. Chatbots and AI services work on an “acting on behalf of” principle, so that a bot only does what the respective user is allowed to do. An analogous authorization check is performed when RAG accesses documents.
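The “acting on behalf of” principle can be sketched with a role check against the user's token. The roles, permissions, and token layout are simplified assumptions; real systems would validate signed JWTs issued by the IAM.

```python
# The bot forwards the user's token; the tool layer checks the user's
# roles, never the bot's own identity (acting on behalf of).
ROLE_PERMISSIONS = {
    "clerk": {"read_contract", "read_claim"},
    "customer": {"read_contract"},
}

def authorize(token: dict, permission: str) -> bool:
    granted = set()
    for role in token.get("roles", []):
        granted |= ROLE_PERMISSIONS.get(role, set())
    return permission in granted

def read_claim(claim_id: str, token: dict) -> dict:
    if not authorize(token, "read_claim"):
        raise PermissionError("user is not allowed to read claim files")
    return {"claim_id": claim_id, "status": "open"}

clerk_token = {"sub": "clerk-42", "roles": ["clerk"]}
customer_token = {"sub": "cust-7", "roles": ["customer"]}
claim = read_claim("C-100", clerk_token)  # succeeds for the clerk
```

With the customer's token the same call raises `PermissionError`: the chatbot never gains more rights than the user it acts for, and the same check applies when RAG accesses documents.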

The gateways perform additional policy checks and serve as central points for logging, auditing, and traceability. Encryption and network security are further elementary functions.

Data collection for operation and control (observability)

AI services increase the complexity of operating and controlling the IT landscape. The collection and evaluation of data and key figures (observability) for availability, costs, and quality are essential. MCP and API gateways provide connections to standardized protocol functions and log every access with a request, response, or error. Monitoring dashboards or end-to-end tracing (with correlations across service boundaries) support operation and error analysis. Audit trails collect information for traceability and optimization of AI functions. In addition to information about the user and the model, the responses generated and the quality achieved are logged. Additional metrics such as success rate, automation rate, abandonment rate, or processing and response times support the analysis of business value.
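End-to-end tracing with correlation across service boundaries can be sketched as follows. The metric names and log layout are illustrative; production systems would export this via an observability stack rather than in-memory structures.

```python
import time
import uuid

metrics = {"requests": 0, "errors": 0, "latency_ms": []}
trace_log = []

def traced_call(name: str, fn, *args, correlation_id: str = None):
    """Run a call, record latency and status, and propagate a correlation id."""
    cid = correlation_id or str(uuid.uuid4())
    start = time.perf_counter()
    metrics["requests"] += 1
    try:
        result = fn(*args)
        status = "ok"
    except Exception as err:
        metrics["errors"] += 1
        result, status = None, f"error: {err}"
    elapsed = (time.perf_counter() - start) * 1000
    metrics["latency_ms"].append(elapsed)
    trace_log.append({"correlation_id": cid, "service": name,
                      "status": status, "ms": round(elapsed, 2)})
    return result, cid

# One correlation id spans the MCP gateway call and the downstream REST call,
# enabling tracing across service boundaries.
result, cid = traced_call("mcp.get_contract", lambda: {"status": "active"})
_, _ = traced_call("rest.policy_service", lambda: {"ok": True}, correlation_id=cid)
```

Aggregating over `trace_log` and `metrics` then yields the success rates, error rates, and response times mentioned above for the business-value analysis.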

Conclusion: Secure AI integration as a competitive advantage

Connecting modern insurance platforms to chatbots and AI solutions is no longer an experimental fringe topic. Open, API-driven architectures, clear integration patterns, strong security and governance concepts, and end-to-end observability enable the controlled, stable, and scalable use of AI.

Those who implement a decoupled, zero-trust-enabled target architecture now are laying the foundation for fast, release-ready AI solutions and thus for the measurable added value of modern insurance applications of the future.


Our offer

  • We transform your core systems into a service platform.
  • We implement security, governance, and stable operation for an MCP integration layer.
  • We support you in designing new and customized AI applications that realize modern use cases using the existing application landscape.




Author Dr. Volker Mull

Dr. Volker Mull has been working for adesso as a Principal Software Architect since 2023. He has been supporting insurers in their transformations for more than 25 years and is intensively involved in the integration of platform-based solutions into the overall context of insurance companies.

Category:

Industries

Tags:

Insurance


