Repost from LinkedIn.
Insurance is a process-oriented industry that poses a unique series of multifaceted challenges. The business requires an intricately woven web, connecting complex processes among a multitude of players, including consumers, brokers, local and federal agencies, and carriers. All of this is facilitated by a constant flow of information transmitted in every direction.
To enable the development of these systems in a timely and cost-effective manner, automation becomes an absolute necessity. Yet there are many challenges to overcome when producing and maintaining automated systems; while the processes themselves may be constant, the underlying rules governing these processes are perpetually changing. Technical solutions must be able to adapt to frequent changes, particularly in the highly competitive insurance market, where time-to-market can mean life or death for many insurance companies.
Current State of Application Development Landscape
Often, insurance companies choose to build in-house solutions or buy third-party products to automate their processing. Frequently, these solutions are developed as monoliths, which can be defined as singular executables or web applications handling all aspects of their business.
Once launched, these solutions can be incredibly difficult to maintain, particularly when the underlying business logic changes regularly. The entire application must be rebuilt, retested, and redeployed for any change. As a result, changes are often collected over time and distributed through infrequent software updates, delaying time-to-market.
Many companies have moved to the Service Oriented Architecture (SOA) approach as an alternative to the monolithic approach. In SOA, an application can be broken into components, connected through a common communication layer, known as an Enterprise Service Bus (ESB), where each component can be maintained independently.
While the concept makes sense, the primary issue is that the practical implementation of an ESB frequently depends on sophisticated middleware, which promotes complexity in the communication mechanism rather than the endpoints (the core of the application). This stunts extensibility and reusability, effectively shifting focus to developing a robust safety net (e.g. transformations, business logic, routing) within the communications layer, rather than in the endpoint.
What are Microservices?
James Lewis and Martin Fowler describe microservices as “an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery” (http://martinfowler.com/articles/microservices.html).
Conceptually, microservices don’t differ from the SOA approach commonly used within the insurance industry; however, they aim to eliminate the complexities of the communication layer by replacing it with simple, lightweight APIs over HTTP.
The objective remains the same: to decouple portions of the larger application into cohesive, individual modules, capable of being deployed and distributed as separate applications wherein each component can be maintained independently.
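To make the idea concrete, below is a minimal sketch of what such a single-responsibility service could look like, using only the Python standard library. The /quote path, the payload fields, and the premium formula are illustrative assumptions, not any real product's API; the point is simply that the service is a small, independently deployable process speaking plain JSON over HTTP.

```python
# A minimal sketch of a single-responsibility microservice exposing a
# lightweight JSON-over-HTTP API, using only the Python standard library.
# The /quote path, payload fields, and premium formula are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class QuoteHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/quote":
            self.send_error(404)
            return
        # Read and parse the JSON request body.
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length))
        # Hypothetical rating logic: a flat base premium plus an age factor.
        premium = 500.0 + 10.0 * request.get("driver_age", 0)
        body = json.dumps({"premium": premium}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def serve(port: int = 8080) -> None:
    """Run the service standalone (blocks until interrupted)."""
    HTTPServer(("localhost", port), QuoteHandler).serve_forever()
```

Any consuming application, in any language, needs only the URL and the JSON contract; nothing about the service's internals leaks across the boundary.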
Why is the Microservices Architecture Ideal for Insurance?
Monolithic applications are incredibly difficult to keep updated with a frequency that matches market demand. Updating a single portion of the application will often entail retesting and redeploying the entirety of the application. Minimizing impact, resolving issues, and deploying changes proves to be incredibly time-consuming and expensive.
The microservices approach effectively allows companies to update a single service endpoint – insofar as no interfaces change – when providing fixes and modifications. Impact is minimized by single-responsibility endpoints, meaning that development and testing efforts can be significantly diminished, improving cost-effectiveness and speed-to-market.
Development of a microservice-based application is also a decentralized effort, allowing smaller teams – internal or otherwise – to oversee individual segments of the application through their development lifecycle and through release. Third-party cloud-based services can also be integrated easily whenever applicable.
This approach can facilitate an extremely agile development process – building pieces of the application around singular business processes rather than overwhelmingly large development efforts wherein responsibilities are not clearly delimited.
One of the most complicated and challenging portions of any insurance application is the pricing of the insurance product, commonly known as rating. Insurance products are very complex, with pricing dependent on a variety of factors, calculated by sophisticated statistical algorithms. Actuaries develop and update these algorithms based on changes in a company’s claims, market share, as well as changes in the overall industry. These algorithms are used by many business operations within insurance companies, from sales to underwriting.
Rating is perhaps the prime candidate for adaptation to the microservices model. These algorithms are typically maintained by actuarial teams and conceptually distanced from other segments of the organization, which do not require the same sophisticated understanding of those algorithms as the actuarial teams. A rating engine effectively allows the rest of the organization to consume these services without direct comprehension of how the algorithms function.
Since rating is one of the operations that is perhaps most subject to frequent changes, microservice-based rating also reduces the necessity of frequent updates to the applications consuming this service. Unless the API itself changes, all invoking applications require no updates while the rating engine continues evolving.
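One way to picture this separation: the rating engine's public interface stays fixed while the actuarial factor tables behind it evolve freely. The function name, factors, and numbers below are hypothetical, chosen only to show the stable-contract idea.

```python
# Hypothetical rating engine: the public interface (rate_premium) is the
# stable contract consumed by sales and underwriting applications, while
# the actuarial factor table can be revised without touching any caller.
RATE_TABLE = {
    "base_premium": 400.0,
    "territory_factors": {"urban": 1.25, "suburban": 1.10, "rural": 0.95},
    "vehicle_factors": {"sedan": 1.00, "suv": 1.15, "sports": 1.60},
}

def rate_premium(territory: str, vehicle: str, table: dict = RATE_TABLE) -> float:
    """Stable API: callers pass risk characteristics and get a premium.
    Actuaries update the table; the signature never changes."""
    factor = (table["territory_factors"][territory]
              * table["vehicle_factors"][vehicle])
    return round(table["base_premium"] * factor, 2)
```

When the actuaries publish new factors, only the table (and the service hosting it) is redeployed; every consuming application keeps calling the same interface unchanged.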
The microservice approach can offer tremendous advantages for insurance organizations willing to undertake the effort; yet, as with any new technological methodology, it is critical to apply it with caution to development efforts where it makes the most sense, both structurally and financially.