While insurers' initial interest in cloud-based solutions was driven mainly by the financial benefits of the model, the cloud is increasingly enabling insurance companies to gain competitive advantage quickly in crowded markets ripe for disruption. This has been the case at Oregon Mutual Insurance, which this week received an SMA Innovation in Action award for its adoption of a cloud-based personal lines auto rating system from ClarionDoor. The implementation allowed Oregon Mutual to eliminate dependencies on comparative raters' manufactured rates and screen scraping, as well as on a legacy mainframe-based in-house rating system. At the same time, the new solution has improved speed to market and accuracy, and is giving agents and underwriters information they never had before about quoting and rating behaviors.
Moving from proof of concept to delivery of the solution in all five states where Oregon Mutual writes personal lines auto took four-and-a-half months, according to the insurer's vice president and CIO, Bryan Fowler. The goal? "We wanted to get control over our own destiny: speed to market, accuracy, transparency, and much deeper data for analytics and insights," according to Fowler. He spoke recently with I&T about the initiative:
What was the problem related to comparative rating that Oregon Mutual wanted to address?
Fowler: Like many small to midsize companies, our legacy technology situation was that our rates, as presented by comparative raters, were either being "manufactured" by the comparative raters themselves or derived through "screen scraping" of Oregon Mutual screens. We were completely dependent on the comparative raters being able to respond to OMI priorities, requirements, and timing. We were also completely reliant on their quality of delivery and maintenance. We had virtually no control over our own destiny and rating quality on the agent desktop.
In addition to this, we had very little visibility into the accuracy of the rates being delivered. On more than a few occasions, we discovered -- sometimes well after it started -- that rates were being calculated incorrectly, most often because a comparative rater had made a change to their code that we didn't know about, which resulted in our rates being incorrect. We lost quite a few opportunities because of this. The project was called "Rating Independence" for a reason.
Finally, we had virtually no visibility into quoting activity or its details. Only when an agent would bridge from a comparative rater to our portal did we know that we'd been quoted. We had no insight into the characteristics of the quoting going on: when it was happening, where, and by which agents. We couldn't even see critical data about the ratio of quoting activity to bridging activity. It was a paralyzing blindness.