March 25, 2014

Data has always played a critical role in insurance. Still, there is something different about what is happening today. Organizations are amassing mountains of data, silos replicate that data, and new data and new types of data are created daily. For many insurance companies, managing all of this data is becoming impractical. Traditional data management strategies will not scale to effectively govern such large volumes of data for high-performance analytics. The most common obstacle for companies is that they have too much data and too few resources.

But the real challenge begins when companies start extracting meaningful insights from this explosion of data. Fortunately, the science of extracting insight from data is constantly evolving. Tools are more readily available as insurance companies begin to invest in the technology that supports big data.

Stuart Rose, SAS

For a growing number of organizations, the answer has been to take advantage of distributed processing technologies such as Hadoop and its Hadoop Distributed File System (HDFS). Hadoop is an open-source software framework for running applications on a large cluster of commodity hardware. Because Hadoop runs on commodity hardware that scales out easily and quickly, organizations are now able to store and archive far more data at a much lower cost. This is good news for IT, but it should also be music to the business professional's ears. No longer does data need to be destroyed after its regulatory life to save on storage costs. No longer does the business analyst or data scientist need to limit analysis to the last three, five or seven years of data.
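Hadoop's core programming model, MapReduce, is what lets applications run across such a cluster: a map phase emits key-value pairs from each raw record, and a reduce phase aggregates the values for each key. The following is a minimal single-machine sketch of that contract in Python; the claims-log records and the word-count task are illustrative, not from any real system, and a real Hadoop job would distribute both phases across many nodes.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (key, value) pair for every word in each raw record.
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle/reduce: group pairs by key and sum the values per key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# Hypothetical raw log lines standing in for files stored in HDFS.
claims_log = [
    "auto claim filed",
    "auto claim settled",
    "home claim filed",
]

counts = reduce_phase(map_phase(claims_log))
print(counts["claim"])  # 3: every record mentions "claim" once
```

The same mapper and reducer logic, unchanged, is what a framework like Hadoop schedules in parallel over blocks of data spread across the cluster.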

[Previously from Rose: 3 Insurance Business Applications for Text Analytics]

Another advantage of capturing data in Hadoop is that it can be stored in its raw, native state. It does not need to be formatted upfront as with traditional, structured data stores; it can be formatted at the time of the data request.
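This pattern is often called schema-on-read: the raw records are stored untouched, and the schema is applied only when the data is queried. A minimal Python sketch, with illustrative field names and records:

```python
import json

# Raw records are landed as-is, with no upfront schema or formatting.
raw_store = [
    '{"policy": "P-100", "premium": "1200.50", "state": "NC"}',
    '{"policy": "P-101", "premium": "890.00", "state": "VA"}',
]

def read_with_schema(raw_records):
    # The schema (which fields to keep, and their types) is applied
    # at read time, so it can change without reloading the data.
    for record in raw_records:
        doc = json.loads(record)
        yield {"policy": doc["policy"], "premium": float(doc["premium"])}

rows = list(read_with_schema(raw_store))
total_premium = sum(r["premium"] for r in rows)
print(total_premium)  # 2090.5
```

If a new analysis later needs the `state` field, only `read_with_schema` changes; the stored raw data is untouched.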

What do organizations hope to derive from the increased volumes of data they collect? The end goal depends very much on the insurance company, the market conditions and the strategic imperatives of a given carrier. Organizations want more business value from big data, and analytics is an important route to value. Hence the growth in the term "big data analytics."

Getting insights out of Hadoop and big data in a timely manner requires a different approach. Insurers need in-memory analytics, which processes data in RAM rather than on disk, avoiding time-consuming I/O. This approach reduces the data bottleneck while maximizing the computing power that can be applied to the analysis, allowing in-memory processing to run orders of magnitude faster than traditional disk-based analytical processing.
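The core idea can be sketched in a few lines of Python: the dataset is read from its source once, then every subsequent query is a pure in-memory scan with no further I/O. The claims data and the `mean_amount` query below are hypothetical stand-ins for a much larger dataset and workload.

```python
import csv
import io
import statistics

# Hypothetical claims data that would normally be read from disk.
CSV_DATA = """claim_id,line,amount
1,auto,500
2,auto,1500
3,home,3000
4,home,1000
"""

# Load the dataset into RAM once, up front.
claims = [
    {"line": row["line"], "amount": float(row["amount"])}
    for row in csv.DictReader(io.StringIO(CSV_DATA))
]

# Every query after that is an in-memory scan: no disk I/O per query.
def mean_amount(line):
    return statistics.mean(c["amount"] for c in claims if c["line"] == line)

print(mean_amount("auto"))  # 1000.0
print(mean_amount("home"))  # 2000.0
```

Disk-based processing would pay the read cost on every query; here it is paid once, which is what makes repeated, exploratory analysis fast.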

Big data technologies are relatively new and still maturing. However, combining the power of analytics with distributed processing technologies like Hadoop will help insurance companies transform big data to make better business decisions, faster. It will enable statisticians, actuaries and business users to examine and analyze more complex problems than ever before. The ability to quickly analyze big data can redefine so many important business functions, such as risk calculation, price optimization, catastrophe modeling, fraud detection and customer experience. It's hard to imagine any forward-looking company that is not considering its big data strategy, regardless of actual data volume.

Insurers have long seen data as a source of competitive advantage. But data alone is worthless; it is insights derived from the data that matter. With the emergence of big data, the possibility for deriving insights is increasing dramatically. Data can be the difference between success and failure. Better data leads to better decisions, which ultimately leads to more profitable business. Today, the return on information is just as important as the return on investment!

About the Author: Stuart Rose is global insurance marketing manager at Cary, N.C.-based SAS. Rose, a 20-year veteran of the insurance industry, began his career as an actuary. He has worked for a global insurance carrier in both its life and property divisions and has worked for several software vendors, where he was responsible for marketing, product management and application development. He has driven successful development and implementation of enterprise systems with insurance companies in the U.S., the U.K., South Africa and Continental Europe.

[To hear about how insurance companies and financial firms are managing their complex data architectures, attend the Future of the Financial Services Data Center panel at Interop 2014 in Las Vegas, March 31-April 4.]