

Insurers Scrub-Up for CRM

Before insurance companies can implement client-facing technologies, they must clean and refocus data for customer interactions.

Despite all of the advancements in information technology, from faster processors to higher-capacity storage devices, one thing remains constant: Garbage In Equals Garbage Out.

Today, the adage first voiced by the pioneers of information technology remains as true as ever. Insurance companies have a wealth of information stored in many places and used for different purposes by various departments, but gathering that data and making sense of it is an ever-increasing challenge, especially with today's focus on CRM. And in an industry that is all about information, "garbage in" can wreak havoc on any CRM project.

"Data quality is the backbone of any CRM installation," says Ron Barker, senior principal and insurance area practice leader at Chicago-based Knightsbridge, a data warehousing solutions provider. "In fact, most insurance companies do not know they even have a data problem until it shows itself" during the implementation of CRM technology.

The quality of data is so important for CRM applications because, as Mike Morand, director of database administration at American National Insurance Co. (Galveston, TX, $11.8 billion in assets) points out, customer-facing technologies open an insurer's insides for all to view. "Data quality is a huge concern when you publish" information for customers to see, Morand says. "When you publish it, it has to be consumable."

And, as carriers begin to ready their own systems for CRM, they often do not like what they are finding. "Insurance companies have not really been concerned with the quality or cleanliness of data because where they were using it" (for analysis, underwriting and risk management) having clean data "was never that critical," says Mike Helms, insurance industry consultant for Dayton, OH-based Teradata, a division of NCR. While insurers were not intentionally sloppy with data or misusing it, Helms points out, there was no reason to go to the trouble of thoroughly scrubbing data that most people would never actually see. "Insurance companies collect a wealth of data," primarily during policy issuance, "but afterwards it is not highly scrutinized," he notes.

For instance, if the marketing department runs an analysis of a policyholder list to determine which demographic is most likely to buy a variable annuity, it does not matter whether some names are misspelled or some of the mailing-address information is out of date. The same is true when the underwriting department analyzes a database to determine which type of policyholder is ultimately a higher risk for auto insurance. However, if the insurance company wants to market variable annuities to the specific policyholders that fit the demographic, the data must be correct.
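
A minimal sketch of that distinction, using hypothetical field names and records, might look like the following: the aggregate analysis is unaffected by a misspelled name or a malformed ZIP code, while the mailing step has to filter such records out.

```python
# Minimal sketch, hypothetical field names and records: the same policyholder
# list serves an aggregate analysis, which tolerates dirty contact data, and a
# direct mailing, which does not.
from collections import Counter

policyholders = [
    {"name": "Jon Smiht", "age_band": "45-54", "zip": "6061", "product": "auto"},
    {"name": "Mary Jones", "age_band": "45-54", "zip": "60614", "product": "life"},
    {"name": "", "age_band": "35-44", "zip": "10003", "product": "auto"},
]

# Aggregate analysis: a misspelled name or truncated ZIP does not change the counts.
buyers_by_age_band = Counter(p["age_band"] for p in policyholders)

# Direct mail: each record must pass basic contact-data checks before it is used.
def mailable(p):
    return p["name"].strip() != "" and p["zip"].isdigit() and len(p["zip"]) == 5

mailing_list = [p for p in policyholders if mailable(p)]
print(buyers_by_age_band)
print(len(mailing_list), "of", len(policyholders), "records are fit for mailing")
```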

Transaction vs. Interaction

To avoid embarrassing themselves in front of customers, insurance companies must ready their data, as American National's Morand says, "for human consumption." However, to do that carriers must fundamentally change the way that data is stored, viewed and treated.

"Most data at insurers is stored transactionally," says John Lucker, senior manager, Advanced Quantitative Services, Deloitte & Touche (New York). "The data is usually not aggregated to be looked at from a customer perspective. One of the biggest obstacles I am seeing is the data is not structured for CRM. Insurance companies really have to look at the structure of the data."

The difference between looking at the structure of the data and actually reworking it for CRM purposes is vast. "To completely overhaul the structure of the data is very difficult," Lucker adds. "Something like that would take tremendous amounts of time and money and is not an option," since overhauling the data structure would essentially require overhauling the way the legacy systems handle data. Instead, most carriers are looking to form datamarts and data warehouses to house data that front-end, customer-facing systems use for interactions, while the back-end systems continue to do what they do best: process transactions, Lucker adds.
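
In practice, that separation can be as simple as deriving a customer-keyed view from the transaction records while leaving the transactional store alone. The sketch below is illustrative only, with hypothetical fields, and is not any particular carrier's schema.

```python
# Illustrative sketch with hypothetical fields: the back-end system of record
# keeps its transaction rows; a customer-keyed view is derived for CRM use.
from collections import defaultdict

transactions = [
    {"customer_id": "C100", "policy": "AUTO-1", "event": "premium_paid", "amount": 420.00},
    {"customer_id": "C100", "policy": "HOME-7", "event": "claim_opened", "amount": 0.00},
    {"customer_id": "C200", "policy": "LIFE-3", "event": "premium_paid", "amount": 95.00},
]

customer_view = defaultdict(lambda: {"policies": set(), "open_claims": 0, "premiums_paid": 0.0})
for t in transactions:
    view = customer_view[t["customer_id"]]
    view["policies"].add(t["policy"])
    if t["event"] == "claim_opened":
        view["open_claims"] += 1
    elif t["event"] == "premium_paid":
        view["premiums_paid"] += t["amount"]

# The CRM front end reads this aggregated view; the transaction store is untouched.
print(dict(customer_view))
```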

Luckily, says John Ounjian, chief information officer at Blue Cross Blue Shield of Minnesota (Eagan, MN, $33 million in assets), his company solved the data-quality problem first. "We did the data part up front," he says. "Only after we finished the project were we able to look back and realize how important that decision was. Getting a hold of data at the beginning is something that people seriously underestimate." However, the carrier did need some help on the interaction side of the equation.

For instance, Ounjian and his team found that customers' mailing addresses in the mailroom database differed from the information in the processing systems. "The processing systems didn't have the updated addresses," he says. "The part of the company that kept the data current was the mailroom. The transaction-side processing system didn't need the updated addresses.

"With CRM and insurance, you are integrating the interaction world of CRM with the transaction world" of insurance, Ounjian adds. "Throughout the years, all of the data in insurance has been oriented towards transaction processing." For instance, BCBS-MN found that all of the policyholder ZIP codes in its policy administration systems were compressed by one digit—most likely from a time when saving one digit on each policyholder ZIP code would save the carrier vital and very expensive storage space, Ounjian says.

"You can't present a compressed field to the customer," Ounjian says, adding that the ZIP code appears to the customer as an error. "That type of data is meant for systems, not humans." Also, "much of the data is formatted for experts to look at," such as claims and underwriting professionals. "The code 'P1' means a lot to an insurance expert, but it doesn't mean anything to the customer. Changing to 'interaction' data is a completely different way of doing business," he says.

CRM Speak

In fact, only after BCBS-MN took its customer-facing applications to a focus group did it realize it had to change its presentation. "As a company, our logic and vocabulary are built around transactions," Ounjian says. The customers are not "an audience that knows transaction processing. When we brought it to a focus group, they found it confusing, so we had to change it."

BCBS-MN combined its many data sources, including the mailroom database, into an Oracle (Redwood Shores, CA) relational database. The Oracle database houses the data that powers the CRM offerings, including a policyholder-facing Web site and a call center powered by a suite of Aspect Communications (San Jose, CA) products: Aspect Contact Server, Aspect Advanced Routing, Aspect eWorkforce Management, Aspect Customer Self-Service and Aspect Web Interaction. The products allow the carrier's members to receive information via the Web, e-mail and phone. To route data between systems, the carrier relies on IBM's (Armonk, NY) MQ Series.
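
The consolidation pattern itself is straightforward even without the specific products. The sketch below uses Python's built-in sqlite3 module as a stand-in relational store, with hypothetical member records; it is not the carrier's actual Oracle, Aspect or MQ Series configuration.

```python
# Generic consolidation sketch using sqlite3 as a stand-in relational store;
# the member records and schema are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE member_profile (
    member_id TEXT PRIMARY KEY, name TEXT, street TEXT, zip TEXT, plan_code TEXT)""")

# Each source system contributes only the fields it actually owns.
policy_admin = [("M-001", "A. Member", "PLAN-B")]       # name and plan
mailroom     = [("M-001", "340 Elm Ave", "55414")]      # current address

for member_id, name, plan in policy_admin:
    conn.execute("INSERT INTO member_profile (member_id, name, plan_code) VALUES (?, ?, ?)",
                 (member_id, name, plan))
for member_id, street, zip_code in mailroom:
    conn.execute("UPDATE member_profile SET street = ?, zip = ? WHERE member_id = ?",
                 (street, zip_code, member_id))

# The Web site and call center would query this consolidated view.
print(conn.execute("SELECT * FROM member_profile").fetchone())
```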

However, even mentioning a data warehouse project may cause the hair to stand up on the necks of many senior managers. Over the years, data warehouse projects have gained the reputation of being huge projects that are never entirely completed and waste millions of IT dollars. "For the most part, data warehouses have failed because of data quality," says Tho Nguyen, director of data warehousing strategy, SAS Institute (Cary, NC). "They never seem to consider data quality. That gives the impression that data warehouses are failures," Nguyen adds. "But there are many success stories. It is all about how the project is approached."

When data warehousing became popular about a decade ago, many companies thought that creating a single source of information for data in the company was the way to go, according to Nguyen. While getting all of the data into one place is a worthy goal of any data warehouse initiative, tackling the entire project at once can be a problem.

"Eating the entire elephant in one bite does not work," Teradata's Helms says metaphorically. "I don't think that newer technology has enabled us to make it more possible to create a data warehouse."

He adds, "The approach has changed. We are not starting at the enterprise level. All of the data warehouses we are involved with start in a business unit."

But even with a smaller project that starts at the business-unit level, there still needs to be a project champion, preferably someone with some stature. "The first step is to have buy-in from a senior executive, such as the CEO, for a CRM project that moves a company from a product focus to a client focus," according to Knightsbridge's Barker.

Senior leaders should create a dedicated position to deal with data issues, advises Deloitte's Lucker. "We have recommended to every client we work with that they create a dedicated position, a data czar, if you will," he says. "Data is the core of the insurance business."

Greg MacSweeney is editorial director of InformationWeek Financial Services, whose brands include Wall Street & Technology, Bank Systems & Technology, Advanced Trading, and Insurance & Technology.
