Editor's note: This article has been modified from its original version for clarity.
Many people who tried to ride out Hurricane Sandy on Long Island or in New Jersey remarked that they wished they had heeded the warnings of forecasters who recommended evacuation. But such sentiments were often followed by criticism of those same experts, who, in the view of these people, were apt to sensationalize weather events — most notably the previous year's Tropical Storm Irene. While Irene was devastating inland, its coastal effects were minimal compared to Sandy's — and residents of the most dangerous areas didn't expect what happened.
And who can blame them when, as a winter storm bears down on the area, you see headlines like Gawker's "NYC Will Get Either 3 or 30 Inches of Snow This Weekend"? A New York Post article opens with "A nasty wintry mix will dump between one and possibly 20 inches of snow on the city tomorrow…"
The absurdity of these forecasts shouldn't just affect how ordinary citizens prepare for oncoming events. Insurance companies, which spend heavily on modeling capabilities to help them assess risk, should also pause and wonder: How accurate can these things be?
In our coverage of Hurricane Andrew's 20th anniversary, catastrophe modeling pioneer Karen Clark said that, two decades after that storm jump-started the industry, there's a danger that insurance companies are relying too much on models and not drawing the right kind of conclusions from their inputs and outputs.
"One thing that companies have started doing, which doesn't work, is going all the way down to individual policy underwriting, especially focusing on average annual losses for commercial books," she told me. "At that level, the uncertainty is multiple hundreds of percent. When there's a model update, you might see that risks you had classified as good risks suddenly aren't."
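Clark's warning has a simple statistical root: an average annual loss (AAL) estimate is a mean over simulated years, and for a single policy that mean is driven by a handful of rare events. The toy Monte Carlo sketch below (every frequency, severity, and portfolio size is a made-up illustration, not any vendor's actual model) shows how the same estimator that is stable for a whole book of business swings wildly at the individual-policy level:

```python
import random
import statistics

random.seed(42)

def simulate_annual_loss(n_policies):
    """One simulated year: each policy independently suffers a rare large loss."""
    total = 0.0
    for _ in range(n_policies):
        if random.random() < 0.01:            # hypothetical 1-in-100-year event
            total += random.uniform(50_000, 500_000)  # hypothetical severity range
    return total

def aal_estimate(n_policies, n_years=500):
    """Average annual loss from a finite simulation -- itself a noisy estimate."""
    return statistics.mean(simulate_annual_loss(n_policies) for _ in range(n_years))

# Rerun the whole simulation several times to see how much the AAL estimate wanders.
single_policy_runs = [aal_estimate(1) for _ in range(10)]
portfolio_runs = [aal_estimate(250) for _ in range(10)]

def spread(runs):
    """Relative spread of the estimates across reruns: (max - min) / mean."""
    return (max(runs) - min(runs)) / statistics.mean(runs)

print(f"single-policy AAL spread across reruns: {spread(single_policy_runs):.0%}")
print(f"portfolio AAL spread across reruns:     {spread(portfolio_runs):.0%}")
```

Because aggregating independent risks shrinks relative sampling error roughly with the square root of the number of policies, the portfolio-level AAL estimate is far more stable than the policy-level one — which is the statistical version of Clark's point that policy-level figures can move by large margins from one model run (or model update) to the next.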
Clark's company later that year released a risk management platform less reliant on models. It makes sense that, after 20 years of refining practices, the catastrophe modeling industry would make adjustments to use the right mix of tools.
At the same time, predictive modeling has taken off in a big way in other areas of insurance. Granted, weather forecasting is subject to far more variance than some other predictive models. And as I was told multiple times on Twitter immediately after posting the first version of this article, weather models generally do a good job.
However, Sandy — or perhaps Irene before it — proved that a not-insignificant subset of the populace doesn't believe that. The perceived sensationalism of weather forecasting threatens to undermine the credibility of modeling and analytics as a whole — if not among technologists, actuaries, and underwriters, then among another crucial constituency: customers.
When I started this job, I had a conversation with an acquaintance who wanted to know why their poor credit score was being used as a data point in writing their auto insurance policy. They hadn't been in an accident in years, and felt their financial mistakes should not be held against them when it came to insuring the vehicle they use to get to and from work and earn the money to repair those mistakes.
As big data, analytics, and predictive modeling become more ingrained in the enterprise, the kind of data that insurers use to make underwriting decisions is going to be questioned by a small but vocal constituency of consumers and regulators. Hopefully their views of the efficacy of those initiatives won't be colored by the wild and wacky world of weather forecasting and modeling. And insurers should always keep an open mind as to how different data and models should be weighted. A lot can change in a few years.