28 November 2016 | News

When the wind blows

One hundred years ago Bermudians knew when a hurricane was approaching—the waves became rougher, the skies darker and the ground a lot wetter, by which time it was too late to do much apart from batten down the hatches. These days we have all kinds of early warning systems that give us days to prepare.

So far in 2016 these haven’t been needed very much, in Bermuda at least. But the Island has been lucky. As this publication was going to press, Hurricane Matthew was approaching Florida, having already flattened Haiti, the latest in a sudden flurry of tropical depressions and storms.

However, while the majority of people only take notice of storms that make landfall, the truth is a little more complicated if you look further out to sea.

“Actually, the 2016 Atlantic hurricane season has so far been fairly active,” says Eric Uhlhorn, principal scientist at AIR Worldwide.

The National Oceanic and Atmospheric Administration (NOAA) forecast 12 to 17 named storms to form this year, of which five to eight are expected to become hurricanes by season’s end.

By the time this article was finished, Hurricane Hermine had become the first hurricane to make landfall in the state of Florida in a decade, causing floods and storm damage. Damage estimates were in the tens of millions of dollars. Meanwhile, at the time of going to press, Hurricane Matthew was fast approaching.

"Katrina highlighted the effects of a large system-level failure, led to enhanced focus on the quality of exposure data and increased the focus on business interruption." Eric Uhlhorn, AIR Worldwide.

“The 2016 hurricane season is expected to be slightly above the long-term climatological average,” says Andrew Dhesi, vice president, analysis at Tokio Millennium Re.

“We have already observed a rather early start to the hurricane season with the first named tropical storm in January (the hurricane season starts in June and ends in November).

“There are several interconnected factors influencing the intensity of a hurricane season, which can make it generally difficult to predict. The Atlantic multidecadal oscillation (AMO) is one important factor influencing the number and strength of hurricanes. In simple terms, the AMO is a series of alternating, long-duration cool and warm phases of the sea surface temperature of the North Atlantic Ocean, with differences of about 1°F (0.55°C) between extremes.

“On the AMO, which is influenced by many other factors, there is currently no consensus about whether we are shifting into a cooler phase with fewer hurricanes.”

Dhesi points out that exactly when the AMO will switch from a warm phase to a cool phase depends, for example, on the El Niño Southern Oscillation (ENSO)—an irregularly periodic variation in winds and sea surface temperatures over the tropical eastern Pacific Ocean, with a warming phase known as El Niño and a cooling phase known as La Niña.

According to Dhesi, the current forecast of a slightly negative ENSO condition, or La Niña, suggests a potential increase in North Atlantic hurricane activity. However, taking other climatological conditions into consideration, it appears that the 2016 North Atlantic storm season will be close to or slightly above the long-term climatological average.
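The AMO index Dhesi describes is, at its simplest, a long-running anomaly of North Atlantic sea surface temperature (SST). The sketch below illustrates the idea with invented SST values rather than real observations; the phase split and the roughly 0.55°C gap between phases are assumptions chosen to mirror the description above.

```python
# Illustrative only: a toy AMO-style index from invented annual mean
# North Atlantic SST values (deg C). Real AMO calculations use long
# observational records, area weighting, detrending and smoothing.
sst = [20.1, 20.2, 20.15, 20.25, 20.2,   # assumed warm phase
       19.6, 19.65, 19.7, 19.6, 19.65]   # assumed cool phase, ~0.55 C lower

long_term_mean = sum(sst) / len(sst)

# The index is each year's anomaly from the long-term mean; sustained
# positive values mark a warm phase, negative values a cool phase.
amo_index = [round(t - long_term_mean, 3) for t in sst]

warm_years = sum(1 for a in amo_index if a > 0)
print(amo_index)
print(f"{warm_years} of {len(sst)} years in the warm phase")
```

In practice the index is smoothed over a decade or more; this toy version shows only the anomaly-from-mean construction that underlies it.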

Weather watching

Climate scientists have been studying the weather in the North Atlantic very closely of late.

According to RMS meteorologist Tom Sabbatelli, what RMS, and much of the scientific and insurance communities, is primarily looking at right now is whether the Atlantic hurricane basin has shifted into a prolonged period of inactivity in terms of hurricane frequency. Over the last 18 months there has been a lot of discussion within the scientific community about whether the prevailing atmospheric factors in the Atlantic are contributing to a prolonged inactive phase.

“There have been phases over time in the historical record in the Atlantic of maybe decades or more, when there have been high frequencies of hurricanes, above the long-term average, and there have been periods of frequency below that average,” says Sabbatelli.

“It’s been accepted by the scientific community that we’ve been in a period of high activity since 1995, not just higher hurricane activity in the Atlantic basin, but also a higher number of land-falling hurricanes across the countries around the basin.

“We’ve also had a number of years of quiet activity in the basin, since 2013 for example, so there’s also a lot of talk about what’s being called in different circles the ‘hurricane drought’, which refers to the 11-year period since a major hurricane, category 3 or above, made US landfall. The last major hurricane to hit the US coastline was Hurricane Wilma in 2005.”

Sabbatelli says that this is now the longest period in recorded history without a category 3 or higher hurricane making landfall in the US.

“We’ve been spending a lot of time wondering whether the last couple of years of inactivity are a blip or a short pause in a prolonged period of activity, or the beginning of a phase shift to a prolonged inactive period—there are a couple of different theories about that in the scientific community, but there’s really no clear consensus at the moment,” Sabbatelli says.

“The reason for that is that there are two conflicting factors. The first is the AMO, which is an index used by the scientific community that has a signature in Atlantic sea surface temperatures and sea level pressure. That’s very well correlated with periods of active and inactive behaviour: when the phase of hurricane activity shifts the AMO is likely to shift as well.

“A paper published last year suggested that the AMO has shifted for the first time since 1995 into a phase that is detrimental to hurricane activity. So there’s a suggestion that this shift in the AMO is representative of a longer-term switch to a future inactive behaviour,” he explains.

Dhesi agrees, but Sabbatelli has a caveat. “The other theory out there is that the last couple of years of low hurricane frequency have been influenced by the El Niño that’s been getting quite a bit of press over the past couple of years. 2014 and 2015 showed us the strongest El Niño on record,” he says.

“El Niño is very well correlated with vertical wind shear in the Atlantic basin, and that tends to inhibit tropical cyclone development, because it tends to tear storms apart—it doesn’t create an environment for development. So when you have a very strong El Niño it’s very likely that it could have suppressed storm activity over the last couple of years.

“So there’s still debate over whether it’s the El Niño, which has a short-term impact, or the AMO, which has a long-term impact, that’s going to be the prevailing influence on what’s been going on recently.”

If a La Niña develops, could hurricane activity intensify for 2016 and possibly 2017?

Yes, says Dhesi, who adds that if La Niña develops further during the remainder of the 2016 season, and even into the 2017 season, while AMO and North Atlantic Oscillation (NAO) conditions remain unchanged or turn more favourable, hurricane activity over the North Atlantic should increase in both the number and intensity of storms.

“From a science standpoint, models of hurricane risk are benefiting from technological advancements in observations and high-resolution computer-model simulations of storms,” says Uhlhorn.

“Hurricane observations are helping to improve models of hurricane wind structure, including maximum winds and size. New model simulations are becoming able to resolve high-frequency wind variability (ie, gusts) to better relate the actual wind to structural damage.”

Since 1900, many hurricanes have manifested with substantial, destructive, and deadly storm surges, including Hurricane Sandy in 2012, which caused unprecedented coastal flooding in New York and New Jersey.

Modelling advances

In 2015, AIR Worldwide released enhancements to its US hurricane model, including a new, hydrodynamic storm surge component which incorporated recent high-resolution land elevation, land use, and land cover data; and accounted for published studies and damage and claims data (including for new building methods and materials, building codes, and improved flood defences).

Additional enhancements include incorporating the latest observational data on the impact of square footage on wind losses for large, high-value homes and important updates that reflect the latest findings on the vulnerability of manufactured homes.

“Scientists have been working hard on improving hurricane models since the infamous hurricanes Katrina, Rita and Wilma (KRW) in 2005. These devastating storms generated large interest in the government, industry and general public. This increased attention has certainly helped the availability of research funds from both government and industry,” says Dhesi.

Hurricanes of the nature of KRW revealed deficiencies in the models’ ability to forecast storm intensity. Many ensuing studies have therefore focused on understanding the physics of rapid hurricane intensification, as demonstrated in events such as hurricanes Rita and Katrina.

Researchers have also tried to assimilate more data, such as satellite-derived rapid-scan wind observations, surface observations, and airborne Doppler radar observations, to generate more accurate initial conditions in global climate models for hurricane genesis, track and intensity simulation.

For seasonal and multiseasonal hurricane outlooks, researchers have developed hybrid statistical-dynamical models that are forced by forecast sea surface temperature (SST) from global climate models.

While predicting AMO phase shifts remains nearly impossible, at least for now, researchers have found that SST may be predicted up to four years ahead at a 95 percent significance level.
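The hybrid statistical-dynamical approach described above can be caricatured in a few lines: a global climate model supplies a forecast SST anomaly, and a statistical relation fitted to history converts it into an expected storm count. The log-linear form and the coefficients below are invented for illustration, not taken from any published forecast model.

```python
import math

# Assumed relation: E[count] = exp(a + b * sst_anomaly). In practice the
# coefficients would be fitted to the historical record, e.g. by Poisson
# regression of seasonal hurricane counts on observed SST anomalies.
a, b = 1.8, 0.9  # invented coefficients for illustration

def expected_hurricanes(sst_anomaly_c: float) -> float:
    """Expected seasonal hurricane count for a forecast SST anomaly (deg C)."""
    return math.exp(a + b * sst_anomaly_c)

# A warm forecast anomaly raises the expected count; a cool one lowers it.
for anomaly in (-0.3, 0.0, 0.3):
    print(f"SST anomaly {anomaly:+.1f} C -> "
          f"{expected_hurricanes(anomaly):.1f} expected hurricanes")
```

The point of the hybrid design is that the skill comes from two places: the dynamical model's multi-year SST predictability and the statistical link between SST and storm counts.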

From a cat modelling perspective, the vendor modelling firms have made significant changes to their models over time, particularly in the wake of KRW. The near-term models introduced to the market condition the frequency of events on a heightened phase of activity. In addition, a number of changes were introduced in how sub-perils such as storm surge are modelled, taking more of a physical approach to the peril versus a parametric/numerical approach. The lessons learned from these events led to further improvements in the damage and vulnerability functions of structures embedded in the cat models, leveraging the vast amounts of claims data made available following these events.
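The ingredients mentioned above (event frequencies, hazard intensity and vulnerability functions) combine in a cat model's core loss calculation. A minimal sketch, with an invented event set and a toy linear vulnerability curve, neither drawn from any vendor's actual model:

```python
# Illustrative only: the core cat-model loss calculation with invented
# numbers. Real models use large stochastic event sets and empirically
# fitted vulnerability curves.

def damage_ratio(wind_speed_mph: float) -> float:
    """Toy vulnerability function: fraction of value lost at a given wind speed."""
    if wind_speed_mph < 75:
        return 0.0
    # Assumed: damage rises linearly from 0 at 75 mph to total loss at 175 mph.
    return min(1.0, (wind_speed_mph - 75) / 100)

# Stochastic event set: (annual rate, peak wind at the site in mph).
events = [(0.10, 90), (0.04, 120), (0.01, 160)]

exposure_value = 1_000_000  # insured value at the site (assumed)

# Average annual loss (AAL): rate-weighted sum of event losses.
aal = sum(rate * damage_ratio(wind) * exposure_value for rate, wind in events)
print(f"Average annual loss: ${aal:,.0f}")
```

The post-KRW changes the paragraph describes amount to replacing pieces of this chain: physical rather than parametric sub-peril hazard, and claims-calibrated rather than assumed vulnerability curves.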

According to Sabbatelli, the subsequent hurricane models that emerge from all of this are rigorously checked by states that might be impacted by hurricanes.

“We release a new hurricane model for the North Atlantic about every two years and there’s a biennial review by the Florida Commission on Hurricane Loss Projection Methodology. That’s a team of experts convened by the state legislature of Florida to ensure that catastrophe models produced by third party vendors project losses that are realistic: not so high that they disadvantage Florida homeowners, and not so low that they underestimate the amount of risk in the state of Florida. They review each third party model quite rigorously,” Sabbatelli says.

“Once the review concludes then we submit the model for review in other states, such as Louisiana and South Carolina, which are other states that perform model reviews. Version 15 was released in March 2015, so there’s a significant amount of time that goes into the Florida commission review and time the other states need for review as well. The latest version we’re working on is Version 17 and that’s due to be released in the spring of next year.”

This year marks 11 years since Hurricane Katrina struck the US.

“Hurricane Katrina was the worst natural disaster the insurance industry has faced, with total insured payouts of more than $41 billion,” says Uhlhorn.

“One of the most powerful storms to strike the US Gulf Coast, Katrina was a landmark event that has provoked a new awareness in risk estimation and risk management, with numerous implications for the insurance and reinsurance industries—as well as for catastrophe modellers. The storm redefined the concept of a modern megadisaster, not only as a result of the physical damage from wind and water, but also because of the complexity of the insurance issues that arose, including claims handling, demand surge, and flood insurance coverage.

“Moreover, Katrina highlighted the effects of a large system-level failure, led to enhanced focus on the quality of exposure data, and increased the focus on business interruption and contingent business interruption exposures, all of which have spurred advances in the science of modelling wind and storm surge. Hurricane Katrina validated the importance of modelling for improving risk management.”

2005/2006 was also notable for the number of other hurricanes that passed through the Gulf of Mexico and hit the US offshore energy market.

Uhlhorn recalls that more than 135 hurricanes have formed in or entered the Gulf of Mexico since 1900, but that in 2004 and 2005 alone, storms including the KRW hurricanes inflicted unprecedented damage on offshore energy assets. Together, Katrina and Rita collapsed 117 major platforms and rigs. Many minor platforms, such as freestanding caissons and well protectors, were destroyed, and more than 2,000 additional platforms suffered lesser degrees of damage. Today, more than 4,500 platforms and rigs operate in the Gulf of Mexico, making it essential for companies operating in this market to have the tools to mitigate the impact of the next catastrophe effectively.

Managing hurricane risk to offshore assets requires companies to understand four basic components of the risk: the exposure at risk, the hazard, the vulnerability of the exposed inventory, and how to appropriately model the complex policy conditions that prevail in this market.
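Of the four components listed above, the policy conditions are the most mechanical to illustrate: a deductible and a limit transform a ground-up loss into the insurer's loss. A minimal sketch with invented figures (the deductible, limit and loss amounts are assumptions, not market data):

```python
# Illustrative only: applying simple policy terms (deductible and limit)
# to a ground-up loss. Real offshore energy policies layer many more
# conditions (sub-limits, reinstatements, combined-peril clauses).

def gross_loss(ground_up: float, deductible: float, limit: float) -> float:
    """Insurer pays the loss above the deductible, capped at the limit."""
    return min(max(ground_up - deductible, 0.0), limit)

# Example: a platform insured with a $5m deductible and a $50m limit.
for loss in (2e6, 20e6, 80e6):
    paid = gross_loss(loss, 5e6, 50e6)
    print(f"ground-up ${loss/1e6:.0f}m -> insurer pays ${paid/1e6:.0f}m")
```

The other three components (exposure, hazard and vulnerability) determine the ground-up loss fed into terms like these.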

A potential problem is that all of this happened more than a decade ago.

Says Uhlhorn: “Industry turnover and expansion means many new people have joined the industry who may not have experienced catastrophic losses first-hand. On top of this, the widespread soft market conditions seem, from anecdotal evidence, to be reducing the time spent on assessing the underlying risk.

“Reports from some quarters indicate exposure data quality is diminishing and higher resolution exposure data sets are no longer being provided as part of the risk transfer discussion,” he concludes.