Friday, October 06, 2017 by Isabelle Z.
The 100-year flood plain maps created by the Federal Emergency Management Agency (FEMA) carry a lot of weight. They are used by insurance companies to predict risk when setting policy terms and premiums, and federal and state officials use them to plan for floods. They must be reasonably accurate to be given such powerful votes of confidence, right?
Not so fast. A new study that examined a decade of data found that FEMA’s flood maps failed to capture three-fourths of the flood damages inflicted by five serious floods. None of the floods studied, which occurred between 1999 and 2009, reached the threshold for a 100-year event.
The study was carried out by land use experts and hydrologists from Texas A&M University at Galveston and Rice University and published in Natural Hazards Review. Their timing could hardly have been better: just days after publication, flooding in Texas was thrust into the spotlight by Hurricane Harvey's devastation of Houston.
Study co-author Antonia Sebastian said the main takeaway from the study is that many losses take place outside the FEMA 100-year flood plains. Lead author Russell Blessing believes they can do better by making use of some of the innovative hydrological and computational tools now available, which are capable of producing more predictive maps.
For their study, they looked at the Armand Bayou watershed, which covers 60 square miles in the southeastern part of Harris County. This area includes parts of Houston, Deer Park, Taylor Lake Village, La Porte, Pasadena and some unincorporated areas. The five major rain events that took place during the study period were Tropical Storm Allison in 2001, two rainstorms that led to flooding in 2006 and 2009, Tropical Storm Erin in 2007, and Hurricane Ike in 2008.
When FEMA talks about a 100-year flood event, it is referring to a flood with a 100-year return interval. This means there is a one percent chance — 1 in 100 — that such an event will occur in any given year; a 50-year event, likewise, has a two percent chance of occurring each year. By focusing on 100-year events, FEMA overlooks the fact that rainfall events falling short of that threshold are still capable of causing severe flooding.
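The arithmetic behind return periods is simple but often misread. The sketch below (illustrative values, not from the study) converts a return period into an annual probability, and also shows why "100-year" is misleading: over a 30-year mortgage, the chance of seeing at least one such flood is roughly one in four.

```python
def annual_probability(return_period_years: float) -> float:
    """Annual chance of a flood with the given return period."""
    return 1.0 / return_period_years

def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    """Chance of at least one such flood over a multi-year horizon,
    assuming each year is independent."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

print(annual_probability(100))     # 0.01 -- the "1 percent" chance
print(annual_probability(50))      # 0.02 -- the "2 percent" chance
print(round(prob_at_least_one(100, 30), 2))  # 0.26 -- over 30 years
```

The last line is the key point: a "100-year flood" is not a once-a-century event for any given homeowner, but a meaningful risk over the life of a typical mortgage.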
The researchers cited several problems with FEMA's 100-year flood plain maps, the first of which is that they work on the assumption that flooding only takes place in one dimension — upstream or downstream — rather than perpendicular to the channel. This is inaccurate in low-lying areas like the Armand Bayou, where floodwaters can flow in any direction depending on how high they rise.
Another problem is that FEMA's models apply a single classification to an entire neighborhood or series of neighborhoods, discounting the important role played by soil type — sand versus clay, for example — and the way the land is used.
The researchers suggested that approaches like probabilistic flood plain mapping and distributed hydrology modeling could yield better predictions of flood risk and damages. However, alarming as these results are, it seems unlikely that FEMA will change its approach.
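To give a feel for the probabilistic idea: instead of drawing one fixed 100-year line, a probabilistic map estimates the chance that each location floods by simulating many plausible storms. The toy Monte Carlo sketch below illustrates the concept only — the rainfall distribution and stage model are invented for this example and bear no relation to the study's actual methods.

```python
import random

random.seed(42)

def sample_annual_max_rainfall_in() -> float:
    # Crude stand-in for an annual-maximum rainfall distribution
    # (inches); a real model would be fit to gauge records.
    return random.expovariate(1 / 6.0)

def flood_depth_ft(rainfall_in: float, ground_elev_ft: float) -> float:
    # Hypothetical stage model: water surface rises with rainfall.
    water_surface_ft = 10.0 + 0.8 * rainfall_in
    return max(0.0, water_surface_ft - ground_elev_ft)

def flooding_probability(ground_elev_ft: float, trials: int = 100_000) -> float:
    # Estimate the annual chance this elevation gets wet by sampling
    # many simulated storm years.
    wet = sum(
        flood_depth_ft(sample_annual_max_rainfall_in(), ground_elev_ft) > 0
        for _ in range(trials)
    )
    return wet / trials

for elev in (12.0, 16.0, 20.0):
    print(f"elevation {elev} ft: P(flood per year) ~ {flooding_probability(elev):.3f}")
```

The output is a continuous risk surface — every elevation gets a probability — rather than a binary in-or-out line, which is exactly the distinction the researchers draw between probabilistic mapping and FEMA's fixed 100-year boundary.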
FEMA doesn’t have the best reputation when it comes to competence. In 2011, they made headlines when they asked thousands of Americans who suffered natural disasters to give back more than $22 million they shelled out in government aid, saying they mistakenly paid money to ineligible parties. At the time, they said their employees misunderstood the eligibility rules and made errors in accounting. They also came under fire for putting displaced people up in formaldehyde-laden trailers after Hurricanes Rita and Katrina.
Missing three out of every four claims is an abysmal record, and it’s hard to believe that our tax dollars are funding this incompetent agency. Would your boss continue to pay you if you got your job wrong 75 percent of the time?