The 100-year floodplain serves as the primary communicator of flood risk, but this delineation and its corresponding maps have been shown to be inadequate indicators of flood risk and poor predictors of flood damage, especially in urban areas. To be fair, most 100-year floodplain maps were never intended to convey flood risk: the boundaries of the floodplain were drawn to set insurance rates. They do not convey, and were never designed to convey, information about depth or duration of inundation, water flow, historical damage, or susceptibility to pluvial flooding. In addition, the current flood maps do not inform investments in flood control or other infrastructure that positively or negatively influence urban flood risk. In short, decisions based on these maps rely on limited information, often resulting in misguided efforts to increase flood adaptation and resilience. Although traditional hydrologic and hydraulic (H&H) models such as HEC-RAS could provide more accurate flood hazard information, their computational loads, run times, and expense make them infeasible to run over large areas, especially in regions with limited H&H data. To address this limitation, this study uses a machine learning (ML) algorithm to estimate parcel-level flood hazard along the southeastern Texas coast using a long-term record of parcel-level historical flood damage. The purpose was to create improved flood hazard maps that not only better capture where flooding may occur but also enhance risk communication. The rationale for creating these new flood risk maps is not to replace the existing FEMA regulatory floodplain, but to complement it in a way that increases the ability of decision makers and residents to make choices that improve their flood resilience.
1 to 9 of 9 Results
May 13, 2020
William Mobley, 2020, "Flood Hazard Modeling Output", https://doi.org/10.18738/T8/FVJFSW, Texas Data Repository Dataverse, V2
Results from a flood hazard study using the Random Forest classification method to predict the probability of flooding at 30-m resolution for a 30,523 km² area. We generate flood hazard maps for twelve USGS 8-digit watersheds along the coast in southeast Texas.
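As a rough illustration of the method named above (not the study's actual pipeline), the sketch below trains a scikit-learn Random Forest on a table of per-cell predictor values and converts the predicted class probabilities into a hazard surface. The predictor matrix, labels, and sizes are placeholders; in the study they would come from rasters like those listed below (TWI, roughness, KSAT, distances, elevation, flow accumulation, imperviousness).

```python
# Minimal sketch (placeholder data, not the study's pipeline): train a
# Random Forest on per-cell predictor values and map the predicted
# probability of flooding. `X` stacks one column per predictor raster
# flattened to cells; `y` is 1 where historical flood damage was recorded.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_cells, n_predictors = 10_000, 8            # placeholder sizes
X = rng.normal(size=(n_cells, n_predictors)) # placeholder predictor table
y = rng.integers(0, 2, size=n_cells)         # placeholder flood/no-flood labels

clf = RandomForestClassifier(n_estimators=500, oob_score=True, n_jobs=-1)
clf.fit(X, y)

# Probability of the "flooded" class for every 30-m cell; reshaping this
# vector back to the raster grid yields the flood hazard map.
hazard = clf.predict_proba(X)[:, 1]
print(f"OOB accuracy: {clf.oob_score_:.3f}")
```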
May 4, 2020
William Mobley, 2020, "Replication Data for: Flood Hazard Modeling, TWI", https://doi.org/10.18738/T8/85LBLA, Texas Data Repository Dataverse, V1
TWI is calculated by the following equation:

TWI = ln((flow_accumulation × 900 + 1) / tan(slope × π / 180))

where flow_accumulation is the number of upstream cells draining into a cell, 900 m² is the area of a 30-m cell, and slope is in degrees. High values of TWI are associated with concave, low-gradient areas where water often accumulates and pools, making them more vulnerable to floods.
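A minimal NumPy sketch of that calculation, assuming 30-m cells (hence the 900 m² multiplier) and slope given in degrees; the flat-cell guard is an added safeguard, not part of the stated equation:

```python
import numpy as np

def twi(flow_accumulation, slope_deg):
    """Topographic Wetness Index per raster cell."""
    area = flow_accumulation * 900.0 + 1.0    # upstream area in m^2; +1 avoids ln(0)
    tan_slope = np.tan(np.radians(slope_deg))
    tan_slope = np.maximum(tan_slope, 1e-6)   # guard against flat (zero-slope) cells
    return np.log(area / tan_slope)
```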
May 4, 2020
William Mobley, 2020, "Replication Data for: Flood Hazard Modeling, Average Roughness", https://doi.org/10.18738/T8/5ZQMXV, Texas Data Repository Dataverse, V1
Roughness values were assigned to each NLCD land cover class using the values suggested by Kalyanapu et al. (2010), and, like KSAT, were averaged across the contributing upstream area for each raster cell for 2016. Kalyanapu, A.J., Burian, S.J. & McPherson, T.N., 2010. Effect of land use...
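A sketch of the class-to-roughness assignment as a lookup table over NLCD codes; the coefficients shown here are illustrative placeholders, not the actual values taken from Kalyanapu et al.:

```python
import numpy as np

# Illustrative subset of NLCD codes -> Manning's roughness (placeholder values)
ROUGHNESS_BY_CLASS = {11: 0.04, 21: 0.04, 22: 0.07, 41: 0.36, 71: 0.37, 82: 0.04}

def assign_roughness(nlcd):
    """Return a roughness raster for an NLCD land cover raster."""
    out = np.full(nlcd.shape, np.nan)        # NaN where no class is mapped
    for code, n in ROUGHNESS_BY_CLASS.items():
        out[nlcd == code] = n
    return out
```

The resulting raster would then be averaged over each cell's contributing upstream area, as described for KSAT below.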
May 4, 2020
William Mobley, 2020, "Replication Data for: Flood Hazard Modeling, Distance to Coast", https://doi.org/10.18738/T8/YYWO9P, Texas Data Repository Dataverse, V1
Distance to coast was calculated using Euclidean distance based on the National Hydrography Dataset (NHD) stream and coastline features.
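A small sketch of this step using SciPy's Euclidean distance transform, assuming a boolean raster marking NHD coastline cells and square 30-m cells; the same routine would serve the Distance to Stream dataset below.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_to_features(feature_mask, cell_size=30.0):
    """Distance (m) from every cell to the nearest True cell in feature_mask."""
    # distance_transform_edt measures distance to the nearest zero-valued
    # element, so invert the mask before calling it.
    return distance_transform_edt(~feature_mask) * cell_size
```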
May 4, 2020
William Mobley, 2020, "Replication Data for: Flood Hazard Modeling, Average Ksat", https://doi.org/10.18738/T8/UKAGX0, Texas Data Repository Dataverse, V1
KSAT values were assigned to soil classes obtained from the Natural Resources Conservation Service's (NRCS) Soil Survey Geographic Database (SSURGO), and then averaged across the upstream area for each cell. (2020-04-28)
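Conceptually, averaging a quantity over each cell's contributing upstream area can be done with two weighted flow accumulations: one that sums the quantity upstream and one that counts upstream cells. The sketch below assumes a hypothetical `flow_accumulate(weights=...)` helper (any D8 accumulation routine that accepts per-cell weights would do); see the flow accumulation sketch further down for one way to build such a routine.

```python
import numpy as np

def upstream_mean(values, flow_accumulate):
    """Upstream average of `values` (e.g. KSAT or roughness) for every cell.

    `flow_accumulate` is a hypothetical callable that sums per-cell weights
    over each cell's contributing upstream area (the cell itself included).
    """
    upstream_sum = flow_accumulate(weights=values)
    upstream_count = flow_accumulate(weights=np.ones_like(values))
    return upstream_sum / upstream_count
```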
May 4, 2020
William Mobley, 2020, "Replication Data for: Flood Hazard Modeling, Elevation", https://doi.org/10.18738/T8/BPARY8, Texas Data Repository Dataverse, V1
Elevation was calculated using the National Elevation Dataset (NED), which was provided as a seamless raster product via the LANDFIRE website at a 30-m resolution. For more information on the initial dataset see: https://www.landfire.gov/elevation.php
May 4, 2020
William Mobley, 2020, "Replication Data for: Flood Hazard Modeling, Distance to Stream", https://doi.org/10.18738/T8/VLIS6E, Texas Data Repository Dataverse, V1
Distance to Stream was calculated using Euclidean Distance based on the National Hydrography Dataset (NHD) stream and coastline features.
May 4, 2020
William Mobley, 2020, "Replication Data for: Flood Hazard Modeling, Flow Accumulation", https://doi.org/10.18738/T8/QRXYB7, Texas Data Repository Dataverse, V1
Flow accumulation measures the total upstream area that flows into every raster cell based on a flow direction network as determined by the NED.
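A minimal sketch of D8 flow accumulation, assuming a precomputed `receiver` array (a hypothetical input derived from NED flow directions) that maps each cell index to the single downstream cell it drains to, with -1 at outlets. Processing cells in topological order guarantees each cell's count is complete before being passed downstream; multiplying the result by the 900 m² cell area converts the count to upstream area.

```python
import numpy as np

def flow_accumulation(receiver):
    """Upstream cell count (self included) for each cell in a D8 network."""
    n = receiver.size
    acc = np.ones(n)                     # each cell contributes itself
    indeg = np.zeros(n, dtype=int)       # number of upstream neighbors per cell
    for r in receiver:
        if r >= 0:
            indeg[r] += 1

    queue = [i for i in range(n) if indeg[i] == 0]   # ridge cells first
    while queue:
        i = queue.pop()
        r = receiver[i]
        if r >= 0:
            acc[r] += acc[i]             # pass the finished count downstream
            indeg[r] -= 1
            if indeg[r] == 0:            # all upstream contributions received
                queue.append(r)
    return acc
```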
May 4, 2020
William Mobley, 2020, "Replication Data for: Flood Hazard Modeling, Impervious", https://doi.org/10.18738/T8/S7NFPI, Texas Data Repository Dataverse, V1
Percent impervious was measured using the percent developed impervious surface raster from the National Land Cover Database (NLCD).