5.3 Vulnerability data

The next step in the Stanford Urban Risk Framework does not involve collecting new geospatial data. Instead, it requires a literature review to find statistical models that answer the question: what is the relationship between the hazard intensity measure and damage to the exposed asset? In our case, the hazard intensity measure we’ve collected is flood depth and the exposed asset is the building footprint, so the vulnerability information we need is a way to convert flood depth into a percentage loss of value for a given building. Either we can find reasonable empirical information about this through research, or we can’t.

Fortunately, the U.S. Army Corps of Engineers produces “depth-damage curves” based on reviews of building damage in past flood events; this is in fact a key type of analysis that USACE and FEMA conduct to guide future engineering projects, policies, and programs. On this page of “Economic Guidance Memoranda”, EGM 01-03 provides tables of structure depth-damage and content depth-damage for different residential building types, such as “One Story, No Basement” in Table 1. Other reports from similar sources provide comparable curves for nonresidential buildings, for freshwater vs. saltwater flooding, or for entirely different hazards and assets.

For our demonstration, as a simplifying assumption (which you could improve upon in a more in-depth analysis), we’ll assume that all buildings in our study area experience damage according to the structure depth-damage relationship shown in Table 1, which we’ll now manually type up in R. The negative depths are relative to 0 being the “first floor elevation”, an empirical attribute we can’t easily collect at the building level, even from the Assessor-Recorder data if we had it. So, as a further simplifying assumption, we’ll assume that every building’s first floor is raised 2 ft above the actual ground level for which we have flood depths. That means that flooding at a depth of -1 ft is floodwater that may damage building crawl spaces or other structure below the first floor. We could also use the standard deviations provided in the table to run a Monte Carlo simulation, as demonstrated before, to understand the standard deviation of our final results, given the nonlinearity of this depth-damage curve and the multiple layers of analysis overall.

library(tidyverse)
library(plotly)
vulnerability <- data.frame(
  depth = -2:16, # ft relative to first floor elevation
  perc_damage = c( # structure depth-damage, EGM 01-03 Table 1
    0,
    0.025,
    0.134,
    0.233,
    0.321,
    0.401,
    0.471,
    0.532,
    0.586,
    0.632,
    0.672,
    0.705,
    0.732,
    0.754,
    0.772,
    0.785,
    0.795,
    0.802,
    0.807
  )
)
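The Monte Carlo idea mentioned above can be sketched quickly: perturb each point on the depth-damage curve with a normal draw and repeat many times. Note that the standard deviation below is a placeholder assumption for illustration only, not the actual values reported in EGM 01-03 Table 1.

```r
# Sketch of a Monte Carlo perturbation of the depth-damage curve.
# The SD here is a hypothetical placeholder, not from EGM 01-03.
set.seed(123)
curve_damage <- c(0, 0.025, 0.134, 0.233, 0.321, 0.401, 0.471, 0.532, 0.586,
                  0.632, 0.672, 0.705, 0.732, 0.754, 0.772, 0.785, 0.795,
                  0.802, 0.807) # same values as vulnerability$perc_damage
sd_assumed <- 0.03 # hypothetical standard deviation, for illustration only
n_sims <- 1000
# one column per simulation; damages clamped to the valid [0, 1] range
sim_curves <- replicate(
  n_sims,
  pmin(pmax(rnorm(length(curve_damage), curve_damage, sd_assumed), 0), 1)
)
dim(sim_curves) # 19 depths x 1000 simulated curves
```

Each simulated curve could then be carried through the rest of the analysis to produce a distribution of final results rather than a single point estimate.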

Our next step is simply to “look up” the percent damage for every individual building, based on its estimated depth, for each of our 15 scenarios. Because of the nonlinearity of this depth-damage curve, for each building, after subtracting 2 ft from its depth, we need to identify which two rows of vulnerability its flood depth falls between, then use those adjacent rows to perform linear interpolation. We could use lm(), as we’ve used before, to achieve this, but approx() can handle the job more easily:

epa_bldg_exposure <- 
  readRDS("epa_bldg_exposure.rds") %>% 
  mutate(
    avg_depth = avg_depth*0.0328084 - 2 # cm to ft, subtract first floor elevation
  )

epa_bldg_perc_damage <- 
  approx(
    x = vulnerability$depth,
    y = vulnerability$perc_damage,
    xout = epa_bldg_exposure$avg_depth
  ) %>% 
  .[2] %>% # keep just the interpolated damage values (the y element)
  as.data.frame() %>% 
  rename(perc_damage = y) %>% 
  cbind(epa_bldg_exposure)

saveRDS(epa_bldg_perc_damage,"epa_bldg_perc_damage.rds")
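To make the interpolation concrete, here’s approx() applied to a single made-up depth, using just the first few rows of the vulnerability table:

```r
# approx() linearly interpolates between the two bracketing table rows:
# a depth of 0.5 ft falls halfway between 0 ft (0.134) and 1 ft (0.233)
depth <- c(-2, -1, 0, 1)
perc_damage <- c(0, 0.025, 0.134, 0.233)
approx(x = depth, y = perc_damage, xout = 0.5)$y
# 0.134 + 0.5 * (0.233 - 0.134) = 0.1835
```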

To demonstrate what is “happening” to buildings in an interactive plot, let’s first create a version of the tidy dataframe that includes every building in every scenario, filling in an average depth of -2 and a percent damage of 0 wherever we didn’t already have hazard/exposure data. expand.grid() creates a dataframe with every combination of OpenStreetMap ID, sea level rise, and return period; we then join our exposure data to it, which leaves NAs in the depth and damage columns for all combinations without records. We then promptly fill those NAs with our placeholder values. This will help the data properly animate from “scene” to “scene”.

epa_bldg_perc_damage_plot <- 
  expand.grid(
    osm_id = unique(epa_bldg_perc_damage$osm_id),
    SLR = unique(epa_bldg_perc_damage$SLR),
    RP = unique(epa_bldg_perc_damage$RP)
  ) %>% 
  left_join(epa_bldg_perc_damage, by = c("osm_id", "SLR", "RP")) %>% 
  mutate(
    avg_depth = ifelse(
      is.na(avg_depth),
      -2,
      avg_depth
    ),
    perc_damage = ifelse(
      is.na(perc_damage),
      0,
      perc_damage
    )
  )
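As a toy illustration of this expand.grid() + left_join() + fill pattern (the IDs and values below are hypothetical, not our actual data):

```r
library(dplyr)

# Two buildings and two SLR scenarios, but only one observed exposure record;
# expand.grid() creates all four combinations, the join leaves NAs for the
# three missing ones, and we fill those with the placeholder depth of -2 ft
observed <- data.frame(osm_id = "a", SLR = 0, avg_depth = 1.5)
full <- 
  expand.grid(
    osm_id = c("a", "b"),
    SLR = c(0, 25),
    stringsAsFactors = FALSE
  ) %>% 
  left_join(observed, by = c("osm_id", "SLR")) %>% 
  mutate(avg_depth = ifelse(is.na(avg_depth), -2, avg_depth))
nrow(full) # 4 rows, one per combination
```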

Here’s the animation created with plotly, demonstrating the kind of interactivity that becomes possible when advancing from ggplot2 to a web-based plotting package.

epa_plot <- 
  plot_ly() %>% 
  add_trace(
    data = 
      epa_bldg_perc_damage_plot %>% 
        filter(RP == "100") %>% 
        mutate(SLR = SLR %>% as.numeric()),
    x = ~avg_depth,
    y = ~perc_damage,
    frame = ~SLR,
    type = 'scatter',
    mode = 'markers',
    marker = list(
      color = 'rgba(17, 157, 255, 0.01)', # near-transparent, so dense clusters of buildings appear darker
      size = 15
    ),
    showlegend = F
  ) %>% 
  add_trace(
    data = vulnerability,
    x = ~depth,
    y = ~perc_damage,
    type = 'scatter',
    mode = 'markers',
    marker = list(
      color = 'rgb(0,0,0)'
    ),
    showlegend = F
  ) %>% 
  layout(
    xaxis = list(
      title = "Average Flood Depth (ft)",
      zeroline = FALSE
    ),
    yaxis = list(
      title = "Percent Damage"
    ),
    title = "East Palo Alto building damage during<br>100-year storm, by base sea level rise"
  ) %>% 
  config(displayModeBar = F)
epa_plot