Abstract
Optimized wildfire risk reduction strategies are generally not resilient in the event of unanticipated or very rare events, presenting a hazard in risk assessments which otherwise rely on actuarial, mean-based statistics to characterize risk. This hazard of actuarial approaches to wildfire risk is perhaps particularly evident for infrequent fire regimes such as those in the temperate forests west of the Cascade Range crest in Oregon and Washington, USA (“Westside”), where fire return intervals often exceed 200 years but where fires can be extremely intense and devastating. In this study, we used wildfire simulations and building location data to evaluate community wildfire exposure and identify plausible disasters that are not based on typical mean-based statistical approaches. We compared the location and magnitude of simulated disasters to historical disasters (1984–2020) in order to characterize plausible surprises which could inform future wildfire risk reduction planning. Results indicate that nearly half of communities are vulnerable to a future disaster, that the magnitude of plausible disasters exceeds any recent historical events, and that ignitions on private land are most likely to result in very high community exposure. Our methods, in combination with more typical actuarial characterizations, provide a way to support investment in and communication with communities exposed to low-probability, high-consequence wildfires.
1. Introduction
“A single number is not a big enough concept to communicate the idea of risk. It takes a whole curve.” [1]
Across the United States, annual wildfire-related losses outpace the resources available to federal, state, and local actors to mitigate future losses. Driven by federal policy, wildfire risk science has advanced rapidly over the past decade to inform mitigation and adaptation strategies and to support strategic allocation of resources across space and time [2,3,4,5,6,7,8]. In the United States, wildfire risk science has coalesced around an actuarial definition of risk, in which risk is a function of both the probability of wildfire occurrence and the consequences of wildfire given that it occurs [6]. Building on that definition, quantitative wildfire risk assessments simulate wildfire occurrence, assess both negative and positive socioecological consequences, and report risk using integrated metrics that can be combined across diverse resources, values, and landscapes [9]. In both wildfire response and pre-season planning settings, quantitative wildfire risk assessment outputs are used to develop optimized strategies that minimize net negative and maximize net positive impacts from wildfire [5,7,10].
While wildfire risk scientists have necessarily agreed upon shared definitions of risk, the public at large does not adhere so strictly to actuarial definitions [1,11,12]. For instance, a homeowner might ask ‘what is the risk of a wildfire in my community,’ when what they are really asking is ‘what is the probability of a wildfire in my community?’ In this case, the homeowner is using the word risk to ask about what risk scientists call “hazard,” the probability of a threat, rather than about consequences. Divergent definitions of risk may complicate communication, but as quantitative risk assessment outputs are used in an expanding range of decision settings by diverse audiences (e.g., emergency managers, non-fire resource professionals, community planners), it is essential to continually review and refine how we communicate risk to support decision making in different contexts [13].
Communicating and characterizing risk in the context of low-probability, high-consequence events is particularly challenging [14]. While other disciplines, from financial planning to national security, and even other natural hazards have developed strategies for explicitly characterizing low-probability, high-consequence events, wildfire risk assessments generally rely on mean-based statistics [13,15,16,17,18,19,20]. Mean-based statistics are inadequate for communicating the plausibility of extreme events, let alone the magnitude of risk [21]. Explicitly characterizing outlier events is particularly important with respect to wildfires because it is precisely those fires that have disproportionate socioecological consequences [22,23,24,25,26]. Not only do outlier events have disproportionate impacts; they are also society’s most fecund opportunity for novel learning in complex systems and subsequent adaptation planning [27,28,29].
Arguably, most if not all wildfire impacts are the result of a disproportionately small number of fires, but this is perhaps especially true in landscapes vulnerable to infrequent but very intense wildfires. For instance, fire return intervals in forests west of the Cascade Range crest in Oregon and Washington, USA (“Westside”), regularly exceed 200 years, and annual burn probabilities are commonly estimated to be less than 0.0001 [30,31,32,33]. At the same time, a handful of fires over the past 120 years have demonstrated that when fires eventually occur, the consequences can be extreme [23,34]. Most notably, a spate of synchronous Westside fires in 2020 burned over 300,000 ha, forced the evacuation of nearly 100,000 people, killed five people, and caused several billion dollars of damage. The 2020 wildfires were often described as “unprecedented” when in fact they were generally characteristic of fires that have impacted the region over the past several centuries, demonstrating the challenge of communicating risk in a landscape driven by very rare events [30,35,36].
One hazard of risk, then, is that depending on the definition and methods used to communicate risk, risk assessments may point end-users towards supposedly rational solutions that might not be so rational under a different definition of risk [37]. On the Westside specifically, risk assessments may point decision makers towards optimized risk reduction strategies that are highly vulnerable to the types of infrequent, extreme events characteristic of the region. Westside communities are rarely represented, named, or ranked in community wildfire risk and exposure reports and papers drawing on mean-based metrics, giving the impression that Westside communities are either not at risk at all, or that the risk is minuscule and resources should be allocated elsewhere. Typically, communities within higher-frequency fire regimes are emphasized in reports and maps [38,39]. Yet, 75% of the population in Oregon and Washington lives in Westside communities, along with all of the associated essential infrastructure and services. While wildfire may be an unlikely annual occurrence, the potential consequences of and concerns around wildfire in these areas demand a more nuanced approach to understanding and communicating risk.
A second hazard of risk as it is so often presented in risk assessments is that integrated, unitless metrics are not easily translated outside the context of the risk assessment itself. Unitless metrics are designed to compare and integrate risk across diverse resources and assets so that, for example, the risk to communities and the risk to wildlife are expressed on the same unitless scale (−100 to 100) and can be combined into a single, comprehensive risk value [6,9]. When operating with an optimization mindset, integrated risk is useful, but to the community planner who wants to know how many homes might be lost during an extreme event, a risk of −75 is not insightful. Risk assessments are used in increasingly diverse decision settings, and methods are needed to tailor outputs to communicate risk to audiences using a non-actuarial lens. Wildfire exposure analysis, in contrast to risk analysis, does not require integrated metrics and therefore may be an effective way to communicate the plausibility of rare events to broad audiences [40].
Rather than rely solely on mean-based and integrated metrics, risk communication in low-frequency fire regimes would benefit from surprise analysis [41]. Surprises are unforeseen, rare, and highly impactful events, and surprise analysis strives to identify potential events that have not otherwise been characterized and to communicate their potential consequences. Surprises are not always calamitous events, but in the case of Westside fire, there is an obvious interest in anticipating potential future disasters in terms of damage to communities. The potential benefit of surprise analysis is that by identifying these events before they happen, we have an opportunity to identify vulnerabilities and adapt without actually having to experience the negative consequences of a disaster. Incorporating surprise analysis into the risk assessment process is particularly useful in low-frequency fire regimes, but may also be useful in socioecologically similar settings such as Patagonia and New Zealand, or even in temperate and boreal forests, where extreme fires are becoming more common [42,43].
Often, potential surprises are identified using statistical analyses of rare event distributions, but in Westside landscapes, where the empirical fire record is limited by the very nature of the fire regime, statistical methods may be insufficient [21,25,44,45]. In such cases, simulations provide an opportunity to investigate plausible surprises. In particular, Monte Carlo-style wildfire simulators produce thousands of iterations of plausible event scenarios and hundreds of thousands of simulated wildfires, many of which presumably illustrate plausible Westside surprises [46,47]. The authors in [41] used simulations to investigate plausible future surprises in western Oregon that might arise as a result of climate change, but to our knowledge, no studies have investigated plausible contemporary surprises.
To demonstrate the utility of surprise analyses in low-frequency fire regimes, we used wildfire simulation outputs from an existing assessment, together with a building footprint dataset, to identify plausible wildfire disasters in Westside communities [31,48]. We also compared simulated disasters to historical Westside fires to evaluate the relevance of the simulation data for characterizing infrequent fires, and to extract lessons from the simulated results. Specifically, we addressed the following questions: (1) What were the magnitudes and sizes of simulated disasters, and how did they compare to historical events? (2) Which communities have experienced historical exposure, and which are vulnerable to plausible future disasters? (3) What is the source of simulated community disaster exposure? (4) How does maximum simulated exposure compare to mean annual building exposure and worst-case scenario integrated risk metrics? We anticipated that the simulations would illustrate novel disasters in terms of location and magnitude compared to historical events. Further, we anticipated that using maximum community exposure would illustrate unique spatial risk distributions among Westside communities compared to either mean annual exposure or integrated worst-case scenario risk. Our aim is to demonstrate that non-actuarial characterizations of risk provide additional information that is useful to managers and planners in any fire-prone landscape, but particularly so in low-frequency fire regimes.
2. Materials and Methods
2.1. Study Area
The study area (Figure 1) is predominantly the region west of the Cascade Range crest (“Westside”) in Oregon and Washington, USA, covering approximately 12.6 million hectares. The Cascade Range crest, running north to south from Washington through Oregon, and in many places rising above 3000 m, plays an enormous role in shaping Pacific Northwest (PNW) climate, generally separating temperate maritime conditions on its west side from the arid, high desert to its east. The study area comprises multiple pyromes defined in the national pyrome dataset, which was also used by the Pacific Northwest Quantitative Wildfire Risk Assessment [31,49]. Pyromes are ecoregion polygons closely aligned with Level III ecoregions [50] but adjusted to reflect fire regimes and, in some cases, fire management jurisdictions. The study area extends to forested areas on the east side of the Cascade Range crest in some cases, to account for fires that ignite east of the crest but are transmitted across it. Ecoregions of southwestern Oregon were not included owing to their different climate and fire characteristics, which are typically aligned with more frequent fire regimes.
The region is characterized by a temperate maritime climate influenced strongly by topography. Annual precipitation ranges between approximately 150 and 500 cm, with the highest amounts falling in temperate rain forests on the Olympic Peninsula and along the coast. Most precipitation falls between October and April, as snow at higher elevations and rain below. Summers are generally very dry, although fog is common on the coast [51] and rainstorms occur in the west Cascades [33]. Maximum summer temperatures range between approximately 20 and 38 °C. Historical Westside fire occurrence has been closely linked to periods of short-term drought in late summer and fall [52]. Particularly disastrous Westside fires appear to be the result of drought, synoptic east winds, and ignition location [53].
Due to the mild and wet climate, Westside forests are exceptionally productive. While generally characterized by mixed-moist conifer forests, potential vegetation types follow approximate elevation gradients. Much of the region between the Coast Range and Cascade Range below ~1000 m is in the western hemlock (Tsuga heterophylla) vegetation zone, but Douglas fir (Pseudotsuga menziesii var. menziesii) is the most common extant species [23,33,54]. Higher elevations in the Cascades are in the Pacific silver fir (Abies amabilis) and mountain hemlock (Tsuga mertensiana) zones. Coastal forests in Oregon and on the Olympic Peninsula are temperate rainforests in the Sitka spruce (Picea sitchensis) zone. Forest structure and composition have been heavily influenced by a legacy of intensive forest management that continues today [55]. National Forests cover approximately 3.5 million hectares, nearly 30% of the study area, mostly at higher elevations in the west Cascades but also including much of the Olympic Peninsula. National Forests are managed for multiple-use objectives, but commercial timber harvests have been significantly reduced over the past three decades. Private industrial timber management is common at mid-elevations in the west Cascades and throughout the Coast Range, where silvicultural prescriptions are dominated by clear-cut methods. Lower elevations in the Willamette Valley and Puget Trough are dominated by private, non-industrial management, including agriculture. Approximately 70% of the PNW population lives on the Westside, predominantly in the Seattle, WA (3.8 million people) and Portland, OR (2.7 million people) metro areas.
2.2. Historical Wildfire Data
Historical building exposure was calculated using two historical fire datasets, collectively representing 1984–2020. We used fire perimeters from Monitoring Trends in Burn Severity (MTBS) for fires in the period 1984–2018 [56]. MTBS includes all incidents ≥405 hectares; we excluded prescribed fires from our analysis. In addition, we included fire perimeters from 2019 to 2020 available from the National Interagency Fire Center (NIFC) [57,58]. NIFC records are not size-limited like the MTBS records. NIFC archives include multiple features for each fire representing the fire over time; for each fire we used the most recent feature, assuming it to be the best estimate of final fire size. Fires from both MTBS and NIFC were included in the historical dataset if any portion of the fire intersected the Westside study area (n = 66, Figure 1B). We assume that this historical dataset collectively includes nearly all exposure events from 1984 to 2020 but recognize that, because of limitations in each of the data sources, we may not have accounted for all historical exposure.
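Assembling the historical dataset reduces to a few filtering, deduplication, and intersection steps. The sketch below illustrates that workflow with geopandas; the file names and attribute columns (incident_type, feature_date, fire_name) are hypothetical stand-ins for the actual MTBS and NIFC schemas.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical file and column names; actual MTBS/NIFC schemas differ.
mtbs = gpd.read_file("mtbs_perimeters_1984_2018.shp")
nifc = gpd.read_file("nifc_perimeters_2019_2020.shp").to_crs(mtbs.crs)
westside = gpd.read_file("westside_study_area.shp").to_crs(mtbs.crs)

# MTBS: keep wildfires only; prescribed fires are excluded.
mtbs = mtbs[mtbs["incident_type"] != "Prescribed Fire"]

# NIFC: archives hold multiple features per fire over time; keep the most
# recent feature as the best estimate of final fire size.
nifc = nifc.sort_values("feature_date").groupby("fire_name").tail(1)

# Combine, then keep any fire whose perimeter intersects the study area.
fires = pd.concat([mtbs, nifc], ignore_index=True)
fires = fires[fires.geometry.intersects(westside.unary_union)]
print(f"{len(fires)} historical fires intersect the Westside study area")
```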
2.3. Simulated Wildfire Data
We analyzed output from wildfire simulations that were conducted as part of the 2018 Pacific Northwest Quantitative Wildfire Risk Assessment (QWRA) [31]. Simulations were performed using the FSim Large Fire Simulator, which has been widely used for local, state, regional, and national fire planning [46,59,60,61,62]. FSim has been described in detail elsewhere [46]; here we summarize only its key features. FSim is a Monte Carlo simulation that produces tens of thousands of iterations of a statistically plausible fire season [46]. FSim is calibrated based on relationships between Energy Release Component (ERC) and historical large fire occurrence [63]. Using modules for weather generation, ignition, fire growth, and suppression, FSim simulates daily fire scenarios across tens of thousands of fire seasons with statistically plausible but variable daily weather, and stores spatially explicit final perimeters and ignition locations for each fire [64]. QWRA simulations were based on contemporary climate from 1992 to 2012; vegetation and fuel conditions were based on 2014 LANDFIRE data layers, updated to account for post-2014 disturbances and local knowledge from fire and natural resource managers; and recent historical fire occurrence data were drawn primarily from the national Fire Occurrence Dataset, which includes all ignitions from 1992 to 2015, again updated to include fires from 2015 to 2017 [31,65,66,67,68,69,70,71,72,73]. QWRA simulations were conducted for 23 contiguous model domains across all of Oregon and Washington at 120 m resolution. Our simulated fire dataset includes all fires that intersected the Westside study area (n = 507,539). We elected to use simulations from the QWRA because the model was carefully updated and calibrated with insight from regional fire personnel and because it is the most recent available risk assessment for the area. The information and data in the QWRA are also referenced and used widely among planners and managers across Oregon and Washington, so our analysis provides a useful complement to those applications.
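FSim's weather, growth, and suppression modules are far more sophisticated than can be reproduced here, but the Monte Carlo structure itself (many synthetic fire seasons, each drawing a fire count and heavy-tailed fire sizes) can be conveyed with a toy sketch. All parameter values below are illustrative assumptions, not FSim calibration values.

```python
import numpy as np

rng = np.random.default_rng(42)
N_SEASONS = 10_000                # FSim runs tens of thousands of seasons
MEAN_FIRES_PER_SEASON = 2.0       # toy value for a low-frequency regime
MU, SIGMA = 5.0, 2.0              # toy lognormal fire-size parameters (ha)

sizes = []
for _ in range(N_SEASONS):
    n_fires = rng.poisson(MEAN_FIRES_PER_SEASON)          # ignitions this season
    sizes.extend(rng.lognormal(MU, SIGMA, size=n_fires))  # heavy-tailed sizes

sizes = np.asarray(sizes)
print(f"{len(sizes):,} simulated fires; median {np.median(sizes):,.0f} ha; "
      f"max {sizes.max():,.0f} ha; "
      f"share >= 20,234 ha: {(sizes >= 20_234).mean():.4%}")
```

Even this toy reproduces the qualitative pattern relevant to surprise analysis: the vast majority of simulated fires are small, while a tiny fraction of iterations produce events far larger than anything in a short empirical record.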
2.4. Building Exposure
We assessed exposure in two related ways. First, we determined the per-fire exposure for each simulated and historical fire by intersecting fire perimeters with building footprint data representing building locations identified using satellite imagery from 2015 [48]. We used only building centroids that fell within the study area, so for fires that burned across the study area boundary and may have exposed buildings both inside and outside of the study area, we counted only buildings exposed within the study area. We further classified any historical or simulated wildfire that exposed ≥100 buildings as a “disaster.” The threshold is based on literature related to empirical building loss analyses, but because our methods measure wildfire exposure only, rather than consequences (i.e., extent of building damage), “disasters” identified in this study are best interpreted as potential disasters [3,74]. We visually evaluated the relationship between fire size and exposure magnitude to determine how many exposure events were disasters and how many disasters were the result of very large fires (≥20,234 ha).
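Per-fire exposure is, in essence, a point-in-polygon count. A minimal geopandas sketch follows; the file names and the fire_id column are hypothetical.

```python
import geopandas as gpd

# Hypothetical inputs: fire perimeters (simulated or historical) and
# building centroids already clipped to the Westside study area.
perims = gpd.read_file("fire_perimeters.gpkg")
buildings = gpd.read_file("building_centroids.gpkg").to_crs(perims.crs)

# Count building centroids falling within each fire perimeter.
joined = gpd.sjoin(buildings, perims[["fire_id", "geometry"]],
                   how="inner", predicate="within")
exposure = joined.groupby("fire_id").size().rename("buildings_exposed")

# Classify fires exposing >= 100 buildings as potential "disasters".
disasters = exposure[exposure >= 100]
print(f"{len(disasters)} fires qualify as potential disasters")
```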
Second, we identified the maximum simulated and historical community exposure events for each of 646 communities. We used community definitions and boundaries developed in [75], which are based on census-designated communities but expanded to include rural, often unincorporated, development identified using GIS-based drive-time analysis. The authors in [39] also used this community dataset in an exposure analysis, allowing us to compare results. We excluded communities along the southern and eastern edges of the study area when half or more of a community’s buildings fell outside the Westside study area. For each community, we intersected all the historical and simulated wildfires and then, using the intersected perimeters, calculated the building exposure within the community resulting from each fire. This allowed us to assign a list of simulated fires to each community and then use exceedance probability curves to compare the likelihood of exposure magnitudes across communities [76]. At the community level, we also plotted the relationship between average annual burn probability (averaged across all burnable pixels for each community) and the maximum simulated exposure for that community.
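Given the list of per-fire community exposures, a community's exceedance probability curve is the empirical probability that exposure meets or exceeds each observed magnitude. A minimal sketch, using hypothetical exposure values for a single community and conditioning on a disaster occurring:

```python
import numpy as np
import matplotlib.pyplot as plt

def exceedance_curve(exposures):
    """Empirical P(X >= x) over the supplied events."""
    x = np.sort(np.asarray(exposures))
    p = 1.0 - np.arange(len(x)) / len(x)  # fraction of events at or above x
    return x, p

# Hypothetical disaster-level exposures (buildings) for one community.
community_disasters = [105, 120, 160, 240, 310, 480, 950, 2098]

x, p = exceedance_curve(community_disasters)
plt.step(x, p, where="post")
plt.xlabel("Buildings exposed")
plt.ylabel("P(exposure >= x | disaster)")
plt.show()
```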
2.5. Exposure Source
We evaluated the source of simulated community exposure in several ways. First, we calculated ignition exposure potential as a way to visually evaluate where the most consequential wildfires ignite. Using the per-fire exposure calculations described above, we added building exposure as an attribute to each simulated ignition point and then interpolated the surface using inverse distance weighting with a power of 0.5, 90 m cell size, and a 7.5 km search radius. Maximum exposure values were binned and mapped using a quantile method.
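Inverse distance weighting with a power below one spreads each ignition's influence broadly, yielding a smooth surface of exposure potential. The sketch below implements the basic calculation (power 0.5, fixed search radius) as a brute-force illustration rather than the production GIS workflow; the example inputs are synthetic.

```python
import numpy as np

def idw_surface(pts, values, grid_x, grid_y, power=0.5, radius=7_500.0):
    """IDW interpolation of per-ignition building exposure onto a grid.

    pts: (n, 2) ignition coordinates (m); values: (n,) exposure counts.
    Only ignitions within `radius` meters of a cell contribute to it.
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.full(gx.shape, np.nan)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(pts[:, 0] - gx[i, j], pts[:, 1] - gy[i, j])
            near = d <= radius
            if near.any():
                w = 1.0 / np.maximum(d[near], 1e-6) ** power  # guard d == 0
                out[i, j] = (w * values[near]).sum() / w.sum()
    return out

# Synthetic example: 90 m cells over a small hypothetical extent.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 9_000, size=(200, 2))
vals = rng.lognormal(2.0, 1.5, size=200)  # per-ignition exposure counts
grid = np.arange(0, 9_000, 90.0)
surface = idw_surface(pts, vals, grid, grid)
```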
Second, we evaluated the source of community exposure by assessing where exposure events ignited with respect to land management types and the wildland urban interface (WUI). Land management types were classified into six categories: US Forest Service, Other Federal, State, Local, Private Non-Industrial, and Private Industrial [77]. For all fires that resulted in any exposure within a community, we calculated the number of buildings and proportion of total exposed buildings that resulted from ignitions in each major land management type. Similarly, for WUI classes, we calculated the number of exposed buildings and proportion of total exposure that resulted from ignitions in each of four classes based on 2010 population and vegetation conditions [78]. The four WUI classes include intermix, interface, forest, and urban.
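Attributing exposure to its source is then a matter of grouping per-fire exposure by the land manager and WUI class at each fire's ignition point. A sketch with hypothetical values:

```python
import pandas as pd

# Hypothetical per-fire table: exposure attributed to the WUI class and
# land manager at each simulated fire's ignition point.
fires = pd.DataFrame({
    "buildings_exposed": [12, 340, 55, 101, 860, 7],
    "ignition_wui": ["Forest", "Intermix", "Forest",
                     "Interface", "Intermix", "Non-Vegetated"],
    "ignition_owner": ["USFS", "Private Non-Industrial", "State",
                       "Private Non-Industrial", "Private Non-Industrial",
                       "Local"],
})

for source in ("ignition_wui", "ignition_owner"):
    totals = fires.groupby(source)["buildings_exposed"].sum()
    shares = (totals / totals.sum()).sort_values(ascending=False)
    print(shares.map("{:.0%}".format), "\n")
```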
We did not evaluate exposure source for historical fires because, at the time of writing, ignition locations for the 2020 fires had not been confirmed, and those fires account for the overwhelming majority of historical exposure.
2.6. Exposure Metric Comparison
Community vulnerability can be characterized and communicated in multiple ways depending on the context and audience. Our aim was to visually compare community maximum simulated building exposure (our analysis) with two other common metrics: (1) mean annual building exposure and (2) worst-case scenario conditional net value change (cNVCworst) for communities. We calculated mean annual building exposure for each Westside community by multiplying the community-wide annual burn probability reported in [39] by the total number of buildings within the Westside study area within each community. cNVCworst was calculated following the methods of Thompson et al. [76] and using data layers from [31]. Conditional net value change (cNVC) is a risk metric that reports the expected consequences given that a fire occurs. cNVC is calculated using pixel-level wildfire intensity values derived from simulations, pixel-based maps of highly valued resources and assets (“HVRA”, e.g., buildings), and expert-derived response functions that indicate how HVRAs respond to fires of a given intensity on a scale of −100 to 100. Pixel-based values were summed within simulated fire perimeters to calculate per-fire cNVC. For our purposes, we were interested in comparing per-fire worst-case scenarios to maximum building exposure, so we calculated cNVCworst for each simulated fire using the Where People Live (WPL) HVRA and associated response functions described in [31]. cNVCworst is interpreted as the worst-case simulated consequences given that a fire of the highest intensity occurs.
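Both comparison metrics are straightforward to compute from assessment outputs. The sketch below shows mean annual exposure as the product of burn probability and building count, and a per-fire cNVCworst as the sum of worst-case response-function values over burned pixels; the grids and parameter values are hypothetical placeholders for the QWRA data layers.

```python
import numpy as np

# (1) Mean annual building exposure for one community.
annual_burn_prob = 2e-4   # community-averaged annual burn probability
n_buildings = 5_000       # buildings within the community
print(f"mean annual exposure = {annual_burn_prob * n_buildings:.2f} buildings/yr")

# (2) Per-fire cNVCworst: sum, over pixels inside the fire perimeter, of the
# response-function value for the highest fire-intensity class (-100..100),
# here for a hypothetical Where People Live raster at 120 m resolution.
rf_worst = np.zeros((50, 50))
rf_worst[10:20, 10:25] = -100.0  # pixels where the HVRA is present
burned = np.zeros((50, 50), dtype=bool)
burned[5:30, 5:30] = True        # pixels inside this fire's perimeter

cnvc_worst = rf_worst[burned].sum()
print(f"per-fire cNVCworst = {cnvc_worst:.0f}")
```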
3. Results
3.1. What Were the Magnitudes and Sizes of Simulated Disasters and How Did They Compare to Historical Events?
Simulations produced 507,539 fires that intersected the Westside study area, of which 21% (n = 108,114) exposed at least one building within the study area. Per-fire building exposure ranged between one and 2340 buildings (Figure 2A). The maximum simulated event exposed more than twice as many buildings as the largest historical exposure event, which exposed 1120 buildings (Figure 2A). In fact, the simulations included 22 fires that resulted in exposure equal to or greater than the worst historical exposure, and 2526 examples of plausible disasters (Figure 2B). The historical fire dataset includes only eight Westside disasters; of those, six occurred simultaneously in September 2020 and account for 75% of all exposure in the historical dataset. Furthermore, the 2020 Beachie Creek Fire alone accounts for 37% of all historical building exposure from 1984 to 2020 (Table 1).
The most disastrous simulated wildfires were not necessarily the largest simulated wildfires (Figure 3). The median size of simulated exposure events was 159 ha (mean = 961 ha) and the median size of simulated disasters was 1041 ha (mean = 2677 ha). Simulations did include 173 very large wildfires (≥20,234 ha), but only 32 of those were also disasters based on our definitions (Figure 3). However, simulated very large wildfires made up approximately 11% of all simulated exposure despite comprising just 0.03% of simulated fires. In contrast, the largest historical fires were also the greatest exposure events (Figure 3). Historical exposure events ranged in size between approximately 20 and 80,000 ha, and the median size of historical exposure events was 3248 ha (mean = 10,650 ha). In contrast to simulations, very large fires accounted for 74% of all historical Westside building exposure, most of which was the result of five fires that occurred in 2020 (Table 1). Notably, not all historical disasters were the result of very large fires; the Echo Mountain Complex (2020) burned just 996 hectares but exposed 363 buildings, the third greatest exposure event since 1984 (Table 1).
3.2. Which Communities Have Experienced Historical Exposure, and Which Communities Are Vulnerable to Plausible Future Disasters?
Historically, only 1.5% (n = 10) of Westside communities experienced any building exposure between 1984 and 2020. However, when communities did experience exposure, 70% of instances were of disaster proportions. The greatest historical exposure events were the result of the 2020 wildfires affecting communities in the Oregon west Cascades (i.e., Gates, Estacada, and Springfield) and Oregon coastal communities such as Rose Lodge (Figure 4). The greatest historical community exposure event was 684 buildings, a result of the Holiday Farm Fire (2020) outside Springfield, Oregon (Figure 4). It is important to consider disasters both as absolute exposure and as a percentage of the total community. For instance, the 684 buildings exposed in Springfield make up just 3% of all buildings in the community, whereas the 513 buildings exposed in Detroit, Oregon accounted for 97% of all community buildings.
Simulations revealed that plausible disasters are widespread, occurring across the Westside. Ninety-six percent (n = 617) of communities experienced simulated exposure, and 43% (n = 275) experienced a simulated disaster (Figure 4). Simulated community maximum exposure ranged between one and 2098 buildings. The highest simulated exposure event occurred in Rochester, Washington, a town with no historical exposure and very limited fire occurrence in general. The Rochester simulated fire exposed 2098 structures, approximately 39% of all structures within the community. In almost all cases, simulated community maximum exposure greatly exceeded historical exposure. Notable exceptions are several of the communities affected by the 2020 wildfires, where historical fires exposed more structures than simulations (Figure 4).
Simulations also reveal that some communities are more vulnerable than others to plausible disasters and that the communities with the most simulated disasters or the highest maximum exposure are not necessarily the communities with highest annual burn probabilities (Figure 5). Many communities experienced more than one simulated disaster and, in some cases, the communities that experienced the most disasters were also communities with the highest annual burn probabilities such as Myrtle Creek, Oregon (Figure 5). In many other cases, a high number of disasters were simulated in communities with comparatively low annual burn probabilities, as in Yelm, Washington, where the annual burn probability is an order of magnitude lower than Myrtle Creek, Oregon (Figure 5). Exceedance probabilities in Figure 6 help to illustrate the range of community vulnerability to disasters. Communities with comparatively high annual burn probability such as Brookings, Oregon have elevated likelihood of disasters that expose 500–1000 buildings, whereas communities with comparatively low annual burn probabilities have shallow exceedance probability curves with very long tails (Figure 6).
3.3. What Is the Source of Simulated Community Disaster Exposure?
Approximately half of all simulated community exposure was the result of fires that ignited within the community where the exposure occurred. Simulated community disasters were particularly likely to be the result of an intracommunity fire; 86% of disaster-caused exposure was the result of an ignition inside the community where the exposure occurred. This is a somewhat different picture of the source of risk compared to historical exposure, 97% of which was the result of fires that exposed buildings in multiple communities.
Across the Westside, ignitions in forest-type WUI classes were the source of approximately 50% of all simulated exposure (Table 2). However, the majority of all the simulated disaster exposure in communities was the result of ignitions on land managed by private, non-industrial owners (Table 2) and ignitions in close proximity to population centers (Figure 7). Despite comprising 26% of the Westside study area, fires originating on national forests accounted for just 8% of exposure incurred during a disaster. For individual communities, the composition of exposure sources varied (Figure 8). For instance, there were eight simulated disasters in Duluth, WA, all of which ignited in urban WUI classes whereas the 20 simulated disasters in Toledo, WA ignited in all major WUI classes (Figure 8).
3.4. How Does Maximum Simulated Exposure Compare to Other More Common Risk Assessment Metrics Derived from Simulations?
Mean annual exposure and cNVCworst each illustrate unique spatial distributions of community wildfire risk (Figure 9). Westside community mean annual building exposure ranges from ≤0.01 buildings across much of the region to 16.2 buildings in Trout Lake, Washington, and, in general, communities with the highest mean annual exposure lie on the eastern and southern edges of the study area (Figure 9A). Notably, many of the communities with the highest maximum simulated exposure (Figure 4B) have some of the lowest mean annual exposure values (Figure 9A). Like maximum simulated exposure (Figure 4B) and distinct from mean annual exposure (Figure 9A), cNVCworst appears to highlight communities in more populous parts of the Westside (Figure 9B). cNVCworst values ranged from −23,374 to zero; communities in and around the Portland and Seattle metro areas have some of the most negative cNVCworst values, as do communities in coastal Oregon (Figure 9B).
4. Discussion
It may seem obvious that Westside communities are, in fact, exposed to wildfire disasters. The 2020 wildfire season and periodic events over the last century have demonstrated the capacity of Westside forests to produce large, intense, and destructive wildfires. Yet, Westside communities are rarely if ever explicitly included in risk and exposure reports that rank communities across the PNW [38,39]; and, when annual burn probability-based wildfire risk is mapped across the PNW, there is little to no visual complexity across Westside landscapes, leaving Westside planners and managers wondering how to characterize their risk [31,79]. The hazard of adhering strictly to actuarial definitions of risk is that the plausibility of surprising fires, the very fires that inevitably have the greatest consequences, is not adequately communicated. So, while the plausibility of Westside disasters is not in itself a novel insight, our aim here was to demonstrate the value of specific and intentional methods for characterizing community wildfire risk in low-frequency fire regimes [14,16,20,26]. The concept of anticipating surprises has been applied to Westside wildfire risk when considering the potential impacts of climate change, but here we demonstrated the utility of surprise analysis for contemporary risk, showing that in low-frequency fire regimes with limited empirical records, past fires are by no means a complete projection of plausible disasters in the near future [41]. Planners and managers can use our results, or re-create the analysis for other resources (e.g., water provision infrastructure), to build narrative scenarios and further explore community vulnerability [80,81].
Interestingly, by comparing simulated and historical events, we observed that many Westside communities are vulnerable to disasters unlike any historical events. Simulated disasters were novel with respect to the specific communities affected and the magnitude of per-fire exposure. Such results might be expected given the paucity of empirical information from areas with low-frequency fire regimes. Over 40% of Westside communities are vulnerable to plausible disasters, including communities in and around the most populous parts of the region and communities with no historical exposure record. For instance, Rochester, WA, which experienced the greatest disaster in the simulations, is listed in [39] as having zero annual residential exposure and appears in Figure 7 in the lowest category of annual building exposure. Consistent with previous observations that simulated annual building exposure commonly exceeds empirical annual exposure across the western United States, we found that simulated disasters greatly exceeded any historical fire in terms of the number of buildings exposed [82].
Our results indicated that future disasters are most likely to be the result of fires that ignite on private land in relatively close proximity to community infrastructure. Although this finding is consistent with similar simulation-based exposure analyses, it is still somewhat unanticipated for two reasons [38,39,83,84]. First, ignitions in interface or intermix WUI, in close proximity to structures, are generally discovered quickly, agencies can respond efficiently, and, historically, suppression responses have been particularly strong [85,86]. One explanation for our finding is that FSim uses a perimeter-trimming algorithm to simulate the effect of suppression on fire size but is agnostic of suppression concerns such as proximity to high-value resources or suppression difficulty [46]. A second reason our results are unanticipated is that they do not obviously align with historical precedent. The majority of historical exposure in our analysis was the result of a handful of fires in 2020 that appear to have ignited on U.S. Forest Service land, although at the time of this writing ignition locations have not been confirmed. Regardless of the land manager associated with their ignition, those few fires (i.e., Beachie Creek and Holiday Farm, Table 1) were very large fires that ignited far from the communities where they eventually caused enormous exposure (i.e., Springfield and Gates, Figure 4). Given the historical record and the limitations we noted regarding FSim’s suppression module, readers might choose to downplay the plausibility of simulated disasters, but we caution against doing so. While the simulated disasters are without obvious precedent in the historical record, they are similar to the Almeda Drive Fire, which burned approximately 1200 hectares in southwest Oregon, just outside our study area, but exposed over 1600 buildings, destroyed 700, and claimed four lives. Similar events have not taken place in Westside communities in the historical record, but our results demonstrate that many Westside communities have combinations of fuel continuity and building density capable of facilitating a disaster [74,82,87].
To specifically characterize Westside community wildfire risk, we combined probabilistic and surprise analysis techniques. Similar to previous studies, we avoided the limitations of mean-based rankings by using exceedance probabilities to clearly illustrate plausible outlier events and to communicate their likelihood [76,82,84,88]. In the case of Westside community exposure, exceedance probability curves help reinforce the idea that disasters are exceedingly unlikely, especially on an annual basis, but are possible and could have extreme consequences. In contrast to mean-based statistics, which distill exposure down to a single number, exceedance probabilities illustrate an entire spectrum of exposure for each community [1]. Further, by comparing community annual burn probability with community exposure exceedance probabilities, we demonstrated that the former neither accurately predicts nor adequately communicates the magnitude of plausible disasters.
Similarly, we included a visual comparison of our exposure metric with cNVCworst, which uses methods outlined in [76] to characterize per-fire worst-case scenarios with respect to communities. Even though cNVCworst is intended to communicate wildfire consequences, which exposure does not, our comparison demonstrates that cNVCworst is relatively intractable outside the context of a risk assessment, arguably limiting opportunities for effective risk communication. Integrated metrics have gained favor to facilitate prioritization across diverse resources and assets, but other studies have demonstrated ways that simple exposure metrics, as opposed to integrated risk, can be used to prioritize risk reduction activities [40,88].
One limitation of our method that deserves attention is that the simulations we used were performed in 2017, prior to the record-setting fire season of 2020. FSim is a Monte Carlo-style model that generates tens of thousands of versions of a plausible fire season based on recent historical fire occurrence and climate. Across all those iterations, the simulations did produce fires that were novel in size compared to empirical fires in the period 1984–2017, but did not produce any fires as large or synchronous as the four largest in 2020. This could reflect the paucity of historical fire data and extreme weather information available to calibrate FSim. The weather that fueled Westside wildfires in 2020 was anomalous and does not appear in the weather records used to calibrate the simulations we used [88]; inclusion of the 2020 fires and their associated weather conditions would most likely affect future simulations. Accordingly, the simulations and our subsequent analysis should not be interpreted as true worst-case scenarios. While downslope winds such as those that drove the 2020 wildfires in western Oregon are generally considered responsible for many of the region’s most significant historical fires spanning the last century or more, there is no clear linkage with human-caused climate change and no agreement on whether similar meteorological events and their consequent fires could become more common [89]. Nonetheless, climate change is expected to increase fuel aridity and landscape susceptibility to future fires, and across the western United States, highly synchronous fire events are increasingly likely and could facilitate disasters not simply because there are more simultaneous fires, but by depleting available national suppression resources [90,91,92].
Following on the limitation described above, ongoing research aims to develop methods for incorporating rare, historical fires into the FSim calibration process. Generally, large fire size and frequency are calibrated in FSim using a comprehensive dataset of ignition locations and fire sizes for fires in the period 1992–2015, because it is the most complete and spatially explicit dataset available [93]. Planned work aims to modify the calibration process to also include pre-1992 fires so that very rare fires are represented in the range of plausible events. Additional related research could aim to describe the myriad ways and settings in which risk assessments are designed and their outputs used. We chose to focus our analysis on a low-frequency fire regime, where risk characterization is particularly challenging; however, audience-tailored risk communication is important in any natural hazard setting [20,94,95]. As simulated, burn probability-based quantitative risk assessments become increasingly common and their outputs are widely distributed to broad audiences, not just fire managers, for use in diverse planning settings, it is important that wildfire risk scientists continue to deliver information in equally diverse formats to meet broad audience interests [96]. To that end, future work might also seek to describe how different audiences respond to the differences between integrated risk metrics and exposure analyses. Finally, as simulated outputs become increasingly useful in decision support settings and as we learn more about Westside fire regimes, there is an opportunity to update model calibration techniques to include more than just the past 30 years of fire history and, hopefully, to better account for rare events.
5. Conclusions
Characterizing wildfire risk in low-frequency fire regimes is particularly challenging because common mean-based risk assessments do not explicitly communicate the plausibility of low-probability, high-consequence wildfires. In addition, the empirical information used to simulate risk, which may cover only a few decades of historical records, may poorly represent plausible wildfire events. Combining surprise analysis with probabilistic techniques provides an opportunity to anticipate future wildfire disasters while still informing resource prioritization schemes. In this study, we demonstrated the utility of simulations for identifying plausible future Westside community wildfire disasters and found that simulations illustrated exposure events more than twice as great as any single historical event, and that nearly half of communities are vulnerable to future disasters even though few have experienced exposure in the past four decades. We found that simulated exposure was most commonly the result of ignitions on private land in forest and intermix WUI types. Finally, comparison of our results with other approaches to risk characterization demonstrated that surprise analysis is a key complement, highlighting Westside communities that are otherwise absent from mean-based analyses.
As wildfire risk assessment output applications become increasingly diverse, our results provide one method for adapting them for improved risk communication in landscapes where wildfire is an infrequent threat. Future work could aim to better understand and characterize other forms of empirical information to calibrate models, as well as the myriad applications of wildfire risk assessment outputs and, further, could aim to better understand how diverse audiences respond to different risk characterization methods.
Author Contributions
Conceptualization, A.M., B.K.K. and J.B.K.; methodology, A.M.; formal analysis, A.M.; investigation, A.M.; data curation, A.M.; writing—original draft preparation, A.M.; writing—review and editing, B.K.K. and J.B.K.; visualization, A.M.; supervision, B.K.K. and J.B.K.; project administration, A.M., B.K.K. and J.B.K.; funding acquisition, B.K.K. and J.B.K. All authors have read and agreed to the published version of the manuscript.
Funding
Research was supported in part by an appointment to the United States Forest Service (USFS) Research Participation Program administered by the Oak Ridge Institute for Science and Education (ORISE) through an interagency agreement between the U.S. Department of Energy (DOE) and the U.S. Department of Agriculture (USDA). ORISE is managed by ORAU under DOE contract number 18IA11261952030. Funding for the appointment was provided by the US Forest Service’s Westside Fire and Climate Adaptation Research Initiative, Pacific Northwest Research Station, Corvallis, Oregon.
Data Availability Statement
Data are available upon request.
Acknowledgments
All opinions expressed in this paper are those of the authors and do not necessarily reflect the policies and views of USDA, DOE, or ORAU/ORISE. We thank Teresa Alcock at the Oregon Dept. of Forestry, as well as Alan Ager and Michelle Day from the U.S. Forest Service, Rocky Mountain Research Station, for conceptual support and guidance. We also acknowledge all the federal, state, and private partners engaged in the Westside Fire and Climate Adaptation Research Initiative for their help identifying and articulating the challenges of risk characterization in low-frequency fire regimes.
Conflicts of Interest
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figures and Tables
Figure 1. (A) Major Westside land manager types and landscape features; (B) annual burn probabilities for the PNW adapted from Gilbertson-Day et al. (2018) and historical wildfire perimeters (1984–2020); (C) population density.
Figure 2. (A) Boxplot of historical and simulated exposure magnitudes plotted on log scale. Labels indicate median and maximum number of buildings exposed. (B) Frequency distribution of simulated disasters (n = 2526). Historical disasters were not included because there were only eight historical disasters (see Table 1).
Figure 3. (A) Frequency of simulated disasters resulting from fires of a given size (there were only eight historical disasters; see Table 1). (B) Per-fire building exposure as a function of fire size for simulated (black) and historical (red) fires. The vertical dashed line marks the threshold for very large fires (20,234 ha) and the horizontal dashed line marks the threshold for a disaster (100 buildings).
Figure 4. (A) Maximum historical community exposure; (B) Maximum simulated community exposure; and (C) The difference between simulated and historical maximum community exposure events. In panels A and B, labeled communities are the five communities with the greatest maximum exposure values. In panel C, labels corresponding to areas mapped as blue indicate the communities where historical exposure exceeded simulated maximum exposure.
Figure 5. (A) The number of simulated community disasters as a function of annual burn probability. (B) Magnitude of community maximum simulated (black) and historical (red) exposure events as a function of average annual burn probability. Labels in both panels identify the ten communities with the greatest simulated maximum exposure.
Figure 6. Exceedance probabilities developed from the simulated dataset illustrating the likelihood that, given a disaster occurs in the community, exposure will exceed a certain number of buildings. The lines in color correspond with the ten communities that had the highest simulated maximum exposure while all other communities are shown in gray in the background.
Figure 7. Maximum exposure potential illustrates the relative magnitude of maximum building exposure that could result from an ignition at a given location. Exposure values are binned into quintiles, so the “Very Low” category contains pixels with the lowest 20% of exposure values and the “Very High” category contains pixels with the highest 20%.
Figure 8. Relative proportion of each community’s disaster exposure that results from unique landowners (A) and unique WUI classes (B). The top 25 communities with the greatest simulated maximum exposure values are shown.
Figure 9. (A) mean annual community building exposure; and (B) community cNVCworst.
Table 1. Top ten historical fires (1984–2020) that resulted in the greatest building exposure.

| Fire Name | Year | Area Burned (ha) | Buildings Exposed |
|---|---|---|---|
| Beachie Creek | 2020 | 78,218 | 1120 |
| Holiday Farm | 2020 | 40,031 | 845 |
| Echo Mountain Complex | 2020 | 996 | 363 |
| Riverside | 2020 | 55,905 | 357 |
| Lionshead | 2020 | 74,402 | 309 |
| Archie Creek | 2020 | 40,581 | 292 |
| Hatchery Complex | 1994 | 11,033 | 258 |
| B & B Complex | 2003 | 36,938 | 209 |
| Norse Peak | 2017 | 20,645 | 96 |
| Chetco Bar | 2017 | 78,860 | 68 |
Table 2. Percent of simulated exposure that resulted from ignitions occurring in each WUI and land manager class. Disaster exposure includes only exposure from simulated fires that exposed ≥100 buildings.

| Source | Portion of Study Area | Total Exposure | Disaster Exposure |
|---|---|---|---|
| WUI Class | | | |
| Forest | 77% | 51% | 43% |
| Intermix | 9% | 35% | 41% |
| Interface | 3% | 8% | 10% |
| Non-Vegetated | 10% | 5% | 5% |
| Land Manager | | | |
| Private Non-Industrial | 57% | 82% | 89% |
| USFS | 26% | 11% | 2% |
| Other Federal | 8% | 2% | 1% |
| Local | <1% | <1% | <1% |
| Private Industrial | 3% | <1% | <1% |
| State | 4% | <1% | <1% |
© 2021 by the authors.
Author Affiliations
1 USDA Forest Service, ORISE Fellow, Pacific Northwest Research Station, Corvallis, OR 97331, USA
2 USDA Forest Service Pacific Northwest Research Station, Corvallis, OR 97331, USA;
3 USDA Western Wildland Environmental Threat Assessment Center, Corvallis, OR 97754, USA;