Introduction
In the last 15 years a number of large earthquakes have occurred worldwide, often accompanied by destructive tsunamis. In several cases, the overall size of the earthquake and/or of the tsunami was unanticipated, and some surprising features were observed in terms of event scaling (e.g., source aspect ratio, tsunami height versus earthquake magnitude) or associated damage; a striking example is the 2011 Tohoku earthquake and tsunami and the consequent nuclear disaster at the Fukushima Daiichi power plant. These events called attention to the need for a systematic reevaluation of current tsunami hazard estimates.
In the past, tsunami hazard was mostly studied through simulations of one or several scenarios, such as the "worst credible" one.
To account for potential variability and frequency of tsunamis, and for
the inclusion of alternative models needed for quantifying epistemic
uncertainty, the probabilistic treatment of a large set of potential tsunami
sources is essential. Probabilistic tsunami hazard analysis (PTHA) arguably began with a few seminal papers.
Uncertainty quantification is one of the main goals of PTHA, and progressively more refined uncertainty treatments have been achieved since the 2004 Indian Ocean tsunami.
Nevertheless, the computational procedure for a complete evaluation of PTHA, fully exploring the natural variability of the sources, can be extremely demanding and unfeasible in some cases, particularly when inundation calculations are involved for a target site. This is due to the very large number of numerical simulations of tsunami generation, propagation and inundation on high-resolution topobathymetric models which is, in principle, required. For example, numerous realizations of heterogeneous slip are needed, usually obtained with stochastic procedures. Indeed, heterogeneous earthquake slip is known to strongly influence the tsunami run-up, and not only in the near-field of the source. A few pioneering studies should be mentioned among the first attempts to quantify the tsunami hazard uncertainty related to heterogeneous earthquake slip. More recently, a multi-hazard approach was proposed that includes stochastic slip distributions and a cascading earthquake–tsunami risk evaluation; however, it considered a limited number of tsunami scenarios without fully characterizing the epistemic uncertainties associated with the key model components. Consequently, an efficient methodology is needed to make (onshore) PTHA a computationally affordable task.
The issue has been dealt with in various ways in several studies. In particular, one line of work focused on seismic PTHA (SPTHA), that is, on hazards associated with tsunamis generated by coseismic seafloor displacement, and developed a method for significantly reducing the computational cost of the assessment, using a source-filtering procedure based on a cluster analysis. This allows for the identification of a subset of important sources that preserves the accuracy of the results. Furthermore, a general procedure was subsequently proposed for the joint and unbiased quantification of aleatory and epistemic uncertainty, incorporating this filtering procedure while stressing the importance of source completeness.
Here, we merge these two approaches, fully developing a method that enables the quantification of local-scale SPTHA, and we also devote a large effort to refining the procedure and introducing several critical improvements. On the one hand, we modified the filtering procedure to enhance its computational efficiency and to adapt it to multiple sources covering a large range of source–target distances. On the other hand, to improve the accuracy, we applied a separate treatment for remote and local sources, selecting near-field scenarios on the basis of the similarity of the coseismic tsunami initial conditions. This is crucial, as near-field sources may challenge the general assumption made in the original filtering method, whereby, for a given source, offshore tsunami amplitude profiles are considered representative of the coastal inundation behind them, regardless of the source location with respect to the coast. This was reasonable in that particular case study, since it considered either far-field scenarios with respect to the target coast or scenarios which deformed the coast in a definite direction; that is, the coast always subsided due to subduction earthquakes on the Hellenic arc. In the presence of a more complex (and realistic) local fault distribution, causing subsidence, uplift or mixed patterns depending on the case, tsunami intensity can be unpredictably reduced or enhanced with respect to the corresponding offshore tsunami wave. Hence, in general, offshore tsunami profiles can be strongly misleading when coseismic deformation of the coast occurs. This may affect the tail of the hazard curves in particular (i.e., the largest intensities), to which local sources significantly contribute, as also demonstrated by previously published disaggregation analyses. For all of these reasons, special treatment is needed for local sources, based on source similarities and considering the coseismic onshore displacement rather than the offshore tsunami wave similarity.
For illustrative purposes, we considered a target site in the
central Mediterranean as a use case, the Milazzo oil refinery (Sicily, Italy) in
the southern Tyrrhenian Sea. This site was previously selected within the
framework of the EU project STREST.
It is worth noting that this paper is strictly methodological and aims to propose a computationally efficient procedure for local-scale SPTHA rather than to provide a realistic site-specific hazard assessment. In fact, for the sake of simplicity and in order not to deflect attention from the core of the method, no effort has been dedicated to constraining and testing the (regional) seismic rates; the local seismic sources and their geometry and dynamics, including slip distributions; or the accuracy of the topobathymetric data used in the tsunami simulations. Moreover, the filtering procedure has been forced to minimize the number of explicit numerical simulations, allowing a relatively large accepted error with respect to the complete initial set of sources due to the introduced approximations.
The paper is organized as follows: Sect. 2 summarizes the general outline of the method for SPTHA evaluation, as proposed in previous works; the innovative developments are described in Sect. 3; Sect. 4 focuses on the illustrative application; conclusive remarks are drawn in Sect. 5.
A general review of the original method for SPTHA
Using regional-scale SPTHA as input for local-scale (site-specific) SPTHA is a task already foreseen in the earlier studies this work builds upon (see Fig. 1 therein). However, this possibility was neither applied nor tested in practice, since the main focus there was the application to regional-scale analyses. The details of the general method have already been thoroughly described and validated in the previous studies. Here we summarize the basic concepts.
The whole general procedure for site-specific SPTHA can be outlined in four steps: (1) the definition of earthquake scenarios and their probability, allowing, in principle, a full exploration of the source aleatory uncertainty; (2) the computation, for each source, of tsunami propagation up to a given offshore isobath; (3) the selection of the relevant scenarios for a given site through a filtering procedure and the corresponding high-resolution tsunami inundation simulations; and (4) the assessment of local SPTHA with joint aleatory and epistemic uncertainty quantification by means of ensemble modeling, including any modeling alternatives implemented during steps (1)–(3).
In step (1), all the modeled earthquakes must be defined for different
seismic regions, which are assumed to be independent of each other. The
earthquake parameters and their logically ordered conditional probabilities
are treated by means of an event tree technique. We emphasize that the common
assumption that tsunami hazard is dominated at all timescales by subduction
zone earthquakes is not used: non-subduction faults, unknown offshore faults,
and diffuse seismicity around major known and well mapped structures are all
taken into account. This strategy attempts to prevent biases in the hazard
due to the incompleteness of the source model. The seismicity related to the main and better-known fault interfaces is treated separately from the rest of the crustal and diffuse seismicity. A similar approach has been used in the recent TSUMAPS-NEAM project.
In step (2), for each scenario retrieved from step (1), the corresponding tsunami generation and propagation is numerically modeled, and the pattern of the offshore tsunami height above sea level is evaluated at a set of points along the 50 m isobath in front of the target area. To provide the input for the local analysis, these points may be limited to a profile in front of the site. The length of this control profile must be tuned depending on the morphology and the extension of the target coast: a compromise has to be reached, as too few points could make the profile insufficiently representative, while too many points could degrade the performance of the subsequent filtering procedure. In practice, the optimal length is the shortest one that makes the offshore hazard curves stable with respect to the source selection; a further increase in length would increase the computational effort without significantly altering the results.
In step (3), using the offshore profiles calculated during step (2), a filtering procedure is implemented to select a subset of relevant sources, based on the similarity of the associated tsunami intensity rather than on the similarity or spatial proximity of the sources themselves. The selected sources, each representative of a cluster of sources producing comparable tsunamis offshore of the target area, are then used for explicit inundation modeling on high-resolution topobathymetric grids. This approach allows for a substantial reduction of the computational cost while preserving the accuracy. However, the original study considered a limited set of sources. The extension to a much larger set of potential sources requires some modifications to the method that, along with several other improvements, are proposed in this study, as reported in Sect. 3.
Incidentally, we note that other wave properties, such as period or polarity, could be relevant in the framework of the cluster analysis. This issue was only briefly discussed in previous work, also with respect to the length of the control profile, as discussed above. Nevertheless, it probably deserves further investigation, considering that analyses of the 2011 Tohoku tsunami showed how inundation was variably controlled by long-period offshore tsunami components on flat coastal plains and by shorter-period peaks in steep coastal areas. Indeed, two cycles of a tsunami time series have been used for identifying similar waves. Conversely, since, as described in the next section, offshore wave comparison is no longer used in the near-field, this issue does not apply to local sources.
In step (4), local SPTHA is quantified. The inundation maps for each representative scenario from step (3) are aggregated according to the probabilities provided during step (1), assigning the total probability of a cluster to the representative scenario. Aleatory and epistemic uncertainty are simultaneously quantified by means of an ensemble-modeling approach over alternative implementations of the previous steps. In practice, steps (1) to (3) can be iterated for each alternative model, and these alternatives can be weighted according to their credibility and the possible correlation among the models. The results are finally integrated through ensemble modeling into a single model which expresses both aleatory and epistemic uncertainty.
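To make the aggregation concrete (the notation below is ours, not the paper's): if each representative scenario c inherits the total mean annual rate Λ_c of its cluster and produces the simulated intensity h_c(x) at a grid point x, the hazard curve and its 50-year exceedance probability under the Poisson assumption adopted later in the paper read

\lambda(x, h) = \sum_{c} \Lambda_c \, \mathbf{1}\!\left[ h_c(x) > h \right], \qquad P_{50}(x, h) = 1 - e^{-50\,\lambda(x, h)} .

The ensemble model is then obtained by taking weighted statistics (e.g., mean and quantiles) of these curves across the alternative implementations of steps (1)–(3).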
Improvements in the filtering procedure
The described method has been tested in both previous studies. However, the first focused on the filtering procedure of step (3), adopting a simplified configuration for the source variability, in which sources were allowed only within the Hellenic arc, a source area too small to represent the full earthquake aleatory variability. The second applied the approach to a regional study that extended to the Ionian Sea in the central Mediterranean Sea; the quantification of the local hazard was instead discussed only in theory, without proposing any application.
The original method adopted a two-stage procedure. In the first stage, scenarios giving a negligible contribution to the offshore tsunami height in front of the target area were removed, assuming they would lead to negligible inundation. Hereafter, we call this stage "Filter H".
As a second filtering stage, a hierarchical cluster analysis (HCA) was carried out, separately for each earthquake magnitude class included in the seismicity model, under the assumption that sources producing similar offshore tsunami heights along the control profile will also produce similar inundation patterns. The distance d between two patterns h^{(i)} and h^{(j)} from two different scenarios i and j was measured by a cost function of the normalized least-squares type, previously used to compare tsunami waveforms in source inversion studies:

d\left(h^{(i)}, h^{(j)}\right) = 1 - \frac{2 \sum_{k} h_k^{(i)} h_k^{(j)}}{\sum_{k} \left[ \left(h_k^{(i)}\right)^2 + \left(h_k^{(j)}\right)^2 \right]} ,    (1)

where k runs over the control points of the 50 m isobath. For each cluster, the scenario closest to the centroid was selected as the reference scenario, with an associated probability corresponding to the probability of occurrence of the entire cluster. The optimal number of clusters (i.e., the "stopping criterion") was assessed by analyzing the variance within each cluster (hereafter, "intra-cluster") as a function of the number of clusters and selecting the largest value still producing significant changes, according to the so-called Beale test.
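For concreteness, Eq. (1) can be implemented in a few lines (a minimal sketch in Python; the array names are ours):

```python
import numpy as np

def waveform_distance(h_i, h_j):
    """Normalized least-squares cost of Eq. (1) between two offshore
    height patterns sampled at the same control points."""
    return 1.0 - 2.0 * np.sum(h_i * h_j) / np.sum(h_i**2 + h_j**2)

# Identical patterns give d = 0; anti-correlated patterns approach d = 2.
d = waveform_distance(np.array([0.2, 0.5, 1.1]), np.array([0.25, 0.45, 1.0]))
```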
Figure 1 Schematic diagram of the computational procedure for evaluating site-specific SPTHA, with special attention paid to step (3) (see text).
[Figure omitted. See PDF]
We implemented a different strategy to further reduce the number of explicit tsunami simulations and introduced a separate treatment for local and remote sources. In particular, the source scenario filtering procedure was revised to improve both the computational efficiency and accuracy, allowing for a full scalability to the source variability of typical SPTHA (millions of scenarios located all over an entire basin). A schematic diagram of the new procedure is sketched in Fig. 1, with (right, step 3b) or without (left, step 3a) the separation between near- and far-field.
We kept Filter H but also adopted an additional filter on the occurrence probability (hereafter, "Filter P"; see Fig. 1), discarding scenarios whose cumulative mean annual rate (mean of the model epistemic uncertainty) is below a fixed threshold. Filter P works as follows. Scenarios are sorted according to their mean annual rate, and the least frequent ones are removed until the cumulated rate of the removed scenarios reaches the selected threshold. This allows for a further reduction of the number of required numerical simulations. On the other hand, this operation introduces a controlled downward bias to the estimated hazard, whose upper limit corresponds (on average) to the probability threshold of Filter P. This threshold can be set at a negligible level in the framework of the overall analysis and/or with respect to other uncertainties. In addition, one can empirically check to what extent this affects the results by analyzing the offshore hazard curves at the control points. This check was quantitatively done by computing the maximum deviation between the mean hazard curves at each control point before and after Filter P was applied. We also note that, as reported in Fig. 1, Filter P was always applied after Filter H for optimization reasons: the cumulated rate curve is lowered by the removal of small events (i.e., those producing small offshore amplitudes), which typically feature a high occurrence probability. As a consequence, a greater number of scenarios can be removed before reaching the imposed threshold, making Filter P more efficient.
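A minimal sketch of Filter P, assuming scenario rates are stored in a NumPy array (the function name and layout are ours, not from the original code):

```python
import numpy as np

def filter_p(rates, rate_budget):
    """Filter P: drop the least frequent scenarios while the cumulated
    mean annual rate of the dropped set stays within rate_budget,
    which bounds the downward bias introduced in the hazard."""
    order = np.argsort(rates)                  # least frequent first
    removable = np.cumsum(rates[order]) <= rate_budget
    keep = np.ones(rates.size, dtype=bool)
    keep[order[removable]] = False
    return keep

# Example: keep scenarios while discarding at most 1e-5 /yr of total rate.
mask = filter_p(np.array([2e-7, 5e-6, 1e-4, 3e-3]), rate_budget=1e-5)
```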
Additionally, the cluster analysis stage was modified. Firstly, we used a different algorithm, as the large number of source scenarios arising from a realistic fault variability distribution can in some cases make the HCA computationally unaffordable. We implemented the more efficient k-medoids clustering procedure, based on the minimization of the sum of the intra-cluster distances, that is, the distances between each element of a cluster and the cluster centroid. Strong constraints on the distances result in a more accurate partitioning in terms of similarity among the elements of each cluster but lead to a large number of clusters. Conversely, larger ranges of acceptability increase the efficiency of the algorithm, in terms of the number of resulting clusters, to the detriment of the accuracy. The cluster analysis was performed separately for groups of scenarios with a similar mean offshore tsunami height along the profile, instead of grouping scenarios by earthquake magnitude class. This makes the partitioning more efficient, as the earthquake magnitude cannot be considered the only parameter controlling the tsunami intensity, as it was for the limited set of sources adopted in the original method. The cluster distance was measured by Eq. (1), but we updated the stopping criterion, which is now related to the maximum allowed intra-cluster variance, rather than being a blind optimization of the number of clusters. More specifically, to control the dispersion within each cluster, we set a threshold for the maximum allowed squared Euclidean distance. This threshold was empirically fixed by comparing the offshore hazard curves before and after the analysis and assuming an acceptable range of variability, analogous to the approach used for Filter P.
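As an illustration, the sketch below implements a bare-bones k-medoids on a precomputed distance matrix and grows k until the largest element-to-medoid distance falls below a tolerance; this is our own simplification (the paper's criterion is a maximum intra-cluster variance, which can be substituted in the same loop):

```python
import numpy as np

def k_medoids(dist, k, n_iter=50, seed=0):
    """Plain k-medoids on a precomputed distance matrix dist."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(dist.shape[0], size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:  # medoid = member minimizing intra-cluster distances
                sub = dist[np.ix_(members, members)]
                new_medoids[c] = members[np.argmin(sub.sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(dist[:, medoids], axis=1)

def cluster_with_tolerance(dist, tol):
    """Grow k until every element lies within tol of its medoid."""
    for k in range(1, dist.shape[0] + 1):
        medoids, labels = k_medoids(dist, k)
        if dist[np.arange(dist.shape[0]), medoids[labels]].max() <= tol:
            return medoids, labels
```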
Finally, and probably most importantly, in order to properly deal with the contribution from local sources, we implemented two independent filtering schemes for distant and local sources. Indeed, special treatment for near-field sources is needed, as the coseismic deformation can modify the actual local tsunami intensity at the nearby coast through coastal uplift or subsidence. As a consequence, the offshore tsunami amplitude profiles generated by such events may fail to be representative of the coastal inundation, and separate modeling is required, using the coseismic deformation as the metric for comparing sources in the cluster analysis (details below). This issue was somewhat hidden in the original case study, due to the relatively small aleatory variability considered there: the sources were either in the far-field or in the near-field, depending on the target site, but never mixed together. In addition, this separation may favor some refinement of the near-field source discretization and modeling, such as a denser sampling of geometrical parameters and/or the introduction of heterogeneous slip distributions.
To test the proposed method, we replaced step (3) with either step (3a) or step (3b), as displayed in Fig. 1. The workflow of step (3a) is almost equivalent to the original procedure, improved by the aforementioned changes related to algorithm optimization, whereas the separate treatment of near- and far-field sources is included in step (3b). Step (3a) is then used in this study as a term of comparison for the new scheme.
In step (3a), three sequential tasks were performed, namely Filter H, Filter P and the cluster analysis based on the offshore tsunami amplitudes.
In step (3b), local and distant sources were first identified based on the coseismic deformation produced by the earthquake near and on the target coast. The procedure was then split into two parallel paths, which are merged at the end when evaluating SPTHA (Fig. 1). For the far-field scenarios, the same workflow as in step (3a) was followed. Near-field scenarios, which, in principle, should be individually modeled, were also filtered in order to reduce the number of explicit inundation simulations: this of course introduces a new approximation, which, however, is preferable to aggregating local and remote scenarios on the basis of the offshore tsunami amplitudes. Filter H was applied as well, but a smaller threshold value was chosen: a more conservative approach is indeed recommended at this stage, as offshore tsunami amplitudes could be strongly misleading when significant coastal coseismic deformation occurs. Then Filter P was employed, and finally a cluster analysis was performed by comparing the coseismic deformations instead of the (unrepresentative) offshore tsunami amplitudes. For each local source, the vertical component of the coseismic displacement was calculated on a 2-D grid centered on the fault, with a size equal to 3 times the fault length. The cluster analysis was then carried out, separately for each magnitude, by comparing the coseismic fields point by point within the grid. In this case, the cluster analysis is based on the squared Euclidean distance instead of the cost function; the stopping criterion is also evaluated through the Euclidean distance, since the coseismic field can take both positive and negative values.
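For illustration, the near-field test and the coseismic-field metric could be sketched as follows (our own naming; the 0.5 m threshold anticipates the value adopted in the use case below):

```python
import numpy as np

def is_near_field(uz_controls, threshold=0.5):
    """Flag a scenario as near-field when its coseismic vertical
    displacement reaches the threshold (in m, taken here in absolute
    value) at any of the coastal/offshore control points."""
    return np.max(np.abs(uz_controls)) >= threshold

def coseismic_distance(uz_i, uz_j):
    """Squared Euclidean distance between two coseismic vertical
    displacement fields sampled point by point on a common grid;
    it handles mixed uplift/subsidence patterns naturally."""
    return float(np.sum((uz_i - uz_j) ** 2))
```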
The selected earthquake scenarios from step (3a) or from the two branches (near- and far-field) of step (3b) were then used for high-resolution inundation simulations and combined together in step (4) when evaluating SPTHA. A practical example of the whole procedure is illustrated in the next section.
The Milazzo oil refinery (Sicily, Italy) use case
The described procedure was applied to a test site, Milazzo, located on the
northeastern coast of Sicily, Italy, within the Mediterranean Sea. The site
houses an oil refinery, one of the nonnuclear critical infrastructures selected as a case study in the framework of the EU project STREST.
Due to the illustrative purposes of the present work, some strong assumptions
were imposed during the filtering procedure to drastically reduce the number
of required explicit numerical simulations. The tuning of the filtering
thresholds is not the objective of the present work; in fact, this application aims at highlighting that an inaccurate (biased) evaluation of the site-specific tsunami hazard would be obtained if scenarios located in the near-field of the target area were not properly taken into account, irrespective of the completeness and consequent complexity of the hazard assessment. However, more extensive sanity and sensitivity tests would be mandatory for a finer tuning of the thresholds and of the modeling in the case of a real application. For example, the modeling of near-field scenarios is expected to depend on the source parameters, especially the heterogeneous slip distribution on the fault plane.
Figure 2 (a) Map of the whole simulation domain used for the application at the target site Milazzo (Sicily, Italy). The orange circles are the geometrical centers of the crustal faults affecting the target site, while the magenta and green regions are the slab models of the Hellenic and Calabrian arcs, respectively. Blue circles are the geometrical centers of the near-field sources, as detected in step (3b) (see text). The inset highlights the offshore points along the 50 m isobath (red points). The blue square within the inset is the area displayed in the bottom panel. (b) Zoom of the Milazzo oil refinery, with the position of the 95 points at the edges of the storage tanks (red points).
[Figure omitted. See PDF]
Regarding step (1), the adopted seismicity model was previously developed in the framework of the EU project ASTARTE.
Tsunami amplitudes, step (2), were computed on a control profile made of 11 points offshore of the Milazzo target area (on the 50 m isobath), as reported in Fig. 2a. To save computational time, scenarios from step (1) were not individually simulated but were obtained by a linear combination of precalculated tsunami waveforms produced by Gaussian-shaped unitary sources. The propagation of the Gaussian sources was modeled by the Tsunami-HySEA code, a nonlinear hydrostatic shallow-water multi-GPU code based on a mixed finite-difference–finite-volume method.
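The superposition step can be sketched as follows (array shapes and the function name are our own assumptions; the computation of the unit-source coefficients for a given scenario is taken as given):

```python
import numpy as np

def scenario_waveforms(unit_waveforms, coeffs):
    """Linearly combine precomputed unit-source waveforms.

    unit_waveforms: array [n_units, n_points, n_times], the tsunami
        time series of each Gaussian unitary source at each control point.
    coeffs: array [n_units], the projection of the scenario's initial
        sea-surface elevation onto the unitary sources.
    Returns: array [n_points, n_times] for the composite scenario.
    """
    return np.tensordot(coeffs, unit_waveforms, axes=1)
```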
Step (3) was addressed by independently performing the two branches (3a) and (3b), as discussed in the previous section, and then comparing results to assess the importance of the separate treatment of the near-field sources.
In step (3a), the Filter H threshold was fixed at 1 m, and a cumulative annual rate threshold was imposed for Filter P. This resulted in discarding the scenarios with the smallest individual mean annual rates, causing a maximum bias in the offshore mean hazard curves of about 10 % in the considered range of tsunami intensities, with respect to the curves obtained without Filter P. At the end of the filtering procedure, imposing a threshold of 0.2 on the intra-cluster variance, we obtained 776 clusters, each associated with a representative scenario; that is, a reduction of more than 99 %. Figure S1 in the Supplement compares the mean offshore hazard curves at the 11 control points, as well as some quantiles of the epistemic uncertainty, for the filtered and original sets of scenarios.
It is worth stressing that the efficiency of the filters is artificially enhanced here by the imposed high thresholds, especially as far as Filter H is concerned. While 1 m is not an acceptable value in the case of a real hazard assessment, it is suitable for illustrative purposes. In any case, this filter, independently of the chosen threshold, is not expected to affect subsequent steps of the procedure for tsunami intensities above the threshold. Conversely, we performed a sensitivity analysis on the intra-cluster variance threshold used for the cluster analysis; Fig. S2 shows the percentage differences between the offshore hazard curves computed from the complete initial set of sources and from the filtered set. The red box corresponds to the chosen threshold value (0.2): a smaller value would have allowed a stronger constraint on the error introduced by the cluster analysis while considerably increasing the number of resulting clusters. Vice versa, higher thresholds produce a smaller number of clusters but fail to reproduce the hazard (with errors of up to 40 %). In the case of a real hazard assessment, this analysis would help in choosing an optimal threshold.
In step (3b), we treated as local scenarios, requiring separate processing, the sources generating a coseismic vertical displacement greater than or equal to 0.5 m on a set of near-field points, that is, the 11 control points on the 50 m isobath plus 95 inland points strategically located at the edges of the refinery storage tanks, as shown in Fig. 2b. We found 4721 scenarios in the near-field (see Fig. 2a). Afterwards, Filters H and P were applied to both branches; a smaller Filter H threshold was chosen for the near-field branch, according to the more conservative approach described in the previous section, and in each branch the Filter P threshold was set to half of the value used in step (3a), in order to keep the total maximum theoretical bias on the hazard curves at the same level as in step (3a), considering that Filter P is applied separately to far- and near-field scenarios. Then the cluster analysis was carried out on the tsunami amplitudes for far-field scenarios (with a threshold on the intra-cluster variance) and on the coseismic deformation for near-field scenarios (with a 10 % threshold on the intra-cluster variance). We obtained 634 and 520 clusters for remote and local sources, respectively. Thus, the total number of representative scenarios to be explicitly modeled (1154) corresponds to a reduction of more than 99 % with respect to the initial set of sources.
Figure 3 (a) Mean hazard curves at all points within the highest-resolution grid, as obtained from step (3a) of the SPTHA procedure (see text and Fig. 1). Gray and blue colors refer to inland and offshore points, respectively. The bold black line represents the envelope of the curves from step (3b). Red dashed lines represent the values used to obtain the probability (Fig. 4) and hazard (Fig. 5) inundation maps. (b) Same as (a) but using step (3b). The bold black line is the envelope of the curves from step (3a). (c) Relative differences in terms of exceedance probability (over 50 years) as a function of the tsunami intensity, computed between the step (3a) and step (3b) curves. The black line is the median of the point distribution; the green dashed lines correspond to the 16th and 84th percentiles. The mean uplift (MU) at a random point along the coastline (see text) is also superimposed (purple line). (d) Same as (c) but in terms of the tsunami intensity as a function of the exceedance probability (over 50 years).
[Figure omitted. See PDF]
Figure 4 Probability maps (inner grid; see text) derived from the hazard curves in Fig. 3 at two different intensity thresholds (2 and 3 m) for steps (3a) and (3b), together with their relative differences.
[Figure omitted. See PDF]
Figure 5 Hazard maps (inner grid; see text) derived from the hazard curves in Fig. 3 at two different ARPs for steps (3a) and (3b), together with their relative differences.
[Figure omitted. See PDF]
Inundation simulations from step (3) were again carried out with the Tsunami-HySEA code, exploiting its nested grid algorithm. We used four-level nested bathymetric grids with a refinement ratio equal to 4 and increasing resolution from 0.4 arcmin (approximately 740 m), to 0.1 arcmin (approximately 185 m), to 0.025 arcmin (approximately 46 m) and to 0.00625 arcmin (approximately 12 m). The largest grid was obtained by resampling the SRTM15 bathymetric model.
During step (4), SPTHA was evaluated in parallel using the results from both steps (3a) and (3b), in order to compare the outcomes of the two different workflows and estimate the impact of the special treatment of near-field sources on the site-specific hazard assessment. Note that alternative models for the epistemic uncertainty were considered only at step (1), that is, only as far as the probabilistic earthquake model is concerned, since a single model was used in the subsequent steps.
Figures 3 to 5 compare the results from steps (3a) and (3b) in terms of mean hazard curves and inundation (both probability and hazard) maps. At first glance, differences are appreciable in both the curves and the maps. It is worth noting that results below 1 m can be (negatively) biased, since they are depleted of the scenarios removed by Filter H, both in step (3a), as clearly shown in Fig. S1, and in the far-field branch of step (3b). Curves and maps will be described in more detail below.
The hazard curves in Fig. 3a and b show the mean (mean of the model epistemic uncertainty) exceedance probability over 50 years (evaluated assuming a Poisson process, as in previous works), plotted for each point of the finest-resolution grid. Panel (c) of the same figure displays the one-by-one relative differences in exceedance probability (over 50 years), as a function of the tsunami intensity, between the step (3a) and (3b) curves at each grid point. For intensity values greater than 1 m, the relative differences are systematically positive, meaning that without the correction for near-field scenarios, step (3a), the tsunami hazard would be overestimated. In Fig. S4 a sample of curves at a few inland points (one every thousand points) is displayed for a direct curve-by-curve comparison between the two approaches. This confirms that, overall, the uncorrected approach leads to hazard overestimation. We may argue that this holds for this specific setting, as a lower "corrected" hazard means that the predominant effect of the local sources contributing to a specific point on the hazard curve is coastal uplift, which in turn decreases the tsunami hazard. For example, a cluster may mix far- and near-field sources, which could be misrepresented by a far-field source selected as the cluster representative. In our case, there might be a prevalence of clusters dominated by near-field sources causing coastal uplift. The situation may be the opposite for a different source–target configuration; that is, coastal subsidence could be predominant, causing a hazard increase, which without the correction could be underestimated. To confirm our inference we performed some further testing. For each hazard intensity, and only for the mean model of the epistemic uncertainty, we computed the coseismic coastal displacement in the inner grid, averaged both over all of the scenarios and over all of the coastal points (purple line in Fig. 3c). This quantity can be regarded as the mean uplift (MU) at a random point on the coastline. Scenarios of different types contribute to MU: far-field scenarios, which do not alter the coastline, and near-field scenarios, which may include a mixture of sources producing both coastal subsidence and uplift. In more detail, we first performed a weighted average of the coseismic displacements from each cluster centroid for each intensity threshold, with weights equal to the annual probabilities of the individual earthquakes. These probabilities are set to zero if the earthquake does not deform the coastline (i.e., for far-field sources) or if the generated tsunami does not exceed the given intensity value (i.e., that scenario does not contribute to the hazard at that point). The weighted average is then normalized to the total probability of the near- and far-field sources contributing to the tsunami hazard for that threshold. The resulting MU at each coastal point is plotted for different intensity thresholds in Fig. S5 (blue lines). The displacements due to the single cluster representatives are also shown (red lines). We note that although single scenarios produce both positive and negative coastal displacements, the predominant contribution is unveiled by the sum over the different clusters, which is definitely positive. Additionally, for higher intensities the contributing scenarios (decreasing in number, as expected) generate smaller and smaller displacements, as an important uplift would significantly limit tsunami inundation.
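In formulas (again our notation; the paper states these definitions only in words), denoting by π_s the mean annual probability of scenario s, by h_s(x) its simulated intensity at a point x and by u_z^{(s)} its coseismic vertical displacement at a coastal point (zero for far-field sources), the mean uplift at that point for a threshold h is

\mathrm{MU}(h) = \frac{\sum_{s:\, h_s(x) > h} \pi_s \, u_z^{(s)}}{\sum_{s:\, h_s(x) > h} \pi_s} ,

i.e., a weighted average over all contributing near- and far-field sources, subsequently averaged along the coastline; the exceedance probabilities themselves follow the Poisson conversion P_{50} = 1 - e^{-50\lambda} introduced above.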
Finally, we further averaged the resulting values along the coastline, obtaining the purple curve in Fig. 3c. We notice that the absolute MU value in meters turns out to be rather small, as a result of the averaging over sources that cause uplift, subsidence or no coastal displacement at all. Regardless, the positive values obtained above 1 m indicate that the uplift of the coast prevails, consistent with the positive percentage differences retrieved between the two approaches. Very little difference is found between the "corrected" and the "uncorrected" filtering procedures for smaller intensity values, that is, below the Filter H threshold.
In Fig. 3d the relative differences are also shown in terms of the tsunami intensity as a function of the exceedance probability (over 50 years). In the low-probability region, typically corresponding to high intensities, the overestimation by step (3a) is confirmed; conversely, for higher exceedance probabilities, which are likely to correspond to small intensities, a greater dispersion with both positive and negative values is observed.
Probability and hazard inundation maps can be obtained by vertically and horizontally cutting the hazard curves at chosen fixed values, in order to give a geographical representation of the results. As each hazard curve corresponds to a grid point, the probability maps are obtained by plotting, on a map, the probability values for a fixed value of the intensity metric. In the hazard maps, instead, the intensity values are plotted for a fixed exceedance probability, corresponding to a given average return period (ARP). In Fig. 4 we computed the exceedance probability maps for intensity thresholds of 2 and 3 m, while in Fig. 5 we extracted the hazard maps for two different ARPs (the first corresponding to an exceedance probability over 50 years of 2.5 %).
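As a sketch of the two cuts (a minimal implementation of ours; `curves` holds per-point exceedance probabilities sampled on a common, increasing intensity grid `levels`):

```python
import numpy as np

def probability_map(curves, levels, h_star):
    """Vertical cut: exceedance probability at the fixed intensity h_star.
    curves: [n_points, n_levels]; levels: increasing intensity values."""
    return np.array([np.interp(h_star, levels, c) for c in curves])

def hazard_map(curves, levels, p_star):
    """Horizontal cut: intensity at the fixed exceedance probability p_star.
    Exceedance curves decrease with intensity, so reverse for np.interp."""
    return np.array([np.interp(p_star, c[::-1], levels[::-1]) for c in curves])
```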
For the selected values, the maps confirm what we already discussed regarding the curves: mostly positive relative differences, both inland and offshore, are inferred from the probability maps, as shown in Fig. 4c and f, locally even larger than 50 %; these differences are positive at a larger number of points for the higher intensity threshold, consistent with Fig. 3c. We recall that positive differences mean that the "uncorrected" procedure, step (3a), actually overestimates the tsunami hazard at the target site. Negative inland values are also observed, but they occur at very low probability values and need not be further investigated. We also notice that, as expected, the area inundated with a non-negligible probability decreases in size as the intensity threshold increases. In the hazard maps (Fig. 5), a complex pattern is revealed when inspecting the relative differences (panels c and f), as both positive and negative values are retrieved. This happens because the analyzed ARPs lie in the low-intensity range. The inundated area, in contrast to the previous case, is consistently more extended for larger ARPs.
Further details about the comparison can be found by analyzing the curves and maps reported in Figs. S6 to S8. We merely note that, when the correction for the near-field is taken into account, the inundation maps highlight an enhanced current vorticity near the docks (Figs. S7b, e and S8b, e), which is a known effect due to flow separation at the tip of a breakwater. As the probability and hazard maps aggregate several different sources, the hazard integral may tend to average out different source effects while enhancing local propagation features. The presence of such persistent, physically meaningful effects only in the maps retrieved using step (3b) confirms the importance of the special treatment. In other words, the blind cluster analysis of step (3a), exclusively based on the offshore tsunami amplitudes, likely produced a nonrepresentative selection of the important scenarios, as it could aggregate or even remove important local scenarios.
Conclusions
We proposed a computationally efficient approach to achieve a robust assessment of site-specific SPTHA, developing an improved version of the previously proposed method.
The procedure is based on four steps, which can be summarized as follows: (1) the definition of the set of earthquake scenarios and their mean annual rates, exploring the source aleatory uncertainty; (2) the computation of tsunami propagation up to an offshore isobath; (3) the implementation of a filtering procedure to select relevant scenarios for the target site, which are then explicitly modeled; and (4) the assessment of local SPTHA through an ensemble-modeling approach, to jointly quantify aleatory and epistemic uncertainty, stemming from alternative models for steps (1)–(3).
In the present work we focused on step (3), modifying the filtering procedure to enhance the computational efficiency and introducing a separate treatment for sources located in the near-field, to take into account the effect of the coseismic deformation on the tsunami intensity. To this end, we implemented a new procedure including a correction for near-field scenarios and some numerical improvements. We benchmarked the new approach against an algorithm that is essentially equivalent to the original method. The correction is crucial, as the latter is based on the assumption that offshore tsunami profiles are representative of the inundation at the nearby coast, which may hold when no coseismic deformation of the coast is involved; otherwise, seafloor uplift or subsidence makes the assumption invalid. Consequently, local and remote sources must be treated separately by means of different filtering procedures. This may also allow for a specific and more detailed parameterization of the near-field sources, to which the local hazard is known to be more sensitive.
We tested the procedure on a case study, Milazzo (Sicily). The work has illustrative purposes only and is not intended as a real hazard assessment at that site, due to some simplifications in the adopted model.
The newly implemented filtering procedure allows for a considerable reduction of the number of tsunami inundation simulations and therefore of the computational cost of the analysis. It is worth stressing that in this specific application the computational efficiency was artificially enhanced by limiting the source variability, as well as by imposing high filter thresholds. In fact, a real assessment is expected to deal with a greater number of scenarios, provided that a finer tuning of the threshold values is carried out. This may particularly affect the computational cost related to the analysis of the near-field sources, for example when using stochastic slip distributions.
The most striking result is that the separate treatment of near-field sources provides significantly different and physically more consistent results with respect to the "uncorrected" procedure, showing that near-field sources must be specifically dealt with when evaluating site-specific SPTHA. We recall that the two approaches (with or without the correction for the near-field) differ only in the way local sources are treated. Hence, the different results do not depend on the specific filtering thresholds but only on the coseismic deformation induced by local sources, which, if properly accounted for, modifies the effective tsunami hazard. For the specific configuration of this use case, our findings reveal that not applying an appropriate near-field correction would lead to an overestimation of the tsunami hazard for intensities greater than 1 m, and this overestimation is correlated with dominant coastal uplift. However, either overestimation or underestimation may occur at different sites, depending on the relative source–site configuration. We also observe that Milazzo is located in an area featuring relatively low near-field tsunamigenic seismicity with respect to other areas in the Mediterranean Sea. Nevertheless, the method turns out to be sensitive even to relatively small displacements and allows for the detection and removal of significant biases from near-field sources.
The proposed method is suitable for application to operational assessments and also for improving local (multi-hazard) risk analyses.
Future work will be devoted to the use of the procedure to perform real local hazard assessments, exploiting the regional hazard retrieved from the TSUMAPS-NEAM project.
Data availability
The bathymetric data used for tsunami modeling are in the public domain and referenced in the text.
The regional hazard data are taken from Selva et al. (2016), as referenced in the text.
Hazard results are available upon request to the first author.
The supplement related to this article is available online at:
Author contributions
MV, SL and JS elaborated the method and analyzed the results; MV performed the analysis, including the numerical simulations, and prepared the figures; MV and SL wrote the manuscript; RT and JS reprocessed the regional hazard data; FR gave support in preparing the numerical simulations and analyzing the results; BB produced the bathymetric grids; all authors reviewed the manuscript.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
The authors want to thank the EDANYA Research Group at the University of
Malaga for providing the Tsunami-HySEA code for tsunami simulations. We
acknowledge useful discussions with William Power and Gareth Davies during
the early stages of this work. We also acknowledge constructive comments by
four anonymous referees, which allowed for
significant improvement of this paper. The work was partially funded by the
INGV–DPC Agreement (Annex B2) and by the STREST project, EC Seventh
Framework Programme (FP7/2007–2013), grant agreement no. 603389. All of the figures were created using MATLAB.
Abstract
Site-specific seismic probabilistic tsunami hazard analysis (SPTHA) is a computationally demanding task, as it requires, in principle, a huge number of high-resolution numerical simulations for producing probabilistic inundation maps. We implemented an efficient and robust methodology using a filtering procedure to reduce the number of numerical simulations needed, while still allowing for a full treatment of aleatory and epistemic uncertainty. Moreover, to avoid biases in the tsunami hazard assessment, we developed a strategy to identify and separately treat tsunamis generated by near-field earthquakes. Indeed, the coseismic deformation produced by local earthquakes necessarily affects the tsunami intensity, depending on the scenario size, mechanism and position, as coastal uplift or subsidence tends to diminish or increase the tsunami hazard, respectively. Therefore, we proposed two parallel filtering schemes in the far- and the near-field, based on the similarity of the offshore tsunamis and hazard curves and on the similarity of the coseismic fields, respectively. This becomes mandatory because offshore tsunami amplitudes cannot represent a proxy for the coastal inundation in the case of near-field sources. We applied the method to an illustrative use case at the Milazzo oil refinery (Sicily, Italy). We demonstrate that a blind filtering procedure cannot properly account for local sources and would lead to a nonrepresentative selection of important scenarios. For the specific source–target configuration, this results in an overestimation of the tsunami hazard, which turns out to be correlated with dominant coastal uplift. Different settings could produce either the opposite or a mixed behavior along the coastline. However, we show that the effects of the coseismic deformation due to local sources cannot be neglected and a suitable correction has to be employed when assessing local-scale SPTHA, irrespective of the specific signs of coastal displacement.