1 Introduction
Holocene climate variability is of key interest to our society, since it represents a period of moderate natural variations prior to anthropogenic disturbance and is therefore often used as a baseline for today's human-driven increase of the greenhouse effect. Yet, high-resolution studies are still sparse, which limits the investigation of decadal and centennial climate variations over the course of the Holocene. One of the first studies of Holocene climate change was conducted in the early 1970s by Denton and Karlén (1973), who investigated rapid changes in glacier extents around the globe potentially resulting from variations of Holocene climatic conditions. Mayewski et al. (2004) used these data as the basis of a multiproxy study identifying rapid climate changes (so-called RCCs) distributed globally over the whole Holocene. Although not all proxy data showed the same behaviour in timing and extent during the quasi-periodic RCC patterns, the authors found evidence for a highly variable Holocene climate controlled by multiple mechanisms, which significantly affects ecosystems (de Beaulieu et al., 2017; Crausbay et al., 2017; Pál et al., 2016) and human societies (Holmgren et al., 2016; Lespez et al., 2016). Precise high-resolution temperature estimates can contribute significantly to the understanding of these mechanisms.
Ice-core proxy data offer multiple paths for reconstructing past climate and temperature variability. The studies of Cuffey et al. (1995), Cuffey and Clow (1997) and Dahl-Jensen et al. (1998) demonstrate the usefulness of inverting the measured borehole-temperature profile for surface-temperature-history estimates at the investigated drilling site using a coupled heat- and ice-flow model. Because of smoothing effects due to heat diffusion within an ice sheet, this method is unable to resolve fast temperature oscillations and suffers from a rapid reduction of the time resolution towards the past. Another approach to reconstructing past temperature is based on the calibration of water-stable isotopes of oxygen and hydrogen (δ18O, δD) from ice-core water samples, assuming a constant (and mostly linear) relationship between temperature and isotopic composition due to fractionation effects during ocean evaporation, cloud formation and snow and ice precipitation (Johnsen et al., 2001; Stuiver et al., 1995). This method provides a rather robust tool for reconstructing past temperature for periods with large temperature excursions (Dansgaard–Oeschger events, glacial–interglacial transitions; Dansgaard et al., 1982; Johnsen et al., 1992), provided an adequate relationship is used. In the Holocene, however, where Greenland temperature variations are comparatively small, seasonal changes of precipitation as well as of evaporation conditions at the source region may also contribute to water-isotope-data variations (Huber et al., 2006; Kindler et al., 2014; Werner et al., 2001). A relatively new method for ice-core-based temperature reconstructions uses the thermal fractionation of stable isotopes of air components (nitrogen and argon) within the firn layer of an ice sheet (Huber et al., 2006; Kindler et al., 2014; Kobashi et al., 2011; Orsi et al., 2014; Severinghaus et al., 1998, 2001).
The measured nitrogen- and argon-isotope records (δ15N, δ40Ar) of air enclosed in bubbles in an ice core can be used as a paleothermometer because (i) the isotopic compositions of nitrogen and argon in the atmosphere are stable on orbital timescales and (ii) changes in the records are therefore driven only by firn processes (Leuenberger et al., 1999; Mariotti, 1983; Severinghaus et al., 1998). To robustly reconstruct the surface temperature for a given drilling site, firn models describing gas and heat diffusion throughout the ice sheet are necessary to separate the gravitational from the thermal-diffusion influence on the isotope signals.
This work addresses two issues relevant for temperature reconstructions based on nitrogen and argon isotopes. First, we introduce a novel, entirely automated approach for inverting gas-isotope data to surface-temperature estimates. For that, we force the output of a firn-densification and heat-diffusion model to fit gas-isotope data. This methodology can be used for many different optimization tasks and is not restricted to ice-core data. As we will show, the approach works not only for δ15N but for all relevant gas-isotope quantities (δ15N, δ40Ar, δ15N_excess), and for Holocene as well as glacial data. Furthermore, the possibility of fitting all relevant gas-isotope quantities, individually or combined, makes it possible, for the first time, to validate the temperature solution gained from a single isotope species by comparison to the solutions calculated from the other isotope quantities. This completely new method enables the automated fitting of gas-isotope data without any manual tuning of parameters, minimizing potential "subjective" impacts on temperature estimates as well as working hours. Also, except for the model spin-up, the presented temperature-reconstruction approach is completely independent from water-stable isotopes (δ18O, δD), which provides the opportunity to validate water-isotope-based reconstructions (e.g. Masson-Delmotte, 2005) or reconstructions where water isotopes are used together with δ15N or δ40Ar (e.g. Capron et al., 2010; Huber et al., 2006; Landais et al., 2004). To our knowledge, there are only two other reconstruction methods independent from water-stable isotopes that have been applied to Holocene gas-isotope data without a priori assumptions on the shape of a temperature change. Kobashi et al. (2008a, 2017) use the second-order parameter δ15N_excess to calculate firn-temperature gradients, which are then temporally integrated from past to future over the time series of interest using the firn-densification and heat-diffusion model from Goujon et al. (2003). Orsi et al. (2014) use a linearized firn-model approach together with δ15N and δ40Ar data to extract surface-temperature histories. The method presented here can be used when no δ40Ar data are available, which is often the case because δ40Ar is analytically more challenging and therefore not as commonly measured as δ15N, and it further allows a comparison among solutions obtained from any of the available isotope quantities.
Second, we investigate the accuracy of our novel fitting approach by testing the method on different synthetic nitrogen-isotope and temperature scenarios. The aim of this work is to study the uncertainties emerging from the algorithm itself. The focal questions are which minimal δ15N mismatch can be reached for Holocene-like data and what this implies for the final temperature mismatches. Answering these questions makes it mandatory to create well-defined δ15N targets and related temperature histories; it is impossible to answer them without using synthetic data in a methodology study. The aim is to evaluate the accuracy and associated uncertainty of the inverse method itself, in order to later apply the method to real δ15N, δ40Ar or δ15N_excess datasets, for which the original driving temperature histories are of course unknown.
Figure 1
Schematic illustration of the presented gas-isotope fitting algorithm. The algorithm is implemented in four steps: step 1: first-guess input calculation; step 2: iterative Monte Carlo based input change (indicated by the open half circles); step 3: signal complementation with high-frequency information; step 4: final correction. In contrast to the synthetic data study on Holocene-like data, where the accumulation input Acc was fixed, for the proof of concept on glacial data the accumulation and temperature inputs were iteratively changed in parallel (indicated by the grey accumulation variables). For the glacial study, only steps 1 and 2 were used.
2 Methods and data
2.1 Reconstruction approach
The problem that we deal with is an inverse problem, since the effect, observed as δ15N variations, depends on its drivers, i.e. temperature and accumulation-rate changes. Hence, the temperature that we would like to reconstruct depends on δ15N and accumulation-rate changes. To solve this inverse problem, the firn-densification and heat-diffusion model (from now on referred to as the firn model), which acts as a non-linear transfer function from temperature and accumulation rate to firn states and the related δ15N values, is run iteratively to match the modelled to the measured δ15N values (or those of other gas species). The automated procedure is significantly more efficient and less time consuming than a manual approach. The Holocene temperature reconstruction is implemented in the following four steps (Fig. 1):
-
Step 1: a prior temperature input (first guess) is constructed, which serves as the starting point for the optimization.
-
Step 2: a long-term solution which passes through the δ15N data (here synthetic target data) is generated following a Monte Carlo approach. It is assumed that this smooth solution contains all long-term temperature trends (centuries to millennia) as well as the firn-column height changes (temperature and accumulation-rate dependent) that drive the gravitational background signal in δ15N.
-
Step 3: the long-term temperature solution is complemented by superimposing short-term information directly extracted from the δ15N data (here synthetic target data). This step adds short-term temperature changes (decadal) at the same time resolution as the δ15N data.
-
Step 4: the gained temperature solution is further corrected using information extracted from the mismatch between the synthetic target and the modelled δ15N time series.
The functionality of the presented inversion algorithm is schematically displayed in Fig. 1, which guides the reader through the following sections and documents where the variables listed in Table 1 are in use. In the following, a detailed description of each step is given.
Table 1
Used variables and acronyms with their explanations.
Variable | Explanation
---|---
α_T | thermal-diffusion constant calculated from Eq. (12)
a | slope for the δ18O calibration (surface-temperature spin-up), Eq. (13)
Acc | accumulation-rate data
Acc_0 | first-guess (prior) input accumulation-rate data
Acc_mc,fin | modelled accumulation-rate data from the final Monte Carlo output
b | intercept for the δ18O calibration (surface-temperature spin-up), Eq. (13)
COP | cut-off period for cubic-spline filtering
corr | index related to the final correction step (step 4)
CZ | convective zone
D̄ | mean mismatch (general) calculated from Eq. (1)
D_i, D(t_i) | pointwise mismatches (general)
D̄(δ15N) | mean mismatch of δ15N
D̄(δ15N)_corr | mean mismatch of δ15N (δ15N_corr vs. δ15N_syn) calculated from the output of the final correction (step 4)
D̄(δ15N)_g,0 | mean mismatch of δ15N (δ15N_g,0 vs. δ15N_syn) calculated from the output of the first-guess data
D̄(δ15N)_hf | mean mismatch of δ15N (δ15N_hf vs. δ15N_syn) calculated from the output of the high-frequency step (step 3)
D̄(δ15N)_mc,fin | mean mismatch of δ15N (δ15N_mc,fin vs. δ15N_syn) calculated from the final Monte Carlo output (step 2)
D̄_tot | minimization criterion for the proof of concept on glacial data as used in Eq. (14)
D̄(T) | mean mismatch of temperature
D̄(T)_corr | mean mismatch of temperature (T_corr vs. T_syn) calculated from the output of the final correction (step 4)
D̄(T)_g,0 | mean mismatch of temperature calculated from T_g,0 vs. T_syn
D̄(T)_hf | mean mismatch of temperature (T_hf vs. T_syn) calculated from the output of the high-frequency step (step 3)
D̄(T)_mc,fin | mean mismatch of temperature calculated from the final output of the Monte Carlo step (step 2)
δ15N_corr | modelled δ15N signal from the output of the final correction (step 4)
δ15N_grav | gravitational component of the δ15N signal
δ15N_g,0 | modelled δ15N signal from the output of the first-guess data (step 1)
δ15N_hf | modelled δ15N signal from the output of the high-frequency step (step 3)
δ15N_mc,fin | modelled (smooth) δ15N signal from the final Monte Carlo output (step 2)
δ15N_mod | modelled δ15N signal (general)
δ15N_therm | thermal-fractionation/thermal-diffusion component of the δ15N signal
δ15N_syn | synthetic δ15N target (fitting target)
Δage | gas-age – ice-age difference
Δage_mc,fin | final gas-age – ice-age output from the Monte Carlo step (step 2)
Δδ_corr | correction values calculated from Δδ_max and Δδ_min
Δδ_max | correction values calculated from the linear dependency of xcf_max
Δδ_min | correction values calculated from the linear dependency of xcf_min
Δm | molar mass difference between the heavy and light isotopes
ΔT_hf | high-frequency temperature signal obtained from Eq. (3) (step 3)
ε(δ15N) | uncertainty of the δ15N data as used in Eq. (14)
ε(Δage) | uncertainty of the Δage data as used in Eq. (14)
g | gravitational acceleration
g,0 | index related to the first-guess (prior) data (step 1)
hf | index related to the high-frequency step (step 3)
i | time index
IF | "integrated factor" calculated from Eq. (6), needed for the final correction step (step 4)
j | running index for the Monte Carlo iterations (step 2)
lag_max | time lag attributed to the maximum of the sample cross-correlation function (xcf) (general)
lag_max,δ | time lag attributed to the maximum of the sample cross-correlation function (xcf) of IF vs. D_i(δ15N)
lag_max,T | time lag attributed to the maximum of the sample cross-correlation function (xcf) of IF vs. D_i(T)
lag_min | time lag attributed to the minimum of the sample cross-correlation function (general)
lag_min,δ | time lag attributed to the minimum of the sample cross-correlation function of IF vs. D_i(δ15N)
lag_min,T | time lag attributed to the minimum of the sample cross-correlation function of IF vs. D_i(T)
mc | index related to the Monte Carlo step (step 2)
mc,fin | index related to the final Monte Carlo output (step 2)
N | number of data points of the target
N_T | length of the Holocene temperature vectors (w/o spin-up)
Ω | thermal-diffusion sensitivity calculated from Eq. (4)
P̃ | spline-filtered P
P | vector containing uniformly distributed random numbers
R | ideal gas constant
ρ_ice | ice density
ρ_LID | lock-in density, density threshold for calculating z_LID
SD | standard deviation of the random numbers for P
σ(δ15N) | standard deviation of D_i(δ15N)
2σ(δ15N) | 95 % quantile of D_i(δ15N)
σ(Δage) | standard deviation of D_i(Δage)
σ(T) | standard deviation of D_i(T)
2σ(T) | 95 % quantile of D_i(T)
σ(ΔT) | standard deviation of D_i(ΔT)
T̄, T̄(t_i) | mean firn temperature
T_bot | temperatures at the bottom of the diffusive firn layer
T_corr | temperature signal calculated from the final correction step (step 4)
T_g,0 | first-guess (prior) temperature input
T_hf | temperature signal calculated from the high-frequency step (step 3)
T_mc,j | Monte Carlo temperature guess for iteration j
T_mc,fin | (smooth) temperature modelled from the final Monte Carlo output (step 2)
T_spin | surface-temperature spin-up
T_top | temperatures at the top of the diffusive firn layer
wRMSE | mean squared errors weighted with data uncertainty as used in Eq. (14)
xcf/XCF | sample cross-correlation function, needed for the final correction step (step 4)
xcf_max | maximum of the sample cross-correlation function (general)
xcf_max,δ | maximum of the sample cross-correlation function of IF vs. D_i(δ15N)
xcf_max,T | maximum of the sample cross-correlation function of IF vs. D_i(T)
xcf_min | minimum of the sample cross-correlation function
xcf_min,δ | minimum of the sample cross-correlation function of IF vs. D_i(δ15N)
xcf_min,T | minimum of the sample cross-correlation function of IF vs. D_i(T)
y_mod | modelled data (general); can be δ15N, δ40Ar/4 or δ15N_excess
y_target | fitting target (general); can be synthetic δ15N or measured data (δ40Ar/4, δ15N_excess)
z_LID, LID | lock-in depth
Figure 2
(a) Accumulation-rate input time series, divided into a Holocene and a spin-up section; the time resolution in the Holocene section (20 to 10 520 years b2k) is 1 year. The time resolution of the transition between the Holocene and the spin-up section (10 520 to 12 000 years b2k) is also 1 year, in contrast to the rest of the spin-up section, which has a time resolution of 20 years. (b) Surface-temperature spin-up calculated from the δ18O calibration. The time resolution equals that of the accumulation-rate spin-up section. The first-guess surface-temperature input is simply a constant value.
2.1.1 Step 1: prior input
The starting point of the optimization procedure is the first guess. To construct the first-guess temperature input T_g,0, a constant temperature of −29.6 °C is used for the complete Holocene section, which corresponds to the last value of the temperature spin-up (Fig. 2b).
2.1.2 Step 2: Monte Carlo type input generator – generating long-term solutions
During the second step of the optimization, the prior temperature input from step 1 is changed iteratively (iteration index j) following a Monte Carlo approach. The basic idea is to generate smooth temperature inputs by superimposing low-pass-filtered values of uniformly distributed random numbers on the prior input T_g,0. The new input T_mc,j is then fed to the firn model, and the mismatch D̄ between the modelled δ15N (here δ15N_mod), calculated from the model output, and the synthetic target (here δ15N_syn) is computed from the pointwise mismatches D_i at every time step t_i of the target data according to
D̄ = (1/N) Σ_{i=1}^{N} |δ15N_mod(t_i) − δ15N_syn(t_i)|   (1)
(Note: if not otherwise stated, all mismatches in this study labelled with "D̄" are calculated analogously to Eq. 1.)
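The criterion of Eq. (1) is a one-liner in code. The sketch below is our own Python illustration (the array names are ours, not from the original implementation); it assumes both series are sampled on the common target grid.

```python
import numpy as np

def mean_mismatch(y_mod: np.ndarray, y_target: np.ndarray) -> float:
    """Mean absolute pointwise mismatch between modelled and target
    data, as in Eq. (1). Both arrays must share the target time grid."""
    d_i = y_mod - y_target               # pointwise mismatches D_i
    return float(np.mean(np.abs(d_i)))   # mean mismatch D-bar
```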
Table 2
Summary of the Monte Carlo approach: D̄(δ15N)_g,0 (or D̄(T)_g,0) is the mismatch between the modelled δ15N (or temperature) values using the first-guess input and the synthetic δ15N (or temperature) target for each scenario. D̄(δ15N)_mc,fin (or D̄(T)_mc,fin) is the mismatch between the modelled δ15N (or temperature) using the final Monte Carlo temperature solution and the synthetic δ15N (or temperature) target for each scenario.
Scenario | S1 | S2 | S3 | S4 | S5 | H1 | H2 | H3
---|---|---|---|---|---|---|---|---
D̄(δ15N)_g,0 (permeg) | 13.3 | 48.4 | 27.0 | 23.3 | 22.4 | 23.8 | 24.1 | 23.8
D̄(δ15N)_mc,fin (permeg) | 11.3 | 12.4 | 12.7 | 11.9 | 11.5 | 5.8 | 6.9 | 8.2
δ15N improvement (permeg) | 2.0 | 36.0 | 14.3 | 11.4 | 10.9 | 18.0 | 17.2 | 15.6
δ15N improvement (%) | 15.0 | 74.4 | 53.0 | 48.9 | 48.7 | 75.6 | 71.4 | 65.5
No. improvements | 119 | 351 | 152 | 108 | 174 | 223 | 173 | 325
No. used improvements | 89 | 174 | 103 | 74 | 102 | 129 | 112 | 193
No. iterations | 2103 | 706 | 620 | 656 | 637 | 1636 | 1027 | 2086
No. tried solutions | 16 824 | 5648 | 4960 | 5248 | 5096 | 13 088 | 8216 | 16 688
Execution time (h) | 52.6 | 17.7 | 15.5 | 16.4 | 15.9 | 40.9 | 25.7 | 52.2
D̄(T)_g,0 (K) | 1.24 | 5.24 | 2.45 | 2.09 | 2.17 | 2.34 | 2.38 | 2.32
D̄(T)_mc,fin (K) | 0.61 | 0.69 | 0.70 | 0.64 | 0.64 | 0.32 | 0.39 | 0.46
Temp. improvement (K) | 0.63 | 4.55 | 1.75 | 1.45 | 1.53 | 2.02 | 1.99 | 1.86
Temp. improvement (%) | 50.8 | 86.8 | 71.4 | 69.4 | 70.5 | 86.3 | 83.6 | 80.2
D̄ serves as the criterion that is minimized during the optimization in step 2. If the mismatch decreases compared to the previous guess (D̄_j < D̄_{j−1}), the new input is saved and used as the new guess (T_mc,j). This procedure is repeated until convergence is achieved, leading to the final long-term temperature solution T_mc,fin. Table 2 lists the number of improvements and iterations performed for the different synthetic datasets.
The perturbation of the current guess is conducted in the following way: let T_mc,j−1 be the vector containing the prior temperature input. A second vector P with the same number of elements N_T as T_mc,j−1 is generated, containing uniformly distributed random numbers within the limits of a randomly (uniformly distributed) chosen standard deviation (SD). SD is chosen from the range 0.05–0.50, which means that the maximum allowed perturbation of a single temperature value is between 5 and 50 %. To create the synthetic frequencies, P is low-pass filtered using cubic-spline filtering (Enting, 1987) with a uniformly distributed random cut-off period (COP) in the range of 500 to 2000 years, generating the vector P̃. The low-pass filtering of P thereby reduces the amplitudes of the perturbation. The new surface-temperature input is calculated from T_mc,j−1 and P̃ according to
T_mc,j = T_mc,j−1 ∘ (1 + P̃)ᵀ   (2)
where the superscript "T" stands for transposed, ∘ denotes the elementwise product and 1 is the N_T by 1 matrix of ones.
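A compact sketch of one such perturbation is given below (our own Python illustration; a Butterworth low-pass from SciPy stands in for the cubic-spline filter of Enting (1987), since both act as COP-controlled low-pass filters, and all function names are ours):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rng = np.random.default_rng(seed=1)

def perturb(temp: np.ndarray, dt_years: float = 1.0) -> np.ndarray:
    """One Monte Carlo perturbation of a temperature vector, Eq. (2)."""
    sd = rng.uniform(0.05, 0.50)                # max relative perturbation
    cop = rng.uniform(500.0, 2000.0)            # cut-off period (years)
    p = rng.uniform(-sd, sd, size=temp.size)    # random vector P
    # low-pass filter: normalized cut-off is 2*dt/COP (Nyquist = 1)
    sos = butter(3, 2.0 * dt_years / cop, output="sos")
    p_smooth = sosfiltfilt(sos, p)              # smoothed vector P~
    return temp * (1.0 + p_smooth)              # elementwise, Eq. (2)
```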
This approach offers a high potential for parallel computing. In this study, an eight-core computer was used, generating and running eight different inputs T_mc,j simultaneously, which minimizes the time needed to find an improved solution. For example, during the 706 iterations for scenario S2, about 5600 different inputs were created and tested, leading to 351 improvements (see Table 2). Since more than one improvement per iteration step can be found due to the parallelization on eight CPUs, the solution giving the minimal misfit is chosen as the new guess for the next iteration step. This reduces the number of improvements actually used for the optimization (e.g. for S2, 174 of the 351 improvements were used). Additionally, a first gas-age scale (Δage_mc,fin) is extracted from the model using the last improved conditions, which is then used in step 3.
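Continuing the sketch above, the parallel candidate evaluation can look as follows (again our own illustration, reusing `perturb` and `mean_mismatch` from the previous sketches; `forward` stands for a full firn-model run mapping a surface-temperature history to modelled δ15N and is assumed to be a picklable Python callable):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def best_candidate(temp, target, forward, n_workers=8):
    """Try n_workers perturbed inputs at once and keep the best one,
    mirroring the eight-CPU setup described in the text."""
    candidates = [perturb(temp) for _ in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        modelled = list(pool.map(forward, candidates))
    scores = [mean_mismatch(m, target) for m in modelled]
    best = int(np.argmin(scores))
    return candidates[best], scores[best]
```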
2.1.3 Step 3: adding short-term (high-frequency) information
In step 3, the missing short-term temperature history providing a suitable fit between modelled and synthetic data is directly extracted from the pointwise mismatches D_i between the modelled δ15N_mc,fin obtained in step 2 and the synthetic target. Note that for a real reconstruction, this mismatch is calculated using the measured dataset instead of the synthetic one. To first order, D_i can be interpreted as the detrended high-frequency signal of the synthetic target. D_i is transferred to the gas-age scale using Δage_mc,fin, provided by the firn-model output for the smooth temperature input T_mc,fin. This is needed to ensure synchroneity between the high-frequency temperature variations extracted from the mismatch on the ice-age scale and the smooth temperature solution T_mc,fin. Additionally, the signal is shifted by about 10 years towards modern values to account for gas diffusion from the surface to the lock-in depth (Schwander et al., 1993), which is not yet implemented in the firn model. This is necessary for adding the calculated short-term temperature changes to the smooth signal T_mc,fin. The high-frequency temperature values ΔT_hf are calculated according to Eq. (3):
ΔT_hf(t_i) = D_i(t_i) / Ω(t_i)   (3)
using the thermal-diffusion sensitivity Ω for nitrogen-isotope fractionation from Grachev and Severinghaus (2003):
Ω(t_i) = 8.656/T̄(t_i) − 1232/T̄(t_i)²  (in ‰ K⁻¹)   (4)
T̄(t_i) is the mean firn temperature in kelvin, which is calculated by the firn model for each time point t_i. To reconstruct the final (high-frequency) temperature input T_hf, the extracted short-term temperature signal ΔT_hf is simply added to the long-term temperature input T_mc,fin:
T_hf = T_mc,fin + ΔT_hf   (5)
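The translation of Eqs. (3)–(5) into code is short. The sketch below is our own Python illustration; all series are assumed to be on a common annual grid, and the roughly 10-year diffusion shift is approximated by a simple index shift whose direction depends on the orientation of the age scale:

```python
import numpy as np

def omega(t_mean: np.ndarray) -> np.ndarray:
    """Thermal-diffusion sensitivity for 15N (permil per K), Eq. (4),
    after the Grachev and Severinghaus (2003) calibration."""
    return 8.656 / t_mean - 1232.0 / t_mean**2

def high_frequency_step(t_mc_fin, d15n_syn, d15n_mc_fin, t_mean,
                        shift_years=10):
    """Eqs. (3) and (5): convert the pointwise delta-15N mismatch into
    a temperature increment and superimpose it on the smooth solution."""
    d_i = d15n_syn - d15n_mc_fin           # pointwise mismatch (permil)
    dt_hf = d_i / omega(t_mean)            # Eq. (3), in kelvin
    dt_hf = np.roll(dt_hf, -shift_years)   # crude 10-year shift stand-in
    return t_mc_fin + dt_hf                # Eq. (5)
```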
2.1.4 Step 4: final correction of the surface-temperature solution
For a further reduction of the remaining δ15N and resulting surface-temperature misfits (D_i(δ15N), D_i(T)), it is important to find a correction method that relies only on information that is also available when using measured data. The benefit of the synthetic data study is that several otherwise unknown quantities can be calculated and used for improving the reconstruction approach (see Sects. 3 and 4). For instance, it is possible to split the synthetic δ15N data into the gravitational and thermal-diffusion parts, or to use the temperature misfit, which is unknown in reality. The idea underlying the correction algorithm explained hereafter is that the remaining misfits of δ15N (D_i(δ15N)) and temperature (D_i(T)) are connected to the Monte Carlo (step 2) and high-frequency (step 3) parts of the reconstruction algorithm. In the present inversion framework, it is not possible to find a long-term solution (T_mc,fin or δ15N_mc,fin) which passes exactly through the middle of the variance of the δ15N (or temperature) target in all parts of the time series. This leads to a slight over- or underestimation of δ15N_mc,fin and the corresponding temperature values T_mc,fin. For example, a slightly too low (or too high) smooth temperature estimate leads to a small increase (or decrease) of the firn-column height, creating a wrong gravitational background signal in δ15N at a later point in time (because the firn column needs some time to react). An additional error in the thermal-diffusion signal is also created by the high-frequency part of the reconstruction (step 3), because the high-frequency information is directly extracted from the deviation of the synthetic target from the δ15N modelled with the final long-term solution of the Monte Carlo part. This error is therefore transferred into the next step of the reconstruction and partly creates the remaining deviations.
Figure 3
Scenario S1: (a) cross-correlation function (xcf) between IF and the remaining mismatch in δ15N (D_i(δ15N)) after the high-frequency part shows two extrema: the maximum correlation (max xcf) and the minimum correlation (min xcf). (b) Cross-correlation function (xcf) between IF and the remaining mismatch in temperature (D_i(T)) after the high-frequency part shows two extrema: the maximum correlation (max xcf) and the minimum correlation (min xcf). (c) Inversion of panel (a) in the x (lag) and y (correlation coefficient) directions. (d) Comparison between panels (a) and (b). (e) Comparison between panels (a) and (c). The temperature-correction values are calculated from the linear dependency between IF and D_i(δ15N). After shifting IF to max xcf (lag_max) and to min xcf (lag_min), the correction values Δδ_max and Δδ_min are calculated. Next, the lags of Δδ_max and Δδ_min are inverted; that means, for Δδ_max, the values are shifted back (by lag_max) and shifted further to lag_min. After inverting, Δδ_max and Δδ_min are summed up component-wise to calculate Δδ_corr. Using Δδ_corr in Eq. (3) leads to the temperature-correction values ΔT_corr, which are added to the temperature solution T_hf.
To investigate this problem, the deviations D_i of the synthetic target data from δ15N_mc,fin of the Monte Carlo part are numerically integrated over a time window of 200 years (Sect. 4, Supplement Sect. S3); thereafter the window is shifted from past to future in 1-year steps, resulting in a time series called IF ("integrated factor"):
IF(t_i) = (1/w) Σ_{k=i−w/2}^{i+w/2} D_k(δ15N),  w = 200 years   (6)
IF thus equals a 200-year running mean of D_i, with each value allocated to the middle position t_i of the window. The time evolution of IF is a measure for the deviation of the long-term solution (δ15N_mc,fin or T_mc,fin) from the perfect middle passage through the target data (δ15N_syn or T_syn) and for the resulting slight over- and underestimation of temperature. Next, the sample cross-correlation function (xcf) (Box et al., 1994) is applied to IF and the remaining misfits of δ15N after the high-frequency part. The xcf shows two extrema (Fig. 3a), a maximum (xcf_max) and a minimum (xcf_min), at two certain lags (lag_max at xcf_max and lag_min at xcf_min). The same analysis is conducted for IF versus the temperature mismatch D_i(T) (Fig. 3b), which shows an equal behaviour (two extrema, lag_max,T at xcf_max,T and lag_min,T at xcf_min,T). Comparing the two cross correlations shows that lag_max,δ equals the negative lag_min,T and that lag_min,δ corresponds to the negative lag_max,T (Fig. 3d, e). The idea for the correction is that the extremum in the cross correlation of IF versus D_i(δ15N) at the positive lag (positive means here that D_i(δ15N) has to be shifted towards past values relative to IF) creates the misfit of temperature at the negative lag (modern direction) of IF versus D_i(T), and vice versa. IF therefore yields information about the cause and allows us to correct this effect between the remaining mismatches D_i(δ15N) and D_i(T) over the whole time series. The lags are not sharp signals, due to the facts that (i) the cross correlations are conducted over the whole analysed record, leading to an averaging of this cause-and-effect relationship, and that (ii) IF is a smoothed quantity itself. The correction of the reconstructed temperature after the high-frequency part is conducted in the following way: from the two linear relationships between IF and D_i(δ15N) at the two lags (lag_max at xcf_max, lag_min at xcf_min), two sets of correction values (Δδ_max from xcf_max and Δδ_min from xcf_min) are calculated. Then, the lags are inverted (Fig. 3c, e), shifting the two sets of correction values to the attributed lags of the cross correlation between IF and D_i(T) (e.g. Δδ_max to lag_min,T from the cross correlation between IF and D_i(T)), thereby changing the time assignments of Δδ_max and Δδ_min accordingly. Now, Δδ_max and Δδ_min are summed up component-wise, leading to the time series Δδ_corr. From Eq. (3), with Δδ_corr instead of D_i, the corresponding temperature-correction values ΔT_corr are calculated and added to the high-frequency temperature solution T_hf, giving the corrected temperature T_corr. Finally, T_corr is used to run the firn model to calculate the corrected δ15N_corr time series. This cause-and-effect relationship found in the cross correlations between IF and D_i(δ15N), and IF and D_i(T), is shown exemplarily in Fig. 3 for scenario S1 and was found for all eight synthetic scenarios. The derived correction algorithm leads to a further reduction of the mismatches of about 40 % in δ15N and temperature (see Sect. 3.2).
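The building blocks of this correction, the running-mean IF, the cross-correlation extrema and the lag-shifted linear fit, can be sketched as follows (our own Python illustration; `np.roll` wraps around at the series ends and a least-squares fit stands in for the "linear dependency", both simplifications of the procedure described above):

```python
import numpy as np

def integrated_factor(d_i: np.ndarray, window: int = 200) -> np.ndarray:
    """IF: running mean of the Monte Carlo delta-15N mismatch, Eq. (6)."""
    return np.convolve(d_i, np.ones(window) / window, mode="same")

def xcf_extrema(if_series, resid, max_lag=400):
    """Locate the maximum and minimum of the sample cross-correlation
    between IF and a residual series within +/- max_lag samples."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.corrcoef(np.roll(if_series, k), resid)[0, 1]
                  for k in lags])
    i_max, i_min = int(np.argmax(r)), int(np.argmin(r))
    return (lags[i_max], r[i_max]), (lags[i_min], r[i_min])

def correction_values(if_series, resid, lag):
    """Correction values from the linear dependency between the
    lag-shifted IF and the residual at one xcf extremum."""
    shifted = np.roll(if_series, lag)
    slope, intercept = np.polyfit(shifted, resid, 1)
    return slope * shifted + intercept
```

The two sets of correction values would then be shifted to the inverted lags, summed component-wise and converted to temperature via Eq. (3), as described above.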
2.2 Firn-densification and heat-diffusion model
Surface-temperature reconstruction relies on firn densification combined with gas and heat diffusion (Severinghaus et al., 1998). In this study, the firn-densification and heat-diffusion model developed by Schwander et al. (1997) is used to reconstruct the firn parameters needed for calculating synthetic δ15N values from the input time series. It is a semi-empirical model based on the work of Herron and Langway (1980) and Barnola et al. (1991), implemented using the Crank–Nicolson algorithm (Crank, 1975), and was also used for the temperature reconstructions by Huber et al. (2006) and Kindler et al. (2014). Besides surface-temperature time series, accurate accumulation-rate data are needed to run the model. The model then calculates the densification and heat-diffusion history of the firn layer and provides the parameters for calculating the fractionation of the nitrogen isotopes for each time step, according to the following equations:
δ15N_grav = [exp(Δm g z_LID / (R T̄)) − 1] × 1000 ‰   (7)
δ15N_therm = [(T_top/T_bot)^(α_T) − 1] × 1000 ‰   (8)
δ15N_mod = δ15N_grav + δ15N_therm   (9)
δ15N_grav is the component of the isotopic fractionation due to gravitational settling (Craig et al., 1988; Schwander, 1989) and depends on the lock-in depth (z_LID, LID) and the mean firn temperature T̄ (Leuenberger et al., 1999). g is the gravitational acceleration, Δm the molar mass difference between the heavy and light isotopes (equal to 10⁻³ kg mol⁻¹ for nitrogen) and R the ideal gas constant. z_LID is determined via a density threshold ρ_LID, which is slightly sensitive to surface temperature, following the close-off density ρ_CO from Martinerie et al. (1994) with a small offset correction of 14 kg m⁻³ to account for the presence of a non-diffusive zone (Schwander et al., 1997):
ρ_LID = ρ_CO − 14 kg m⁻³   (10)
where
1/ρ_CO = 1/ρ_ice + 6.95 × 10⁻⁷ T − 6.43 × 10⁻⁴   (11)
with densities in kg m⁻³ and T in K. The thermal-fractionation component δ15N_therm of the signal (Severinghaus et al., 1998) is calculated using Eq. (8), where T_top and T_bot stand for the temperatures at the top and the bottom of the diffusive firn layer. In contrast to T_top, which is an input parameter of the model, T_bot is calculated by the model for each time step. The thermal-diffusion constant α_T was measured by Grachev and Severinghaus (2003) for nitrogen (Eq. 12):
α_T = (8.656 − 1232/T̄) × 10⁻³   (12)
The firn model used here behaves purely as a forward model: for given input time series the output parameters (here, finally δ15N) can be calculated, but it is not easily possible to construct the related surface-temperature or accumulation-rate histories from measured isotope data. The goal of the presented study is the automation of this inverse-modelling procedure for the reconstruction of the rather small Holocene temperature variations.
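As a numerical cross-check of Eqs. (7), (8) and (12), the following sketch (our own Python illustration; the constants and the use of the mean of T_top and T_bot for T̄ are our simplifications) reproduces the typical magnitudes for Greenland: a gravitational signal of a few tenths of a permil and a thermal sensitivity of roughly 0.015 ‰ K⁻¹.

```python
import numpy as np

DELTA_M = 1.0e-3   # molar-mass difference 15N14N vs. 14N14N (kg/mol)
G = 9.81           # gravitational acceleration (m/s^2)
R_GAS = 8.314      # ideal gas constant (J/(mol K))

def d15n_grav(z_lid: float, t_mean: float) -> float:
    """Gravitational enrichment (permil), Eq. (7)."""
    return (np.exp(DELTA_M * G * z_lid / (R_GAS * t_mean)) - 1.0) * 1000.0

def d15n_therm(t_top: float, t_bot: float) -> float:
    """Thermal-diffusion component (permil), Eqs. (8) and (12)."""
    t_mean = 0.5 * (t_top + t_bot)                 # simplification for T-bar
    alpha_t = (8.656 - 1232.0 / t_mean) * 1.0e-3   # Eq. (12), dimensionless
    return ((t_top / t_bot) ** alpha_t - 1.0) * 1000.0

# ~70 m of diffusive firn at 243 K gives about 0.34 permil gravitational
# signal; a 1 K top-to-bottom gradient gives about 0.015 permil.
print(d15n_grav(70.0, 243.0), d15n_therm(244.0, 243.0))
```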
2.3 Measurement, input data and timescale
2.3.1 Timescale
For the entire study, the GICC05 chronology is used (Rasmussen et al., 2014; Seierstad et al., 2014). During the whole reconstruction procedure, the two input time series (surface temperature and accumulation rate) are split into two parts. The first part ranges from 20 to 10 520 years b2k (called the “Holocene section”) and the second one from 10 520 to 35 000 years b2k (“spin-up section”). The entire accumulation-rate input, as well as the spin-up section of the surface-temperature input, remains unchanged during the reconstruction procedure.
2.3.2 Accumulation-rate data
Besides surface temperatures, accumulation-rate data are needed to drive the firn model. In this study, we use the original accumulation rates reconstructed in Cuffey and Clow (1997) using an ice-flow model for the Greenland Ice Sheet Project Two (GISP2) location, adapted here to the GICC05 chronology (Rasmussen et al., 2008; Seierstad et al., 2014). A detailed description of the adaptation procedure can be found in Sect. S1 of the Supplement. The raw accumulation-rate data for the main part of the spin-up section (12 000 to 35 000 years b2k) are linearly interpolated to a 20-year grid and low-pass filtered with a 200-year COP using cubic-spline filtering (Enting, 1987). For the Holocene section (20–10 520 years b2k) and the transition between the Holocene and spin-up sections (10 520 to 12 000 years b2k), the raw accumulation-rate data are linearly interpolated to a 1-year grid to obtain the equidistant integer point-to-point distances necessary for the reconstruction and to preserve as much information as possible for this time period (Fig. 2a). Except for these technical adjustments, the accumulation-rate input remains unmodified, assuming high reliability of these data during the Holocene. The accumulation data were reconstructed using annual-layer counting and a thinning model, which should lead to a maximum relative uncertainty of 10 % for the first 1500 m of the 3000 m ice core (Cuffey and Clow, 1997). From the three accumulation-rate scenarios reconstructed in Cuffey and Clow (1997) and adapted here to the GICC05 chronology, the intermediate one is chosen (red curves in Fig. S1 in the Supplement). Since the differences between the scenarios are not important for the evaluation of the reconstruction approach, they are not taken into account in this study.
Additionally, two sensitivity experiments were conducted (see Sect. S2 in the Supplement) in order to investigate (i) the influence of low-pass filtering of the high-resolution accumulation rates on the model outputs and (ii) the possible contribution of the accumulation-rate variability to the δ15N data during the Holocene. The first experiment shows that filtering the accumulation rates with cut-off periods in the range of 20 to 500 years has nearly no influence on the modelled δ15N or lock-in depth as long as the major trends are conserved. The second experiment leads to the finding that the accumulation-rate variability explains about 12 to 30 % of the δ15N variability; the 30 % value corresponds to the 8.2 kyr event, and 12 % to the mean over the whole Holocene period including the 8.2 kyr event. Hence, the influence of accumulation changes, excluding the extreme 8.2 kyr event, is generally below 10 % during most parts of the Holocene.
2.3.3 δ18O data
Oxygen-isotope (δ18O) data from GISP2 ice-core water samples, measured at the University of Washington's Quaternary Isotope Laboratory, are used to construct the surface-temperature input of the model spin-up (12 to 35 kyr b2k; Grootes et al., 1993; Grootes and Stuiver, 1997; Meese et al., 1994; Steig et al., 1994; Stuiver et al., 1995; data availability: Grootes and Stuiver, 1999). The raw data are filtered and interpolated in the same way as the accumulation-rate data for the spin-up part.
2.3.4 Surface-temperature spin-up
The surface-temperature history of the spin-up section (Fig. 2b) is obtained by calibrating the filtered and interpolated δ18O data according to
T_spin(t_i) = a δ18O(t_i) + b   (13)
using the values for the temperature sensitivity (slope a) and offset (intercept b) found by Kindler et al. (2014) for the North Greenland Ice Core Project (NGRIP) ice core, assuming a linear relationship of δ18O with temperature. The calibration is anchored to the values −35.2 ‰ and −31.4 °C, the modern-time parameters for the GISP2 site (Grootes and Stuiver, 1997; Schwander et al., 1997). The spin-up is needed to bring the firn model to a well-defined starting condition that takes possible memory effects (influence of earlier conditions) of firn states into account.
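In code, this calibration is a single affine map. A minimal sketch (our own Python illustration; the slope must be taken from Kindler et al. (2014) and is passed in as a parameter rather than hard-coded, since we only reproduce the anchoring to the modern GISP2 values):

```python
def spinup_temperature(d18o, slope, d18o_modern=-35.2, t_modern=-31.4):
    """Linear d18O-temperature calibration, Eq. (13), written in the
    anchored form T = slope*(d18O - d18O_0) + T_0 (deg C, permil)."""
    return slope * (d18o - d18o_modern) + t_modern
```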
Figure 4
(I) S1–S5: synthetic smooth temperature scenarios for the construction of the target-temperature data. (II) S1–S5: synthetic target surface-temperature scenarios. (III) S1–S5: corresponding synthetic δ15N target time series.
Figure 5
(I) Synthetic target surface-temperature scenarios H1–H3. (II) Corresponding synthetic δ15N target time series H1–H3. (III) GISP2 measured δ15N data (Kobashi et al., 2008b).
2.3.5 Generating synthetic target data
In order to develop and evaluate the presented algorithm, eight temperature scenarios were constructed and used to model synthetic δ15N data, which later serve as targets for the reconstruction. Of these eight synthetic surface-temperature and related δ15N scenarios (S1–S5 and H1–H3), three datasets (the Holocene-like scenarios H1–H3) were constructed in such a way that the resulting δ15N time series are very close to the values measured by Kobashi et al. (2008b) in terms of variability (amplitudes) and frequency (data resolution) of the GISP2 nitrogen-isotope data (Figs. 4, 5).
The synthetic surface-temperature scenarios S1–S5 are created by generating a long-term temperature time series analogously to the Monte Carlo part of the reconstruction procedure for a single iteration step (see Sect. 2.1). The cut-off periods used for the filtering of the random values and the SD values (standard deviation of the random values; see Sect. 2.1) for the first five scenarios can be found in Table 3. The long-term temperatures (Fig. 4I) are calculated on a 20-year grid, which is close to the mean time resolution of the GISP2 measurement values of about 17 years (Kobashi et al., 2008b). For the Holocene-like scenarios, the smooth temperature time series were generated from the temperature reconstruction for the GISP2 δ15N data (not shown here); the final Holocene surface-temperature solution was filtered with a 100-year cut-off period to obtain the long-term temperature scenario.
Following this, high-frequency information is added to the long-term temperature histories. A set of normally distributed random numbers with a zero mean and a standard deviation (1σ) of 1 K for scenarios S1–S5 and of 0.3 K for the Holocene-like scenarios H1–H3 is generated on the same 20-year grid and added to the long-term temperature time series. Finally, the resulting synthetic target-temperature scenarios (Figs. 4II, 5I) are linearly interpolated to a 1-year grid.
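Reusing the `perturb` sketch from Sect. 2.1.2, the construction of one synthetic scenario can be illustrated as follows (our own Python sketch; the grid length and the constant base temperature are placeholders, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def synthetic_scenario(n_years=10500, dt=20, sd_hf=1.0):
    """One synthetic target: a smooth Monte Carlo style series plus
    normally distributed high-frequency noise (1 sigma = 1 K for S1-S5,
    0.3 K for H1-H3), finally interpolated to a 1-year grid."""
    t20 = np.arange(0.0, n_years, dt)                  # 20-year grid
    smooth = perturb(np.full(t20.size, 243.55), dt_years=dt)
    target = smooth + rng.normal(0.0, sd_hf, t20.size)  # add HF noise
    t1 = np.arange(0.0, n_years)                        # 1-year grid
    return t1, np.interp(t1, t20, target)
```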
Table 3
Cut-off periods (COPs) and SD values used for creating the smooth synthetic temperature scenarios according to the Monte Carlo approach.
Scenario | COP (years) | SD
---|---|---
S1 | 1135 | 0.2065
S2 | 1007 | 0.3967
S3 | 1177 | 0.4002
S4 | 1315 | 0.2952
S5 | 1244 | 0.2388
Figure 6
Evolution of the mean misfit D̄ of the modelled δ15N versus the synthetic δ15N target as a function of the number of iterations j in the Monte Carlo approach, for all synthetic target scenarios.
These synthetic temperatures are combined with the spin-up temperature and used together with the accumulation-rate input to feed the firn model. From the model output, the synthetic δ15N targets are calculated according to Sect. 2.2. The firn-model output provides ice-age as well as gas-age information. The final synthetic δ15N target time series (δ15N_syn) are intentionally set on the ice-age scale to mirror measured data, because no prior information is available on the gas-age – ice-age difference (Δage) for ice-core data.
Figure 7
(a–c) First-guess (g,0) versus Monte Carlo (mc,fin) solution for scenario S5: (a) synthetic δ15N target (black dotted line), modelled δ15N time series for the first-guess input (blue line) and Monte Carlo solution (red line). (b) Histogram of the pointwise δ15N mismatches for the first-guess solution (blue) and the Monte Carlo solution (red) versus the synthetic target. (c) Time series of the pointwise δ15N mismatches for the first-guess solution (blue) and the Monte Carlo solution (red) versus the synthetic target. (d–f) First-guess versus Monte Carlo surface-temperature solution for scenario S5: (d) synthetic surface-temperature target (black dotted line), first-guess temperature input (blue line) and Monte Carlo solution (red line). (e) Histogram of the pointwise temperature mismatches for the first-guess solution (blue) and the Monte Carlo solution (red) versus the synthetic surface-temperature target. (f) Time series of the pointwise temperature mismatches for the first-guess solution (blue) and the Monte Carlo solution (red) versus the synthetic surface-temperature target.
3 Results
3.1 Monte Carlo type input generator
Figure 6 shows the evolution of the misfit between the synthetic target data (δ15N_syn) and the modelled δ15N output of the Monte Carlo part (step 2) as a function of the applied iterations j for all synthetic scenarios. All scenarios show a steep decline of the mismatch during the first 50 to 200 iterations, followed by a rather moderate decrease that finally levels off at a constant value. During the Monte Carlo part, it was possible to reduce the δ15N misfit by about 15 to 75 % relative to the first-guess solution, depending on the scenario and the mismatch of the first-guess solution (see Table 2). This translates into a reduction of the temperature mismatches of about 51 to 87 % relative to the first-guess temperature mismatch.
Figure 8
(a–d) δ15N: (a) synthetic δ15N target (black dotted line), modelled δ15N time series after adding the high-frequency information (hf, blue line) and after the correction (corr, red line) for scenario S5. (b) Zoom-in on a randomly chosen 500-year interval, showing the decrease of the mismatch after the correction compared to the high-frequency solution. (c) Histogram of the pointwise δ15N mismatches of the synthetic target versus the Monte Carlo solution (mc,fin; grey), the high-frequency solution (hf; blue) and the correction (corr; red). The 95 % quantile is 4.9 permeg (yellow line) and is used as an estimate for the 2σ uncertainty of the final solution. (d) Time series of the pointwise δ15N mismatches of the synthetic target versus the high-frequency solution (hf; blue) and the correction (corr; red). (e–h) Temperature: (e) synthetic temperature target (black dotted line), modelled temperature time series after adding the high-frequency information (hf; blue line) and after the correction (corr; red line). (f) Zoom-in on a randomly chosen 500-year interval, showing the decrease of the mismatch after the correction compared to the high-frequency solution. (g) Histogram of the pointwise mismatches of the synthetic temperature target versus the Monte Carlo solution (mc,fin; grey), the high-frequency solution (hf; blue) and the correction (corr; red). The 95 % quantile is 0.37 K (yellow line) and is used as an estimate for the 2σ uncertainty of the final solution. (h) Time series of the pointwise mismatches of the synthetic temperature target versus the high-frequency solution (hf; blue) and the correction (corr; red).
Figure 7 provides the comparison of the first-guess (g,0; step 1) and Monte Carlo (mc,fin; step 2) solutions versus the synthetic target data (syn) for the modelled δ15N (Fig. 7a–c) and surface-temperature values (Fig. 7d–f) for scenario S5. Subplots (a) and (d) show the time series of the synthetic target (black dotted line), the first-guess solution (blue line) and the Monte Carlo solution (red line) for δ15N and temperature. Subplots (b) and (e) show the distributions of the pointwise mismatches of the first-guess (blue) and Monte Carlo solutions (red) versus the synthetic target data for δ15N (D_i(δ15N)) and temperature (D_i(T)). Subplots (c) and (f) contain the corresponding time series of D_i(δ15N) and D_i(T). The D_i(δ15N) data (red) are used to calculate the high-frequency signal that is superimposed on the long-term temperature solution according to Eqs. (3) and (5) (see Sect. 2.1, step 3). From Fig. 7, it can be concluded that the Monte Carlo part of the reconstruction algorithm (step 2) leads to two major improvements over the first-guess solution. First, the Monte Carlo approach corrects the offsets of the first-guess input (g,0), which shifts the midpoints of the distributions of D_i(δ15N) and D_i(T) to zero (blue against red in Fig. 7b, e). Second, the distributions become more symmetric and the misfit is reduced overall (the distributions become narrower) compared to the first guess, owing to the middle passage through the targets. These improvements can be observed for all eight synthetic scenarios, showing the robustness of the Monte Carlo part (see Table 2, Fig. 7).
Table 4
Summary of the high-frequency (hf) and correction (corr) parts of the reconstruction approach. D̄ is the mean mismatch between the modelled δ15N (or temperature) data and the synthetic δ15N (or temperature) target. σ is the standard deviation of the pointwise mismatches D_i. The 95 % quantiles (2σ(δ15N) or 2σ(T)) of the pointwise δ15N (or temperature) mismatches are used as an estimate for the 2σ uncertainty of the final solution (values in bold).
Scenario | S1 | S2 | S3 | S4 | S5 | H1 | H2 | H3
---|---|---|---|---|---|---|---|---
D̄(δ15N)_hf (permeg) | 2.7 | 3.6 | 4.3 | 3.2 | 3.5 | 2.1 | 2.5 | 2.6
Improvement (hf vs. MC) (%) | 76.1 | 71.0 | 66.1 | 73.1 | 69.6 | 63.8 | 63.8 | 68.3
σ_hf(δ15N) (permeg) | 3.5 | 4.6 | 5.4 | 4.0 | 4.3 | 2.7 | 3.1 | 3.3
D̄(δ15N)_corr (permeg) | 1.7 | 2.1 | 2.6 | 1.9 | 2.0 | 1.2 | 1.3 | 1.6
Improvement (corr vs. hf) (%) | 37.0 | 41.7 | 39.5 | 40.6 | 42.9 | 42.9 | 48.0 | 38.5
σ_corr(δ15N) (permeg) | 2.2 | 2.7 | 3.3 | 2.4 | 2.5 | 1.5 | 1.7 | 1.9
2σ(δ15N) (permeg) | 4.4 | 5.3 | 6.3 | 4.7 | 4.9 | 3.0 | 3.4 | 3.7
D̄(T)_hf (K) | 0.20 | 0.32 | 0.33 | 0.25 | 0.27 | 0.18 | 0.21 | 0.22
σ_hf(T) (K) | 0.26 | 0.40 | 0.43 | 0.32 | 0.35 | 0.22 | 0.26 | 0.27
D̄(T)_corr (K) | 0.12 | 0.18 | 0.20 | 0.14 | 0.15 | 0.10 | 0.11 | 0.12
σ_corr(T) (K) | 0.15 | 0.24 | 0.25 | 0.19 | 0.19 | 0.12 | 0.14 | 0.15
2σ(T) (K) | 0.31 | 0.48 | 0.51 | 0.38 | 0.37 | 0.23 | 0.27 | 0.30
Figure 8 provides the comparison between the Monte Carlo (mc,fin; step 2), high-frequency (hf; step 3) and correction (corr; step 4) parts of the reconstruction procedure for scenario S5. Data for all other scenarios can be found in Table 4. The upper four plots (Fig. 8a–d) illustrate each reconstruction step and its effect on the modelled δ15N; the bottom four plots (Fig. 8e–h) show the corresponding results for temperature. Plots (a) and (e) contain the time series of the synthetic δ15N or temperature target (syn; black dotted line), the high-frequency solution (hf; blue line) and the final solution after the correction part (corr; red line). For visibility reasons, subplots (b) and (f) display a zoom-in on a randomly chosen time window of about 500 years for the same quantities, which shows the excellent agreement in timing and amplitude of the modelled δ15N and temperature with the synthetic target data. Histograms (c) and (g) and subplots (d) and (h) show the distributions and the time series of the pointwise mismatches (D_i(δ15N) for δ15N; D_i(T) for temperature) between the modelled and the synthetic target data for each reconstruction step.
Compared to the Monte Carlo solution, the high-frequency part leads to a large refinement of the reconstructions. For the mean misfits D̄, the improvement between the Monte Carlo and high-frequency parts is in the range of 64 to 76 % for δ15N (see Table 4), which translates into a reduction of the temperature mismatches of 43 to 67 %. The standard deviations (1σ) of the pointwise mismatches (Fig. 8c, d, g, h) after the high-frequency part are in the range of about 2.7 to 5.4 permeg (one permeg equals 10⁻⁶) for δ15N and 0.22 to 0.40 K for the reconstructed temperatures, depending on the scenario, which is clearly visible in the decreasing width of the histograms (Fig. 8c and g, blue against grey).
Figure 9
(I) Counts of the COPs and (II) counts of the SD values used to create the improvements for the smooth temperature solutions of the Monte Carlo input generator for all synthetic scenarios (S1–S5 and H1–H3). An SD value of 0.1, for example, means that the maximum allowed perturbation of one temperature value is 10 %.
The mismatches after the correction part of the reconstruction approach show a clear further decrease of the misfits. The widths of the distributions of the pointwise mismatches, as well as the mean mismatches D̄, are further reduced, and the distributions become more symmetric (the long tails disappear; see histograms c and g, red against blue, of Fig. 8). The time series of the mismatches (Fig. 8d and h) clearly illustrate that the correction approach mainly tackles the extreme deviations (sharp reduction of the occurrence of extreme values in the red relative to the blue distribution), leading to a further improvement of about 40 % in δ15N and temperature. Finally, the 95 % quantiles (2σ(δ15N), 2σ(T)) of the remaining pointwise mismatches of δ15N and temperature were calculated for the final solutions of all scenarios and are used as an estimate for the 2σ uncertainty of the reconstruction algorithm (see Fig. 8c, g and Table 4). The final uncertainties (2σ) are of the order of 3.0 to 6.3 permeg for δ15N and 0.23 to 0.51 K for the surface-temperature misfits. It is noteworthy that the per-point measurement uncertainties of state-of-the-art δ15N measurements are of the same order of magnitude, i.e. 3 to 5 permeg (Kobashi et al., 2008b), highlighting the effectiveness of the presented fitting approach. Table 5 contains the final mismatches (2σ) in Δage between the synthetic targets and the final modelled data after the correction step for all scenarios and shows that, with a known accumulation rate and assumed perfect firn physics, it is possible to fit the Δage history in the Holocene with mean uncertainties better than 2 years. In other words, the uncertainty in the Δage reconstruction due to the inversion algorithm alone is of the order of 2 years.
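The 2σ estimate used here is simply the 95 % quantile of the absolute pointwise mismatches; a minimal sketch (our own Python illustration):

```python
import numpy as np

def two_sigma_estimate(d_i: np.ndarray) -> float:
    """2-sigma uncertainty estimate: the 95 % quantile of the absolute
    pointwise mismatches, as quoted in Tables 4 and 5."""
    return float(np.quantile(np.abs(d_i), 0.95))
```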
4 Discussion
4.1 Monte Carlo type input generator
Figure 9 shows the distributions of the COPs (I) and SD values (II) used to create the improvements (Sect. 2.1, step 2) for all scenarios. The cut-off periods are more or less evenly distributed, which shows that nearly the whole allowed frequency range (500 to 2000 years) was used to create the improvements during the iterations. In contrast, the distributions of the SD values clearly show that mostly small SD values produced the improvements, which implies that iterations with small perturbations are more likely to lead to an improvement than larger ones.
Table 5
Final mismatches (2σ(Δage)) between the corrected solution and the synthetic targets for all scenarios.
Scenario | 2σ(Δage) (yr) | Scenario | 2σ(Δage) (yr)
---|---|---|---
S1 | 1.14 | S5 | 1.24
S2 | 1.60 | H1 | 1.23
S3 | 1.98 | H2 | 1.18
S4 | 1.41 | H3 | 1.30
Figure 6 reveals a weak point of the Monte Carlo part, namely the absence of a suitable termination criterion for the optimization. At present, the maximum number of iterations is set by the user, or the iterations are terminated after a certain time (e.g. 15 h). Figure 6 shows that for nearly all scenarios it would be possible to stop the optimization after about 400 iterations, because only small additional improvements occur later on. This would decrease the time needed for the Monte Carlo part to about 10 h (a single iteration needs about 90 s). Since the goal of the Monte Carlo part is to find a temperature realization that leads to an optimal middle passage through the target data, one possibility would be to use the mean difference between the target and spline-filtered target data, for a certain cut-off period, as a termination criterion; this issue is currently under investigation. Another way to decrease the time needed for the Monte Carlo part is to increase the number of CPUs used for the parallelization of the model runs. For this study, an eight-core parallelization was used, and a further increase in the number of workers would speed up the optimization.
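Such a stopping rule could be sketched as follows (our own Python illustration, again with a Butterworth filter standing in for the cubic-spline low-pass; the choice of cut-off period is precisely the open question of the discussed criterion):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def termination_level(target, dt_years=1.0, cop=500.0):
    """Candidate termination criterion: the mean absolute distance
    between the target and a low-pass-filtered version of itself.
    The Monte Carlo search could stop once D-bar approaches this level."""
    sos = butter(3, 2.0 * dt_years / cop, output="sos")
    smooth = sosfiltfilt(sos, target)
    return float(np.mean(np.abs(target - smooth)))
```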
4.2 High-frequency step and final correction
To investigate the timing and contributions of the remaining mismatches in δ15N and temperature for scenario S1 after the high-frequency (step 3) and correction (step 4) parts, different cross-correlation experiments were conducted (see Sect. S3 in the Supplement). The experiments led to consistent results. The major fraction of the final mismatches of δ15N emerges from mismatches in the thermal-diffusion component δ15N_therm. A cancellation effect between the gravitational component δ15N_grav and δ15N_therm in the total δ15N mismatch also became obvious, affecting the calculation of lag_max and lag_min and most likely leading to a fundamental residual uncertainty at the low-permeg level for the corrected δ15N data. The same analyses were conducted for all synthetic scenarios, leading to similar results.
Figure 10
Fitting of GISP2 Holocene δ40Ar/4 (a–d) and δ15N_excess (e–h) data (measurement data from Kobashi et al., 2008b): (a) measured versus modelled δ40Ar/4 time series. (b) Zoom-in for the same quantity as in panel (a). (c) Time series of the final mismatches of the measured minus the modelled δ40Ar/4 data. (d) Histogram of the same quantity as in panel (c), showing an overall final mismatch (2σ) of 4.0 permeg and an offset (os) of 0.5 permeg. (e) Measured versus modelled δ15N_excess time series. (f) Zoom-in for the same quantity as in panel (e). (g) Time series of the final mismatches of the measured minus the modelled δ15N_excess data. (h) Histogram of the same quantity as in panel (g), showing an overall final mismatch (2σ) of 3.7 permeg and an offset (os) of 0.8 permeg.
Additionally, the influence of the window length used for the calculation of IF on the correction was analysed, showing that for all investigated window lengths the correction reduces the mismatches of δ15N and temperature, whichever correction mode was used (calculated with xcf_max, xcf_min or both quantities). Moreover, the correction is most efficient for window lengths in the range of 100 to 300 years, with an optimum at 200 years for all cases.
4.3 Key points to be considered for the application to real data
4.3.1 Benefits of the novel gas-isotope fitting approach
In addition to fitting δ15N data, the algorithm is able to fit δ40Ar and δ15N_excess data as well, using the same basic concepts (Fig. 10). Here, the δ40Ar and δ15N_excess data from Kobashi et al. (2008b) were used as the fitting targets. We reach final mismatches (2σ) of 4.0 permeg for δ40Ar/4 and 3.7 permeg for δ15N_excess, which are for both quantities below the analytical measurement uncertainties of 4.0 to 9.0 permeg for δ40Ar/4 and 5.0 to 9.8 permeg for δ15N_excess (Kobashi et al., 2008b).
The automated inversion of different gas-isotope quantities (δ15N, δ40Ar, δ15N_excess) provides a unique opportunity to study the differences between the solutions gained from different targets and to improve our knowledge of the uncertainties of gas-isotope-based temperature reconstructions using a single firn model. Moreover, the presented algorithm is not tied to one firn model, meaning that it can be coupled to different firn models describing firn physics in different ways. Furthermore, an automated reconstruction algorithm that avoids manual manipulation and leads to reproducible solutions makes it possible, for the first time, to study and learn from the differences between solutions matching different targets. Finally, differences obtained by applying different firn physics (densification equations, convective zone, etc.) with the very same inversion algorithm may help to assess firn-model shortcomings, resulting in more robust uncertainty estimates than was previously possible.
In this publication, we show the functionality and the basic concepts of the automated inversion algorithm using well-known synthetic fitting targets. In this "perfect-world scenario", the forward problem, converting surface temperature to δ15N, as well as the inverse problem, converting δ15N to surface temperature, is completely described by the used firn model. Consequently, all sources of signal noise are ignored. For the later use of the algorithm on measured δ15N, δ40Ar or δ15N_excess data, this will no longer be the case, owing to the different sources of signal noise in the measured data. As a result, differences between the temperature solutions obtained from the individual targets (δ15N, δ40Ar, δ15N_excess) will become apparent. These differences will allow us to quantify the uncertainties associated with different unconstrained processes. In the following, we list and discuss potential sources of uncertainty and provide suggestions for their handling and quantification in our approach.
4.3.2 Measurement uncertainty and firn heterogeneity (centimetre-scale variability)
Many studies have investigated the influence of firn heterogeneity (or density fluctuations) on measurements of air components and quantities extracted from ice cores (e.g. δ15N, δ40Ar, CH4, CO2, the O2/N2 ratio, air content), resulting in centimetre-scale variability and leading to additional noise in the measured data (e.g. Capron et al., 2010; Etheridge et al., 1992; Fourteau et al., 2017; Fujita et al., 2009; Hörhold et al., 2011; Huber and Leuenberger, 2004; Rhodes et al., 2013, 2016). Using discrete measurement techniques instead of continuous sampling methods makes it difficult to quantify these effects. However, during discrete analyses of ice-core air data, it is common to measure replicates for given depths, from which the measurement uncertainties of the gas-isotope data are calculated using the pooled standard deviation (Hedges and Olkin, 1985). Often, it is not possible to take true replicates (same depth), and the replicates are instead taken from nearby depths. Hence, any potential centimetre-scale variability is to some degree already included in the measurement uncertainty, because each measurement point represents the average over a few centimetres of ice. This is especially the case for low-accumulation sites or glacial ice samples, for which the vertical length of a sample (e.g. 10–25 cm for the glacial part of the NGRIP ice core; Kindler et al., 2014) covers the equivalent of 20 to 50 years of ice at approximately 35 kyr b2k. Increasing the depth resolution of the samples would increase our knowledge of centimetre-scale variability, e.g. for identifying anomalously entrapped gas layers that were rapidly isolated from the surface by an overlying high-density layer (e.g. Rosen et al., 2014). As this variability is likely due to heterogeneity in the density profile, modelling such heterogeneities (if possible at all) may not help to better reconstruct a meaningful temperature history but rather reproduce the source of noise. In many cases, the potential centimetre-scale variability is thus already incorporated in the analytical noise obtained from gas-isotope measurements, due to the analytical techniques themselves. Assuming the measurement uncertainty to be Gaussian distributed, it is easy to incorporate this source of uncertainty in the inverse-modelling approach presented here; it increases the uncertainty of the temperature according to Eq. (3). The same equation can also be used for calculating the temperature uncertainty related to measurement uncertainty in general.
To answer the pertinent question of how to better extract a meaningful temperature history from a noisy ice-core record, an excellent – but costly – solution is of course to use multiple ice cores. For example, a δ15N-based temperature reconstruction combining data from the GISP2 ice core with those from its “sister” core GRIP, drilled 30 km away, is likely one of the best ways to overcome potential centimetre-scale variability. A comparison of ice cores drilled even closer together might be even more advantageous.
4.3.3 Smoothing effects due to gas diffusion and trapping
It is known that gas-diffusion and trapping processes in the firn smooth out fast signals and damp the amplitudes of gas-isotope signals (e.g. Grachev and Severinghaus, 2005; Spahni, 2003). For Holocene conditions in Greenland, the duration of gas diffusion from the top of the diffusive column to the bottom, where the air is closed off in bubbles, is of the order of 10 years (Schwander et al., 1997), whereas the data resolution of the synthetic targets was set to 20 years to mimic the measurement data from Kobashi et al. (2008b), which have a mean data resolution of about 17 years (see Sect. 2.3: “Generating synthetic target data”). Kindler et al. (2014) showed that a glacial Greenland lock-in depth (LID) leads to a damping of the δ15N signal of about 30 % for a 10 K temperature rise in 20 years. We further assume that the smoothing due to the lock-in process is negligible for Greenland Holocene conditions because of the much smaller signal amplitudes and the shallower LID. Yet, for glacial conditions, it requires attention.
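To get a feeling for the magnitude of this effect, the following sketch convolves a synthetic 20-year, 10 K temperature excursion with a Gaussian stand-in for a gas-age distribution of roughly 10-year width; the kernel shape and all numerical values are illustrative assumptions rather than firn-model output.

```python
import numpy as np

# Synthetic surface-temperature anomaly: a 10 K excursion lasting 20 years,
# sampled annually (all numbers are illustrative)
anomaly = np.zeros(400)
anomaly[190:210] = 10.0

# Gas-age distribution approximated by a Gaussian of ~10 yr width
k = np.arange(-40, 41)
kernel = np.exp(-0.5 * (k / 10.0) ** 2)
kernel /= kernel.sum()                  # normalize so the mean is preserved

smoothed = np.convolve(anomaly, kernel, mode="same")
damping = 1.0 - smoothed.max() / anomaly.max()
print(f"amplitude damping: {damping:.0%}")  # roughly 30 % for these numbers
```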
4.3.4 Accumulation-rate uncertainties
For the synthetic data study presented in this paper, it is assumed that the used accumulation-rate data are perfectly known, with zero uncertainty. This simplification allows us to show the functionality and basic concepts of the presented fitting algorithm in every detail on well-known δ15N and temperature targets and to focus on the final uncertainties originating from the fitting algorithm itself. For the later reconstruction using measured gas-isotope data together with the published accumulation-rate scenarios shown in Sect. S1 of the Supplement, this will no longer be the case. Uncertainties in layer counting and in corrections for ice thinning lead to a fundamental uncertainty, which can easily exceed 10 % in the early Holocene. As the accumulation-rate data are used to run the firn model, all potential accumulation uncertainties are in part incorporated into the temperature reconstruction. On the other hand, as discussed in Sect. S2 of the Supplement, the accumulation-rate variability has a minor impact on the variability of δ15N data in the Holocene compared to the input temperature. The influence of the two quantities, accumulation rate and temperature, on the temperature reconstruction is not equal; during the Holocene, accumulation-rate variability explains about 12 to 30 % of the δ15N variability, where 30 % corresponds to the 8.2 kyr event and 12 % to the mean over the whole Holocene period including the 8.2 kyr event. Hence, the influence of accumulation changes, excluding the extreme 8.2 kyr event, is generally below 10 % during the Holocene. If the accumulation is assumed to be completely correct, the missing part will be assigned to temperature variations. Nevertheless, for the fitting of the Holocene measurement data, we will use all three accumulation-rate scenarios shown in Sect. S1 of the Supplement. The differences in the reconstructed temperatures arising from the differences between these three scenarios will also be used for the uncertainty calculation and are most likely larger than the uncertainty arising from the production of the accumulation-rate data and from their conversion to the GICC05 timescale.
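One straightforward way to propagate this scenario dependence, sketched below under the assumption of three completed inversion runs, is to treat half the spread between the scenario-specific temperature solutions as an additional error term and combine it in quadrature with the fitting uncertainty; all arrays and numbers here are illustrative stand-ins, not reconstruction results.

```python
import numpy as np

rng = np.random.default_rng(0)
time = np.arange(0, 10000, 20)   # yr b2k, 20-year resolution (illustrative)

# Stand-ins for the temperature solutions obtained with three published
# accumulation-rate scenarios; in practice each row is a full inversion run
T_base = -31.0 + 0.5 * np.sin(2.0 * np.pi * time / 3000.0)
T_scen = np.vstack([T_base + 0.1 * rng.standard_normal(time.size)
                    for _ in range(3)])

T_mean = T_scen.mean(axis=0)
# Half the scenario spread as the accumulation-related error term, combined
# in quadrature with the fitting uncertainty of the algorithm itself
sigma_acc = 0.5 * (T_scen.max(axis=0) - T_scen.min(axis=0))
sigma_fit = 0.25                 # K; illustrative one-sigma fitting uncertainty
sigma_total = np.sqrt(sigma_acc ** 2 + sigma_fit ** 2)
```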
4.3.5 Convective zone variability
Many studies have shown the existence of a well-mixed zone at the top of the diffusive firn column, the so-called convective zone (CZ), formed by strong katabatic winds and pressure gradients between the surface and the firn (e.g. Kawamura et al., 2006, 2013; Severinghaus et al., 2010). The existence of a CZ changes the gravitational background signal in δ15N and δ40Ar, as it reduces the diffusive-column height. The presented fitting algorithm was used together with the two firn models most frequently used for temperature reconstructions based on stable isotopes of air: the Schwander et al. (1997) model, which has no CZ built in (or, more precisely, a constant CZ of 0 m), and the Goujon model (Goujon et al., 2003), which assumes a constant CZ over time that can easily be set in the code. This difference between the two firn models significantly changes only the absolute temperature, not the temperature anomalies, as shown by other studies (e.g. Guillevic et al., 2013, Fig. 3). In the presented work, we show the results using the model of Schwander et al. (1997), because the differences between the solutions obtained with the two models are negligible except for a constant temperature offset. It is also noteworthy that no firn model currently implements a dynamically changing CZ. This should indeed be investigated but requires substantial additional work. Moreover, the knowledge of the time evolution of CZ changes over periods of millennia to several hundreds of millennia (in frequency and magnitude) is too poor to estimate the influence of this quantity on the reconstruction. In principle, it is possible to cancel out the influence of a potentially changing CZ by using δ15Nexcess data for temperature reconstruction, because the subtraction of δ40Ar/4 from δ15N eliminates the gravitational term of the signals. From that point of view, it will be interesting to compare temperature solutions gained from fitting δ15Nexcess with solutions based on δ15N or δ40Ar alone. This can offer a useful tool for quantifying the magnitude and frequency of CZ changes in the time interval of interest.
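The cancellation can be illustrated with the barometric relation for gravitational enrichment, as in the following sketch; the column height, firn temperature and gravity value are illustrative assumptions, and the function is a simplified stand-in for the full firn-model treatment.

```python
import numpy as np

G = 9.82   # m s-2, approximate gravity at Greenland Summit (assumed)
R = 8.314  # J mol-1 K-1, gas constant

def grav_signal(delta_m_amu, z, T):
    """Gravitational enrichment (permeg) of an isotope pair with mass
    difference delta_m_amu (amu) at the bottom of a diffusive column of
    height z (m) and mean firn temperature T (K), barometric equation."""
    dm = delta_m_amu * 1e-3                      # amu -> kg mol-1
    return (np.exp(dm * G * z / (R * T)) - 1.0) * 1e6

z, T = 60.0, 242.0                               # illustrative Holocene values
d15N_grav = grav_signal(1.0, z, T)               # 15N/14N: 1 amu mass difference
d40Ar_grav = grav_signal(4.0, z, T)              # 40Ar/36Ar: 4 amu mass difference

# For purely gravitational fractionation, d40Ar/4 is almost identical to d15N,
# so d15Nexcess = d15N - d40Ar/4 is insensitive to diffusive-column (and hence
# convective-zone) changes and isolates the thermal-diffusion signal.
d15N_excess_grav = d15N_grav - d40Ar_grav / 4.0  # close to zero (permeg)
```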
It is known that for some very low accumulation-rate sites in areas with strong katabatic winds (e.g. the “megadunes” areas in Antarctica), extremely deep CZs can occur, which are potentially able to smooth out even decadal-scale temperature variations (Severinghaus et al., 2010). For this, however, the CZ would need to be several tens of metres deep, which is highly unrealistic even for glacial conditions at Summit (Guillevic et al., 2013; see the discussion in Annex A4, p. 1042), let alone for the rather stable Holocene period in Greenland, for which neither low accumulation nor strong katabatic winds are to be expected.
4.4 Proof of concept for glacial data
For glacial conditions, the task of reconstructing temperature (with correct frequency and magnitude) without using δ18O information is much more challenging due to the highly variable gas-age/ice-age differences (Δage) between stadial and interstadial conditions. Here, contrary to the rather stable Holocene period, Δage can vary by several hundreds of years. Also, the accumulation-rate data are more uncertain than for the Holocene. To prove that the presented fitting algorithm also works for glacial conditions, we inverted the δ15N data measured for the NGRIP ice core by Kindler et al. (2014) for two Dansgaard–Oeschger events, namely DO6 and DO7. Since the magnitudes of these events are higher and the signals are smoother than in the Holocene, we only had to use the Monte Carlo type input generator (see Sect. 2.1.2) for changing the temperature inputs. To compare our results to the δ18O-based and manually calibrated values from Kindler et al. (2014), we use the ss09sea06bm timescale (NGRIP members, 2004; Johnsen et al., 2001), as was done in the Kindler et al. (2014) publication. For the model spin-up, we use the accumulation-rate and temperature data from Kindler et al. (2014) for the time span 36.2 to 60 kyr. The reconstruction window (containing DO6 and DO7) is set to 32 to 36.2 kyr. As the first guess (starting point) of the reconstruction, we use the accumulation-rate data (Acc) for NGRIP from the ss09sea06bm timescale together with a constant temperature of about −49 °C for this time window. As the minimization criterion for the reconstruction, we use the sum of the root mean squared errors of the δ15N and Δage mismatches weighted with their uncertainties (wRMSE), instead of the mean δ15N misfit alone as used for the Holocene (Eq. 1):

\mathrm{wRMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\delta^{15}\mathrm{N}_{\mathrm{mod},i}-\delta^{15}\mathrm{N}_{\mathrm{meas},i}}{\sigma_{\delta^{15}\mathrm{N},i}}\right)^{2}} + \sqrt{\frac{1}{M}\sum_{j=1}^{M}\left(\frac{\Delta\mathrm{age}_{\mathrm{mod},j}-\Delta\mathrm{age}_{\mathrm{meas},j}}{\sigma_{\Delta\mathrm{age},j}}\right)^{2}}

Here, σ(δ15N,i) and σ(Δage,j) are the uncertainties in δ15N and Δage for the measured values δ15N(meas,i) and Δage(meas,j) (Δage match points: Guillevic, 2013, p. 65, Table 3.2), and N and M are the numbers of δ15N measurement values and Δage match points, respectively. We set σ(δ15N,i) to a constant value in permeg for all i (Kindler et al., 2014) and σ(Δage,j) to a constant value in years for all j. The relative uncertainties in Δage can easily reach 50 % and more in the glacial period when using the ss09sea06bm timescale, which results in a pre-eminence of the δ15N misfits over the Δage misfits (10 to 20 % when using the GICC05 timescale; Guillevic, 2013, p. 65, Table 3.2). Due to this issue, we set the Δage uncertainties to 50 years to make both terms equally important for the fitting algorithm. In Fig. 11, we show preliminary results. The δ15N and Δage fitting (Fig. 11a, b) and the resulting temperature and accumulation-rate solutions (Fig. 11c, d) obtained with the presented algorithm are completely independent from δ18O, which provides the opportunity to evaluate the δ18O-based reconstructions. In this study, the algorithm was used in three steps. First, starting from the first guess (constant temperature), the temperature was changed as explained before. The accumulation rate was changed in parallel to the temperature, allowing a random offset shift (up and down) together with a stretching or compressing (in the y direction) of the accumulation-rate signal over the whole time window (32 to 36.2 kyr). This first step leads to “Monte Carlo solution 0” (MCS0), which provides a first approximation and is the base for the next step. In the second step, we fixed the accumulation rate and let the algorithm change only the temperature to improve the fit (MCS1). Finally, we allow the algorithm to change the temperature together with the accumulation rate, using the Monte Carlo type input generator for both quantities.
This allows the algorithm to change the shape of the accumulation-rate data as well. This final step can be seen as a fine-tuning of the solutions gained in the steps before. The obtained mismatches in δ15N and Δage for all steps are of at least the same quality as, or better than, those of the δ18O-based manual method of Kindler et al. (2014) (see Table 6). The gained temperature solutions show very good agreement in timing and magnitude with the reconstruction of Kindler et al. (2014). Also, the accumulation-rate solutions show that the accumulation has to be reduced significantly compared to the ss09sea06bm data to allow a suitable fit of the δ15N and Δage target data, a result highly similar to Guillevic et al. (2013) and Kindler et al. (2014). The mismatches of the final solution (MCS FIN) are about 15 % smaller for δ15N (2σ) and about 31 % smaller for Δage (2σ) compared to the Kindler et al. (2014) solution. Keeping in mind that the presented approach is completely independent from δ18O, this strengthens the functionality and quality of the presented gas-isotope fitting approach for glacial reconstructions as well. As this section contains a proof of concept of the presented automated gas-isotope fitting algorithm on glacial data, only preliminary results of ongoing work were shown here. Furthermore, as the fitting algorithm was developed and tested primarily for Holocene-like data, its functionality for glacial data will most likely be further extended and adjusted in future studies.
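A minimal sketch of this minimization criterion, assuming that modelled and measured series have already been brought onto common match points, could look as follows; the function name and the commented example call are placeholders rather than the actual implementation.

```python
import numpy as np

def wrmse(d15n_mod, d15n_meas, sigma_d15n, dage_mod, dage_meas, sigma_dage):
    """Sum of the uncertainty-weighted root mean squared errors of the
    delta-15N and Delta-age mismatches (the glacial minimization criterion).

    Uncertainties may be scalars (constant values) or arrays per point.
    """
    d15n_term = np.sqrt(np.mean(((np.asarray(d15n_mod) - d15n_meas)
                                 / sigma_d15n) ** 2))
    dage_term = np.sqrt(np.mean(((np.asarray(dage_mod) - dage_meas)
                                 / sigma_dage) ** 2))
    return d15n_term + dage_term

# Illustrative call: sigma_d15n is the constant measurement uncertainty
# (permeg), and the Delta-age uncertainty is set to 50 yr as discussed above;
# all data arrays are placeholders.
# score = wrmse(d15n_model, d15n_target, sigma_d15n,
#               dage_model, dage_target, 50.0)
```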
Figure 11
Proof of concept for glacial reconstructions (NGRIP DO6 and DO7): (a) δ15N target plot: δ15N model output for the first-guess input (blue line), Kindler et al. (2014) fit (orange dotted line), Monte Carlo solution 0 (MCS0, yellow line), Monte Carlo solution 1 (MCS1, purple line), final Monte Carlo solution (MCS FIN, green line) and δ15N measurement target (black dotted line; measurement points are black circles; data from Kindler et al., 2014). (b) Δage target plot: Δage model output for the first-guess input (blue line), Kindler et al. (2014) fit (orange dotted line), Monte Carlo solution 0 (MCS0, yellow line), Monte Carlo solution 1 (MCS1, purple line), final Monte Carlo solution (MCS FIN, green line) and Δage measurement target (black dotted line; measurement points are black circles; data from Guillevic, 2013). (c) Temperature solution plot: first-guess input (blue line), Kindler et al. (2014) solution (orange dotted line), Monte Carlo solution 0 (MCS0, yellow line), Monte Carlo solution 1 (MCS1, purple line) and final Monte Carlo solution (MCS FIN, green line). (d) Accumulation-rate solution plot: first-guess input (blue line), Kindler et al. (2014) solution (orange dotted line), Monte Carlo solution 0 (MCS0, yellow line), Monte Carlo solution 1 (MCS1, purple line) and final Monte Carlo solution (MCS FIN, green line).
Table 6. Proof of concept for glacial reconstruction: wRMSE is the used minimization criterion (see Sect. 4.4).

| Solution | wRMSE | δ15N mismatch (2σ) (permeg) | Mean δ15N mismatch (permeg) | Δage mismatch (2σ) (yr) | Mean Δage mismatch (yr) |
|---|---|---|---|---|---|
| Kindler et al. (2014) | 3.6 | 44.5 | 17.9 | 256 | 101 |
| First guess | 7.8 | 128.7 | 63.8 | 328 | 138 |
| MCS0 | 3.1 | 50.0 | 19.3 | 199 | 82 |
| MCS1 | 2.9 | 44.3 | 17.6 | 200 | 84 |
| MCS FIN | 2.6 | 37.8 | 15.6 | 175 | 63 |

The mean mismatches for δ15N and Δage were calculated according to Eq. (1).
5 Conclusion
A novel approach is introduced and described for inverting a firn-densification and heat-diffusion model to fit the small gas-isotope variations observed throughout the Holocene. With this new fitting method, it is possible to extract the surface-temperature history that drives the firn state, which in turn produces the gas-isotope time series. The approach combines a Monte Carlo based iterative method with the analysis of the remaining mismatches between modelled and target data. The procedure is fully automated and well suited to parallel computing, which reduces computation time. Additional sensitivity experiments have shown that accumulation-rate changes have only a minor influence on short-term variations of δ15N, which are mainly driven by high-frequency temperature variations. To evaluate the performance of the presented approach, eight different synthetic δ15N time series were created from eight known temperature histories. The fitting approach leads to an excellent agreement in timing and amplitude between the modelled and synthetic δ15N and temperature data. The obtained final mismatches follow a symmetric normal-distribution function, and 95 % of the mismatches relative to the synthetic data lie within an envelope of 3.0 to 6.3 permeg for δ15N and 0.23 to 0.51 K for temperature, depending on the synthetic temperature scenario. These values can therefore be used as a 2σ estimate of the reconstruction uncertainty arising from the fitting algorithm itself. For δ15N, the obtained final uncertainties are of the same order of magnitude as state-of-the-art measurement uncertainties. The presented reconstruction approach was also successfully applied to measured δ15N and Δage data. Moreover, we have shown that the presented fitting approach can be applied to glacial temperature reconstructions with minor algorithm modifications. Based on the demonstrated flexibility of our inversion methodology, it is reasonable to adapt this approach for reconstructions of other non-linear physical processes.
Code and data availability
The synthetic δ15N and temperature targets, the reconstructed δ15N and temperature data (using the synthetic δ15N as fitting targets), and the used accumulation rates can be found in the data supplement of this paper available at
The supplement related to this article is available online at:
Competing interests
The authors declare that they have no conflict of interest.
Special issue statement
This article is part of the special issue “Global Challenges for our Common Future: a paleoscience perspective” – PAGES Young Scientists Meeting 2017. It is a result of the 3rd Young Scientists Meeting (YSM), Morillo de Tou, Spain, 7–9 May 2017.
Acknowledgements
This work was supported by SNF grants (SNF-159563 and SNF-172550). We would like to thank Takuro Kobashi, Philippe Kindler and Myriam Guillevic for helpful discussions about the ice-core data and model peculiarities.
Edited by: Emily Dearing Crampton-Flood
Reviewed by: three anonymous referees
© 2018. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”).