A surface debris layer significantly modifies underlying ice melt, depending on the thermal resistance of the debris cover, which is a function of debris thickness and effective thermal conductivity. Thus, these terms are required in models of sub-debris ice melt. The most commonly used method to calculate the effective thermal conductivity of supraglacial debris layers applies heat diffusion principles to a vertical array of temperature measurements through the supraglacial debris cover, combined with an estimate of the volumetric heat capacity of the debris, as presented by
1 Introduction
Debris-covered glaciers can be found in tectonically active mountain regions, such as Alaska, the European Alps, High-mountain Asia, or New Zealand, where large amounts of debris migrate into the ice via glacial and periglacial processes. Debris falling onto the ablation zone contributes directly to any surface debris load, while debris added to the glacier surface in the accumulation zone or sourced subglacially is transported englacially to the ablation area of the glacier, where it melts out and contributes additional debris load, as shown in Fig. a. In comparison to clean ice, thin or patchy debris amplifies ice melt due to its higher absorptivity of short-wave radiation, while thicker debris layers reduce ice melt due to the insulation and attenuation of the diurnal heating signal. The relationship between debris thickness and ablation rate varies for different debris layer compositions and prevailing climatological conditions but retains the same character (Fig. b). The critical debris thickness beyond which sub-debris ice ablation is inhibited compared to clean-ice ablation ranges from to mm, depending on the optical and thermal properties of the debris and the ambient climate. Therefore, in contrast to clean-ice glaciers, where the melt increases towards the glacier tongue in response to typical environmental temperature lapse rates, the spatial pattern of melt of debris-covered glaciers depends more on the debris thickness than on the elevation.
Figure 1
(a) Schematic of a debris-covered glacier with debris transport of subglacially sourced rock debris from the release area to the meltout area. The inset shows a classical thermal diffusivity measurement site, consisting of thermistors at several heights between the near surface and the debris–ice interface. (b) Measurements of the so-called Østrem curves for different glaciers show a common pattern of variation in daily melt rate versus the debris depth, with site-specific variations in maximum ablation and the associated debris thickness. Redrawn from .
Although, under certain circumstances, heat can be transferred through the debris by convection, advection, and radiation, observations
Application of this method therefore requires (1) a vertical array of temperature measurements through the supraglacial debris cover (Fig. a), for conditions in which the debris heat transfer closely approximates that of a conductive system, from which the apparent thermal diffusivity is derived and (2) an estimate of the volumetric heat capacity of the debris used to convert the apparent thermal diffusivity into effective thermal conductivity.
To meet the first requirement, a sample site must be chosen for which lateral heat transfer can reasonably be expected to be negligible, i.e. a site that is horizontally homogeneous in factors such as slope, debris type, and thickness and without evidence of any hydrological heat transfer. Then, the observed temperatures must be evaluated to find specific time periods and vertical subsets of debris temperature profile data that are identified as being “well-behaved” approximations of a conductive system; data that show evidence of non-conductive processes can be excluded from subsequent analysis. To meet the second requirement, estimates of the debris porosity and of the rock density and specific heat capacity must be made. Commonly used values for these terms are a porosity of , a rock density of , and a rock specific heat capacity of , with a error applied to the combined terms. Most studies assume that the pore spaces are air-filled when calculating the volumetric heat capacity, but, in principle, if the debris cover is known to be fully saturated, a water-filled case can be used to obtain the volumetric heat capacity of the sampled debris layer. In an ideal case, this workflow can yield a reliable estimate of effective thermal conductivity from a homogeneous, dry portion of the debris with stable meteorological forcing conditions and minimal non-conductive processes. Further use of these effective dry-debris thermal conductivity data in surface energy balance models can allow non-conductive processes and non-uniform debris layers to be included in the model structure by, for example, accounting for stratification in the debris porosity and air flow through the debris, stratification of moisture content, and associated phase changes within the debris layer.
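As an illustration of how such an apparent thermal diffusivity can be extracted in practice, the following minimal sketch (Python; the function and variable names are our own, and the workflow is one common way of implementing the heat-diffusion approach described above rather than a reproduction of the original published method) regresses the temporal derivative of the central temperature record against the centred second spatial derivative of a three-thermistor profile.

```python
import numpy as np

def apparent_diffusivity(T_top, T_mid, T_bot, dz, dt):
    """Estimate apparent thermal diffusivity (m^2 s^-1) from three equally
    spaced thermistor records by regressing dT/dt of the middle sensor
    against the centred second derivative d2T/dz2 of the vertical profile.

    T_top, T_mid, T_bot : temperature time series (deg C) at spacing dz (m)
    dt : temporal sampling interval (s)
    """
    # centred time derivative of the middle sensor
    dT_dt = (T_mid[2:] - T_mid[:-2]) / (2.0 * dt)
    # centred second derivative in space, evaluated at the middle sensor
    d2T_dz2 = (T_top[1:-1] - 2.0 * T_mid[1:-1] + T_bot[1:-1]) / dz**2
    # least-squares slope of dT/dt against d2T/dz2 is the apparent diffusivity
    kappa, _ = np.polyfit(d2T_dz2, dT_dt, 1)
    return kappa

# usage (with arrays sampled from a "well-behaved" period; names hypothetical):
# kappa = apparent_diffusivity(T10cm, T20cm, T30cm, dz=0.10, dt=300.0)
```

The regression slope is the apparent thermal diffusivity; multiplying it by the estimated volumetric heat capacity of the debris then gives the effective thermal conductivity.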
As natural debris covers often show vertical variation in porosity, grain size, and moisture content, recent studies have explored multi-layered applications of the thermal diffusion representation of the debris layer. perform multiple rather than single regression analysis to account for (i) unknown depth variation in thermal diffusivity in a two-layer model and (ii) non-conductive heat sources/sinks. They apply various methods to synthetic datasets to highlight that applying the original method of produces large errors when trying to recover a target diffusivity that varies with depth and that unequally spaced temperature measurements introduce substantial truncation errors. If unequal spacing of measurements cannot be avoided, their new Bayesian method of determining diffusivity outperforms that of . also include a term for depth-varying diffusivity in the heat conduction equation and perform multiple linear regression to solve for its variation with depth in natural debris cover, identifying non-conductive processes as the residual from a comparison of the observed and modelled time-dependent temperature evolution. They find non-negligible heat transfer related to air motion and latent heat fluxes within the debris on Kennicott Glacier. These approaches offer solutions for the potential of vertically varying debris properties and allow quantified assessment of non-conductive processes in measured field sites.
Despite these new developments, the method of has been historically widely used
This study explores the effect of measurement setup on thermal diffusivity values derived using the method of , in order to highlight the potential dependency of published values of thermal conductivity on the spatiotemporal intervals chosen for the analysis and on the sensor precision and locational accuracy. To achieve this, we apply the method of to data generated using a forward diffusivity model for a purely conductive system with a specified value of thermal diffusivity and assess how closely the known value is recovered when varying choices of instrumental and analytical setups. Since the approach recommended by is only valid for conductive systems, we focus our study on a purely conductive system to provide a baseline reference for individual method-related error sources, expanding the analysis of the impact of irregular spacings performed in to include an assessment of a wider range of field measurement choices. By isolating the individual roles of these different error sources, they can be quantified and their tendencies can be understood, thereby making possible a more critical reassessment of the extent to which differences in published effective thermal conductivity values reflect real-world differences in debris properties or instrumental and analytical choices. We provide an interactive tool (
3 Methods
3.1 Artificial data for benchmarking-derived thermal diffusivity
To test the method of for different scenarios, we generate synthetic data for debris cover thicknesses of and cm and thermal diffusivity values of and to represent a range of values obtained from previous field studies from glaciers across the globe. The interactive tool allows users to perform analyses for any alternative choice of debris thickness and diffusivity. To generate data for a perfectly conductive system, we force the heat equation with five 10 d (days) surface temperature time series (Fig. ) and a 0 °C boundary condition for the debris–ice interface. The first 2 d of temperature-forcing data are used to initialize the model, and the different debris layer thicknesses are represented by varying the number of vertical grid points in the domain while maintaining equidistant spacing.
Figure 2
Characteristics of the surface temperature forcing for the artificial data generation, which consists of 10 d time series of two analytical sine curves and three experimental temperature measurements within the debris layer. The sine curves have an average temperature of 7.5 and the same amplitude. Surface forcing from field data is derived from the uppermost thermistor, which lies 1–5 cm below the surface, as indicated in brackets. Field data 1 and 3 were recorded at Lirung Glacier (Nepal) during September 2013 ( cm below surface) and April 2014 ( cm below surface) respectively and were provided by . Field data 2 was recorded at Vernagtferner (Austria) during June 2010 ( cm below surface) and was provided by . The colour scheme of these forcings is used in subsequent figures.
We use the Crank–Nicolson method to solve the heat conduction equation for this set of given constraints. This implicit finite-difference method is second-order convergent in time and numerically stable. The method is based on the trapezoidal rule and is a combination of the Euler forward and backward methods in time. For the thermal heat equation, it results in the following equations:

$$\frac{T_i^{n+1}-T_i^{n}}{\Delta t}=\kappa\,\frac{T_{i+1}^{n}-2T_i^{n}+T_{i-1}^{n}}{\Delta z^{2}},$$

$$\frac{T_i^{n+1}-T_i^{n}}{\Delta t}=\kappa\,\frac{T_{i+1}^{n+1}-2T_i^{n+1}+T_{i-1}^{n+1}}{\Delta z^{2}}.$$

Combining these results in the Crank–Nicolson scheme,

$$\frac{T_i^{n+1}-T_i^{n}}{\Delta t}=\frac{\kappa}{2}\left(\frac{T_{i+1}^{n}-2T_i^{n}+T_{i-1}^{n}}{\Delta z^{2}}+\frac{T_{i+1}^{n+1}-2T_i^{n+1}+T_{i-1}^{n+1}}{\Delta z^{2}}\right). \qquad (5)$$
Because of the implicit nature of the Crank–Nicolson scheme, an algebraic system of equations must be solved (or the equations linearized) to advance to the next time step. In our case, we can use the boundary conditions $T(z=0,t)=T_{\mathrm{s}}(t)$ and $T(z=H,t)=0\,^{\circ}\mathrm{C}$, where $T_{\mathrm{s}}(t)$ represents the arbitrary surface temperature-forcing function (Fig. ). Although the method is unconditionally numerically stable for the heat equation, unwanted spurious oscillations can occur if the time steps are too long or the spatial resolution is too small. To avoid this, we use the following stability criterion: 6
With this criterion (Eq. ) met for both tested values of thermal diffusivity and all five forcing datasets (Fig. ), the simulated temperatures are produced at 5 min and 2 cm resolution with floating-point precision. The resulting generated data (e.g. Fig. ) provide an ideal reference from which temperatures can be sampled in space and time to replicate field measurements from “well-behaved” portions of vertical temperature profiles within supraglacial debris, meaning subsets of the data that can be shown to closely approximate a conductive system.
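For reference, a minimal sketch of a Crank–Nicolson solver of this kind is given below; the grid sizes, the sinusoidal surface forcing, and the diffusivity value are illustrative assumptions and not necessarily the exact configuration used to generate our artificial data.

```python
import numpy as np

def crank_nicolson_debris(T_surf, H=1.0, dz=0.02, dt=300.0, kappa=5e-7, T_ice=0.0):
    """Solve the 1-D heat equation through a debris layer of thickness H (m)
    with Dirichlet boundaries: T_surf(t) at the surface and T_ice at the
    debris-ice interface. Returns depths z and temperatures T (time x depth)."""
    z = np.arange(0.0, H + dz / 2, dz)
    n = len(z)
    r = kappa * dt / dz**2          # mesh ratio; keep modest to avoid oscillations
    m = n - 2                       # number of interior nodes
    A = (np.diag((1 + r) * np.ones(m))
         + np.diag(-r / 2 * np.ones(m - 1), 1)
         + np.diag(-r / 2 * np.ones(m - 1), -1))
    B = (np.diag((1 - r) * np.ones(m))
         + np.diag(r / 2 * np.ones(m - 1), 1)
         + np.diag(r / 2 * np.ones(m - 1), -1))
    T = np.empty((len(T_surf), n))
    T[0] = np.linspace(T_surf[0], T_ice, n)     # linear initial profile
    for k in range(1, len(T_surf)):
        rhs = B @ T[k - 1, 1:-1]
        # boundary contributions from both time levels
        rhs[0] += r / 2 * (T_surf[k - 1] + T_surf[k])
        rhs[-1] += r * T_ice
        T[k, 1:-1] = np.linalg.solve(A, rhs)
        T[k, 0], T[k, -1] = T_surf[k], T_ice
    return z, T

# example: 10 d sinusoidal surface forcing at 5 min steps (illustrative values)
t = np.arange(0.0, 10 * 86400, 300.0)
T_surf = 7.5 + 7.5 * np.sin(2 * np.pi * t / 86400.0)
z, T = crank_nicolson_debris(T_surf)
```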
Figure 3
Some 5 d examples of the artificially generated debris layer temperature time series data for the skewed sine forcing (a) and the field data 3 forcing (b) for a cm debris layer with of using the Crank–Nicolson scheme. (c, d) Daily averaged debris layer temperature profile for the full 10 d time series of the boundary conditions in the upper panels, showing that the often-used steady-state assumption of the daily mean debris layer temperature, shown by a linear temperature gradient, is only fulfilled for periodic daily temperature forcings.
3.2 Experiments performed
We apply the method of deriving apparent thermal diffusivity for a selected range of analytical setups as described in the following subsections. When calculating diffusivity from data resampled from the synthetic cases, we calculate a single diffusivity value for the last 8 d of each forcing dataset, although the interactive tool also offers the option to calculate at a daily scale for assessment of field datasets. The calculation of the centred spatial derivatives is suitable for unequal grid spacing, but we do not include analysis of unequal vertical thermistor spacings in this study, as this was presented in a previous study. The properties of the analytical setup that are varied are the vertical sensor spacing, the temporal sampling interval, the precision of the temperature data, and added Gaussian noise to assess statistical uncertainty. The performance of each experiment at recovering the known diffusivity prescribed in the artificial data is assessed by calculating the relative error:
$$E_{\mathrm{rel}} = \frac{\kappa_{\mathrm{target}} - \kappa_{\mathrm{derived}}}{\kappa_{\mathrm{target}}}. \qquad (7)$$
Positive relative error values thus correspond to an underestimation of the diffusivity compared to the known value. As the effects of individual potential sources of error are contingent on other properties of the experimental setup, we present illustrative examples of the error tendencies and their co-dependencies over a range of properties. The full potential parameter space can be explored in the interactive tool. Firstly, the synthetic data are resampled without any added sensor or installation uncertainty to examine the behaviour of numerical truncation errors. Subsequently, the errors associated with sensor and installation uncertainty are presented.
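A small helper mirroring the relative error of Eq. (7) might look as follows (sketch; argument names are illustrative):

```python
def relative_error(kappa_target, kappa_derived):
    """Relative error as in Eq. (7): positive values indicate that the
    derived diffusivity underestimates the known target value."""
    return (kappa_target - kappa_derived) / kappa_target

# example: a derived value 10 % below the target gives a relative error of +0.1
assert abs(relative_error(1.0e-6, 0.9e-6) - 0.1) < 1e-12
```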
3.2.1 Quantifying truncation errors in space and time
In theory, the numerical solution to the diffusion problem should be equal to the analytical solution for infinitesimally small spatial and temporal sampling intervals. Truncation errors are expected to scale with the temporal and spatial increment of the analysis with respect to the diurnal forcing cycle. Higher-order approximations would reduce the truncation error, but errors due to measurement uncertainties would dominate, as described by .
8
For , the equations are not solvable.
For the temporal truncation error, we resample the artificial data both by skipping and by averaging over an increasing temporal sampling interval (Fig. ), from 5 min (the native resolution of the artificial data) to 6 h intervals, to encompass the highest- and lowest-resolution temporal sampling of published field data (Appendix ). When skipping, we select every nth value and omit the rest. When averaging, we take the mean temperature over n values. While most studies store samples of the thermistor data at fixed intervals, we include an assessment of this averaging approach, as some published field data collection campaigns are based on measurements of temperatures averaged over
Figure 4
Illustrating the two different temporal resampling methods by displaying the temporal grid for different sampling intervals. We compare resampling by skipping every nth grid point (a, blue background) with averaging over n grid points (b, orange background).
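As a concrete illustration of the two resampling strategies, the short sketch below (assuming a NumPy array sampled at the native 5 min interval; names are illustrative) subsamples by skipping versus block averaging over n native steps.

```python
import numpy as np

def resample_skip(T, n):
    """Keep every n-th sample and discard the rest."""
    return T[::n]

def resample_average(T, n):
    """Average consecutive blocks of n samples (trailing remainder dropped)."""
    m = (len(T) // n) * n
    return T[:m].reshape(-1, n).mean(axis=1)

T5min = np.random.rand(2304)                  # e.g. 8 d of 5 min data (hypothetical)
T_hourly_skip = resample_skip(T5min, 12)      # 1 h interval by skipping
T_hourly_avg = resample_average(T5min, 12)    # 1 h interval by averaging
```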
3.2.2 Quantifying sensor and installation errors
Thermistors used to record supraglacial debris temperature profiles over time have varying manufacturer-stipulated sensor precision, and there may be uncertainty around their exact location in the debris cover, as this can be challenging to measure with a high degree of accuracy in the field and it can change if the debris moves.
To simulate the effect of temperature measurement precision, we discretize the temperature data to correspond with measurement precisions of to , which are representative of the precision of thermistors typically used in the field. The error properties of these differing sensor precisions are examined for a range of spatiotemporal resampling, in which we ensure symmetrical resampling of the vertical interval by resampling from the centre of the debris layer outwards. Because the observed temperature changes and gradients are smaller at depth, it is expected that a higher precision of temperature measurement is required to capture them. Therefore, we also examine how the relative error due to sensor precision varies with the depth in the debris layer at which the analysis is performed. For this, we also consider the potential gain from even higher-precision sensors by including a temperature discretization, although this is more precise than any of the thermistor properties reported in the literature.
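Degrading a temperature record to a given sensor precision can be emulated by rounding to the nearest discretization step, for example (sketch; the 0.1 K step is an illustrative value):

```python
import numpy as np

def discretize(T, step=0.1):
    """Round temperatures (deg C) to the nearest sensor discretization step (K)."""
    return np.round(T / step) * step

# example: a float-precision record reduced to 0.1 K resolution
T_coarse = discretize(np.array([7.512, 7.548, 7.603]), step=0.1)
```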
To simulate cases where either the vertical location of the temperature measurement is inaccurate or the thermistor is displaced vertically over time, we use the sampled temperatures at float precision and add a time-invariant vertical offset to each temperature measurement position. Each offset value is randomly sampled from a Gaussian distribution with a standard deviation of cm around the true vertical measurement position to represent an inaccurate field measurement of the vertical position. If thermistors move within the debris due to settling or debris migration, the positional inaccuracy could even be larger, but this would likely be discernible from evidence of debris movement or identified when the thermistors were removed from the debris layer, allowing affected data to be excluded from further analysis. For both analyses of the effects of sensor precision and location accuracy, we present only the idealized sinusoidal-forcing data to best isolate the systematic error patterns and how they co-vary with the truncation errors established by the first analysis steps (Sect. ).
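The positional inaccuracy can likewise be emulated by adding a time-invariant Gaussian offset to the nominal sensor depths used in the analysis; a minimal sketch under these assumptions (the depths and the standard deviation are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def perturb_positions(z_nominal, sigma=0.01):
    """Add a time-invariant Gaussian offset (m) to each nominal sensor depth."""
    return z_nominal + rng.normal(0.0, sigma, size=len(z_nominal))

z_true = np.array([0.10, 0.20, 0.30])   # nominal thermistor depths (m)
z_used = perturb_positions(z_true)      # depths assumed in the subsequent analysis
```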
3.3 Statistical uncertainty estimation
The method of is only valid in well-behaved conductive systems; therefore, the aim is to apply the method only to time periods and vertical sections where this assumption is largely fulfilled. Accordingly, our error analysis so far assumes the debris to be a purely conductive, vertically and horizontally homogeneous system, whereas, in nature, the debris cover will not be perfectly homogeneous and some non-conductive processes are expected to contribute to the temperature data even in “well-behaved” sections.
To show that the model-related error sources studied remain relevant despite additional external error sources, we add random statistical noise to the data time series on which we perform our analysis. For this, we use the pure sine curve forcing for a cm thick debris layer, with a vertical spacing of cm and a temporal resolution of min for a thermal diffusivity of . Subsequently, each individual float-precision temperature value of the generated temperature time series is modified by a value randomly sampled from a Gaussian distribution with a mean value of and a standard deviation of . This procedure is repeated times to generate a small ensemble of individually perturbed temperature time series. The introduction of this statistical noise of does not account for any specific physical processes, since non-conductive processes and effects due to spatial inhomogeneity would produce systematic temperature shifts on multi-hourly to seasonal timescales, as observed in some field datasets. This value is rather selected to statistically perturb the model system and simulate the effect of additional errors. By increasing or decreasing the selected value, the effect of the perturbation is respectively amplified or attenuated, but the general impact remains the same. The data are analysed as in the previous sections by varying the temporal sampling interval and the vertical position in the debris layer for three selected vertical grid spacings (, , cm) to capture the co-dependencies of the error properties with these measurement choices. The temporal resampling is performed by skipping in order to preserve and illustrate the effects of the maximum temperature perturbations; when resampling by averaging, the perturbed values would average out over longer temporal averaging periods. For each parameter combination, the mean of the derived diffusivity is calculated from the ensemble, along with its standard deviation, to display the value spread.
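The perturbed ensemble can be built by adding independent Gaussian noise to every temperature sample and repeating the diffusivity analysis per member; a minimal sketch under these assumptions (the ensemble size and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def noisy_ensemble(T, sigma=0.1, n_members=20):
    """Return n_members copies of T (any shape), each perturbed by independent
    Gaussian noise with standard deviation sigma (K)."""
    return T[None, ...] + rng.normal(0.0, sigma, size=(n_members,) + T.shape)

# diffusivity estimates would then be computed per member and summarized
# as an ensemble mean with its standard deviation
```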
4 Results
While the interactive tool provided allows a full range of sampling strategies to be explored, here we present results for selected cases within the range of realistic instrumental setups. Our focus is to provide illustrative examples that characterize the error properties of each individual source.
4.1 Error due to temporal truncation
We illustrate the behaviour of the temporal truncation error calculated for a cm thick debris layer with a thermal diffusivity of for sampling intervals up to 6 h for both skipping and averaging resampling methods. As few field studies use a sensor spacing as small as the cm resolution of our artificial data, we show an example with a spacing of cm to better represent field observations. We show the behaviour at two depths within the debris layer to illustrate the depth dependency of the error behaviour.
The relative error due to temporal truncation shows a general pattern of monotonic increase with increasing sampling interval for the skipping method (Fig. a and b). Consistently positive relative errors indicate that increasing the temporal sampling interval systematically underestimates the diffusivity. At shallow depths, the less sinusoidal the temperature forcing is, the larger the error at all sampling intervals (illustrated by the cm depth cases shown in Fig. a and c). At greater depths, the error for the sinusoidal forcing remains similar to that in the near surface, while the noisy surface diurnal signals are smoothed at depth and the associated error tends to be more similar to that of the sinusoidal surface forcing (illustrated by the cm depth cases shown in Fig. b and d). When data are resampled by averaging, the temporal truncation error is very similar for the sine curve, but, for the noisy-field-forcing data, averaging reduces the error compared to the skipping resampling method (Fig. c and d). These patterns of error behaviour are also seen for a diffusivity of .
Figure 5
Relative temporal truncation error of recovering using different temporal sampling intervals: comparison of different temperature forcings for skipping (a, b: blue boundary) and averaging (c, d: orange boundary) resampling methods for two different depths in the m debris layer with a target of .
Considering the maximum relative error produced by typical field installations, we can take the case of calculating diffusivity at a point as close to the surface as is reasonably possible, at cm, requiring a thermistor spacing of cm, combined with the longer typical time sampling interval of 1 h and calculating over a period with noisy surface forcing. This combination yields a maximum temporal truncation relative error of . To minimize the error from a truncation perspective, as short a temporal sampling interval as possible is desirable, and selecting days with surface temperature forcing that is closer to sinusoidal will decrease errors that may otherwise be significant at shallow depths.
4.2 Error due to spatial truncation
We illustrate the behaviour of the spatial truncation error calculated for a 100 cm thick debris layer with thermal diffusivities of and , for sensor spacings up to cm, using a sample of the five surface-forcing datasets at a temporal resolution of 5 min.
Spatial truncation error values (Fig. ) remain quasi-constant for small sensor spacings, up until the centred differencing scheme spans more than cm, and thereafter increase rapidly with increasing spacing. The spatial truncation error is relatively insensitive to the different surface temperature forcings and, in contrast to the temporal truncation error, does not vary markedly with debris depth. Instead, the diffusivity imposes a strong influence, with higher diffusivities having smaller errors, shifting the respective curves to the right, as shown for the case of the sinusoidal forcing in Fig. . Given that the diffusivity is the target of sensor installations, this parameter cannot be known in advance, and the results suggest that a sensor spacing below 14 cm is desirable to minimize spatial truncation errors across the range of potential diffusivities. The consistently positive error values mean that the spatial source of truncation error also has the tendency to systematically underestimate the diffusivity, increasingly so with more widely spaced temperature measurements.
Figure 6
Comparison of the spatial truncation error for two different values and forcing types, calculated for the central position in a m debris layer for symmetrically increasing . For clarity, we show only one curve for the higher diffusivity value, as all curves are shifted similarly when varying the target . The forcing datasets are at float precision with of 5 min.
4.3 Error due to thermistor precision
To illustrate the role of temperature sensor precision, we firstly focus on the range of sensor spacings that are not affected by the spatial truncation error, i.e. spacings up to cm (Fig. ), and show the relative error for temporal sampling intervals ranging from 5 min to several hours. The error due to temperature discretization is generally less pronounced for smaller temperature discretizations, representing greater thermistor precision. Maximum errors occur for small sensor spacings, decreasing to stable relative errors of % for spacings of cm, above which the error also decreases systematically with decreasing temporal sampling interval. Values of the sensor spacing that lie between the regime dominated by the spatial truncation error (Sect. ) and that dominated by the error due to sensor precision are desirable, so between ca. and cm for the representative parameter space explored in our analyses.
Figure 7
Relative error of estimated due to thermistor temperature discretization of and for vertical sampling intervals up to m and resampled by skipping (a, b) and averaging (c, d) for the intervals shown in the legend, such that the 5 min dataset is identical for both methods. The case presented is a centred sampling of a m thick layer with target of forced with a sinusoidal surface forcing.
The depth dependency of the error associated with discretization indicates the importance of high-precision sensors for sampling the debris at depth (Fig. ). For a sensor spacing of cm, only measurements with a maximum thermistor uncertainty of would produce correct values, and then only for the first cm of debris. Increasing the spacing to cm, the relative error decreases for all curves. Still, the thermistors used in most field experiments, which have reported precisions ranging from to , would not produce correct values at depth. For the case shown, it would become difficult to obtain reliable values at depths beyond cm even with high-precision thermistors. The error behaviour depends on capturing temperature gradients sufficiently well, so the specific error limits depend on the amplitude of the surface-forcing fluctuations, the diffusivity, and the chosen discretization and spatiotemporal sampling. For a given discretization, meaningful values can be obtained at greater depth by enlarging the sensor spacing, but higher-precision sensors are always an advantage. As for both types of truncation error, the sensor precision error systematically underestimates the target diffusivity.
Figure 8
Relative error of estimated due to thermistor discretization by depth for a m debris layer, with 5 min sinusoidal surface forcing for of (a) cm and (b) cm, with the different coloured lines corresponding to different values of temperature discretization.
4.4 Error due to vertical thermistor position inaccuracy
report that a vertical error of cm would result in a marginal temperature difference of and for their measurement setups. They and others
Our analysis, however, shows that low-accuracy knowledge of the temperature measurement location could produce a systematic error for smaller sensor spacings. For example, in the relatively rare case that sensors are installed with a spacing of 2 cm, the resultant error on calculated values of effective thermal diffusivity is so large that the data would become unusable. With increasing spacing, the relative error decreases, such that the mean over the depth of the layer recovers the target value. This error source is the only one in this study that has the potential to increase derived values, as shown in Fig. by the spread of values above the known reference value.
4.5 Statistical uncertainty estimation
In contrast to the noise-free case shown in Fig. , with the addition of statistical noise, the relative temporal truncation error now increases with depth, and the predominantly low relative errors of the sub-hourly range can now only be recovered in the near-surface portion of the debris layer (Fig. 10). The standard deviation of the error curves nearer the surface is less than a few percent of the relative error, showing a minimal ensemble spread, while, at depth, the ensemble spread is larger. From this, we can see that, where the random noise introduced is large compared to the spatiotemporal temperature gradients, as is the case at greater depths in the debris layer, the method is essentially no longer applicable. Increasing the grid spacing decreases the relative error found at depth but has little impact on the smaller errors nearer the surface. At larger grid spacings, even the near-surface values now have a non-zero relative error for short sampling intervals; this is due to the spatial truncation error of the vertical sampling interval, as displayed in Fig. , coming into play, while, at greater depth in the debris, the larger spacing decreases the relative error, although this remains large, with standard deviation values of . In our example, the combination with the most precise recovery of the target diffusivity, with relative error approaching zero, was for a spacing of cm at a depth of cm and a temporal sampling interval of min. For this combination, the relative error due to temporal truncation increases to and at sampling intervals of and min respectively.
Displaying the noise-induced relative error of the derived diffusivity more explicitly in relation to the depth in the debris over the span of the shared calculation range (0.18–0.82 m) highlights that there are characteristic transition zones between where the method is still applicable and where it is not, and, as these scale with the relative magnitude of the noise, the transition location is dependent on the grid spacing used in the analysis (Fig. ), along with the amplitude of the surface forcing, the diffusivity, and the temperature discretization. For example, for the uppermost section of the artificial debris layer, all curves with and min sampling intervals provide relative errors below , while, in the data combination we show, the transition to relative error for these sampling intervals is , , and m depth for vertical grid spacings of , , and cm respectively. Therefore, as is the case for the depth dependence of temperature discretization (Fig. ), increasing the grid spacing increases the depth at which meaningful values can be recovered when noise is present. However, increasing the grid spacing also results in a truncation error, which is visible in Fig. c as a vertical displacement in relative error values additional to the displacement caused by the temporal truncation error.
5 Discussion
In previously published data, most apparent thermal diffusivity values derived using the method of are below , typically ranging from to , with some outlier values exceeding
While the interactive tool accompanying our analysis allows a wider range of the parameter space to be explored, the cases we present were chosen to characterize the main numerical error sources inherent in the method within the parameter space of published values (Appendix ). The numerical and measurement implementation error sources investigated here all tend to systematically underestimate , while the relative error associated with uneven thermistor spacing
Figure 9
Illustrating the influence of thermistor displacement on the estimated diffusivity by randomly displacing thermistor locations according to a normal distribution with a standard deviation of cm over a range of vertical spacing intervals. The true/target thermal diffusivity is shown by the horizontal black line, showing that, for small vertical sampling intervals, sensor displacement results in large inaccuracies in the derived values.
In general, the numerical errors associated with applying this method are all related to how well the temperature gradients in space and time within the debris cover can be captured by the instrumental setup. Temporal truncation errors in the absence of statistical noise are typically in most expected deployment settings at sampling intervals of min. Near-surface measurements suffer more error because the diurnal temperature cycle at the surface is the most non-sinusoidal and therefore produces larger temporal truncation errors. Consequently, conditions that more closely approximate sinusoidal conditions (i.e. clear-sky stable atmospheric conditions) reduce the errors in the near-surface layers, but this becomes less relevant at depth, as surface noise introduced by weather is progressively smoothed out deeper in the debris. Spatial truncation due to the choice of thermistor spacing is not very sensitive to the non-sinusoidal forcing but becomes significant at spacings above cm for the range of diffusivities reported in the literature, and the error is larger for smaller diffusivities. The range at which errors are small and similar regardless of the forcing and diffusivity is cm, providing a conservative upper bound to limit spatial truncation errors.

Even though a very small temporal or spatial sampling interval would produce a minimal truncation error, sampling intervals that are too small can also produce erroneous results because, for very small intervals, the linear regression coefficient of determination decreases strongly. In practice, this is not a problem for the temporal sampling, since short temporal sampling intervals can always be resampled afterwards. A more significant problem occurs if low-precision thermistors are positioned too close to each other, especially if the profile comprises only a few thermistors, making it impossible to spatially resample the temperature data. While this effect diminishes to a stable value of relative error for spacings above cm, with increasing depth, the thermistors must be placed further apart, as otherwise the thermistor measurement uncertainty dominates the measurement. Therefore, although the highest-precision thermistors should always be chosen if possible, using thermistors with maximum precision becomes even more important at greater depths in the debris layer.

The only error source investigated here that has the potential to overestimate the diffusivity is that due to an inaccurately known temperature measurement location. This can happen due to poorly measured positions or due to debris settling after sensor installation if the thermistor profiles are installed on a slope that is subject to gradual gravitational sliding or reworking. In contrast to , who showed that a constant error in the thermistor position was not important to the analysis, we find that, at least for very small sensor spacings, the calculated diffusivity does depend on the thermistor positions relative to each other being correctly known and sustained over the measurement period. However, thermistors are typically placed more than a few centimetres apart, and this error source might be expected to have little effect if the diffusivity is calculated at several levels in the debris cover, as the mean value of the location-perturbed cases recovers the target diffusivity. Introducing a statistical noise term highlights the manner in which noise degrades the temperature gradients that the method relies on, particularly at greater depths in the debris, where temperature variations in space and time are small compared to the introduced noise term.
Thus, care must also be taken to assess if the method is being applied to portions of the debris layer where the gradients are well captured.
Figure 10
Relative errors of thermal diffusivity of statistically perturbed ensemble data for a m debris layer varied by temporal sampling interval for three different depths in the debris layer and three different values (, , cm). The ensemble consists of cases, with each individual temperature value being perturbed by a Gaussian distribution with a standard deviation of . The solid line is the mean relative error value, and the shaded background represents the standard deviation of the relative error.
Figure 11
Relative errors of thermal diffusivity of statistically perturbed ensemble data for a m debris layer varied by the depth in the debris layer for three different sampling intervals and vertical grid spacings (, , cm). The ensemble consists of 20 individual runs of 8 d each, with each individual temperature value being perturbed by a Gaussian distribution with a standard deviation of . The solid line is the mean relative error value, and the shaded background represents the standard deviation of the relative error.
In the best-practice guidelines (Appendix ), we address all sources of methodological error discussed in this paper, suggesting optimal implementation strategies for future field studies that wish to deploy these methods of analysing representative thermal conductivity of natural debris layers following the method of . Our recommendations differ somewhat from those of , as the purpose is different. While sought to determine the optimal method for estimating sub-debris ablation rates directly from temperature sensors using a minimal number of thermistors, we seek to understand the best way to determine a representative thermal diffusivity from which effective thermal conductivity suitable for onward use in generalized surface energy balance models can be derived. For their purpose, they propose to “set the sensor spacing to be one-fifth of the debris thickness at the location”; however, the non-linear nature of the individual error sources presented in this paper indicates that we cannot generalize such statements if the goal is parameter determination rather than direct ablation determination. Furthermore, they state that “the top sensor should be placed approximately at the middle of the debris layer”, as this captures the relevant flux being delivered to the underlying ice. Our analysis indicates that, while it is true that thermistors too close to the surface produce large truncation errors, the same is true for thermistors that are too deep, as the temperature gradient there is too small relative to the thermistor precision. By providing an open-source interactive tool that can be used to explore all the methodological sources of error in implementing the most widely used method of determining thermal diffusivity, we offer a ready-to-use means of determining the field setup that minimizes these numerical methodological errors. The intention is that, prior to a new field deployment, the error response of the expected conditions of debris thickness, surface-forcing amplitude, sensor number, and precision can be explored and the best possible field deployment of sensors can be made.
In addition to the errors related to measurement setup and analysis procedure investigated in this study, non-conductive processes within the debris layer (e.g. rain, phase changes) can also be present . Unfortunately, it is not always clear in the published literature that the thermal diffusivities and associated thermal conductivity values were derived from optimal conditions sampled within the dataset. The suitability of the sampled debris temperature profiles for determining debris thermal parameters must be carefully evaluated on a case-by-case basis, using meteorological data and closely evaluating the measurements and their gradient functions in order to establish that the data subset represents predominantly conductive conditions, before applying the method of . Once a suitable effective thermal conductivity is established based on “well-behaved” conditions, these base values can be modified for implementation within a surface energy balance model to account for changes in the pore fluid type to allow simulation of varying wet-/dry-debris conditions .
The recently published database of supraglacial debris properties, DebDab v1 , reveals that, from the values of debris thermal conductivity, only report an associated uncertainty, and, while include the debris layer thickness, only report on details such as the thermistor depth. To facilitate the intercomparison of these data, it would be valuable to include the temporal sampling used, along with the rock properties and porosity used to convert to thermal conductivity. Deeper consideration and potential common reanalyses of these data would require the original thermistor data to be publicly available, which is not always the case. Reanalysing previously published vertical temperature profiles with common resampling strategies, based on the findings of this study, would facilitate intercomparison of values, while reanalysis using the methods of and/or might yield more robust and representative global values by providing respectively a more rigorous assessment of non-conductive processes and inclusion of multi-layered thermal properties within the natural debris layers that have been sampled.
6 Conclusion
provide a practical method to estimate thermal diffusivity values from a vertical array of thermistors in the supraglacial debris layer, which is applicable for spatially homogeneous debris that behaves as a close approximation to a purely conductive system. Although this method has become the standard method for determining effective thermal conductivity to be used in surface energy balance models of sub-debris ice ablation
To address this, we provide an open-source tool (
In this paper, we used this tool to provide illustrative examples of the magnitude and tendencies of the systematic errors associated with individual instrumental and analytical choices. Based upon our findings, we provide a set of best-practice guidelines (Appendix ) to minimize systematic errors in applying the method of . While recent publications highlight limitations of the simplest deployment of the heat diffusion equation in natural debris layers due to the role of non-conductive processes and internal debris stratification , our analysis and best-practice guidelines show the sampling strategies that will yield the best results, provided that the temperatures underpinning the analyses demonstrably sample conditions that closely approximate a homogeneous conductive system. Our analysis also highlights that it is challenging to interpret derived debris thermal properties if the sensor and the analysis system are not reported and accounted for. In the light of this, we encourage more rigorous reporting of implementation strategies and uncertainty in order to facilitate cross-comparison of reported results.
Appendix A Best-practice guidelines
Our analysis leads us to the following best-practice guidelines to help other researchers to get as much as possible out of their measurements.
Thermistor precision
Use a temperature sensor with the highest possible precision; the temperature discretization should not exceed 0.1 K.
Debris layer thickness
To determine a representative thermal diffusivity from which robust, generally applicable thermal conductivity values can be derived, sampling a debris thickness of at least cm, but ideally deeper (e.g. cm), is advised. The maximum depth that can be meaningfully sampled is limited by the thermistor precision and the temperature gradients in the debris layer, which can be simulated beforehand using the tool provided.
Number of thermistors
The method requires at least three thermistors, but more thermistors make it possible to calculate diffusivity values for different depths and therefore make it possible to identify non-conductive processes or other inconsistencies within the debris layer. With only three temperature sensors, it is difficult to assess if the sampled debris meets the requirement of closely approximating a conductive system. A second redundant set of thermistors can also be helpful to rule out measurement errors.
Thermistor installation
A site should be chosen that is not expected to be subject to gravitational reworking or sliding of the debris and where lateral heat fluxes are expected to be minimal. Thermistors should be placed at equal vertical intervals of to cm. Even though the uppermost layer often does not produce ideal results, it can be helpful to place a thermistor at or near the debris surface to provide surface-forcing data. Depending on the thermal diffusivity and the temperature gradient of the debris layer, the method produces larger errors at greater depth, limiting the depth at which it makes sense to place thermistors. The sweet spot can be determined by simulating the debris layer of interest beforehand with model parameters from previous measurements or other estimates.
Thermistor recovery
Thermistors have to be carefully extracted, and their vertical positions recorded, at the end of the measurement period to make sure that they have not moved in the debris while deployed. In cases where the thermistors have moved, it might be necessary to discard the dataset. Therefore, mounting the thermistors on a thermally insulating rod or set of rods so that their positions are fixed is a valuable approach to eliminate this potential error source.
Temporal sampling interval
One should sample with a temporal resolution as short as possible and then average to a 5 min interval. Over such a short period, the temperature can be assumed to be nearly constant, so the averaging does not reduce the gradients, while the effect of the sensor temperature discretization is reduced.
Measurement duration and conditions
The measurement duration and conditions depend on the scientific objective and seasonality, but at least 1 week of suitable, stable meteorological conditions is needed. If conditions are unfavourable, a measurement duration of several months could therefore be necessary. A shorter period of predominantly sinusoidal surface temperature forcing, with evidence that non-conductive processes are minimal, is the best way to obtain robust values, so avoiding periods of precipitation, seasonal change, and phase change is advised.
Appendix B Field measurement overview
Table B1Overview table of thermal diffusivity field measurement sites. DT: debris layer thickness; SR: sampling rate; AC: thermistor accuracy; TM: number of thermistors. Data from , , , , , and .
| Site | Glacier | Year | DT | SR | AC | TM | Thermistor | Start | Days |
|---|---|---|---|---|---|---|---|---|---|
| ID | | | (m) | (min) | () | (#) | (m) | date | (#) |
| KH1a | Khumbu | 2014 | 2.8 | 30 | | 8 | 0.1, 0.25, 0.4, 0.55, 0.7, 0.8, 0.9, 1.0 | 2014-05-10 | 188 |
| KH1b | Khumbu | 2015 | 2.8 | 30 | | 8 | 0.1, 0.25, 0.4, 0.55, 0.7, 0.8, 0.9, 1.0 | 2014-11-21 | 328 |
| KH2a | Khumbu | 2014 | 0.7 | 30 | | 8 | 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7 | 2014-05-13 | 184 |
| KH2b | Khumbu | 2015 | 0.8 | 30 | | 9 | 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 | 2015-10-20 | 338 |
| KH4 | Khumbu | 2014 | 0.3 | 30 | | 4 | 0.02, 0.11, 0.22, 0.3 | 2014-05-20 | 180 |
| KH5 | Khumbu | 2015 | 0.7 | 30 | | 8 | 0.0, 0.0, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7 | 2015-10-20 | 205 |
| CN1 | Changri Nup | 2016 | 0.1 | 30 | | 3 | 0.01, 0.05, 0.1 | 2015-11-28 | 450 |
| CN2 | Changri Nup | 2016 | 0.08 | 30 | | 2 | 0.01, 0.08 | 2015-11-28 | 450 |
| CNW1 | Changri N. (W.) | 2010 | 0.1 | 30 | | 4 | 0.025, 0.05, 0.075, 0.1 | 2010-10-31 | 698 |
| CNW2 | Changri N. (W.) | 2012 | 0.125 | 30 | | 4 | 0.05, 0.075, 0.1, 0.125 | 2012-12-05 | 723 |
| CNW3a | Changri N. (W.) | 2014 | 0.21 | 30 | | 3 | 0.01, 0.16, 0.21 | 2014-11-30 | 309 |
| CNW3b | Changri N. (W.) | 2015 | 0.26 | 30 | | 3 | 0.02, 0.2, 0.26 | 2015-11-27 | 33 |
| CNW3c | Changri N. (W.) | 2017 | 0.1 | 30 | | 3 | 0.01, 0.05, 0.1 | 2017-11-26 | 347 |
| CNW3d | Changri N. (W.) | 2018 | 0.14 | 30 | | 3 | 0.02, 0.1, 0.14 | 2018-11-11 | 379 |
| NG1 | Ngozumpa | 2002 | 2.2 | 30 | | 6 | 0.0, 0.22, 0.33, 0.45, 0.65, 0.77 | 2001-11-13 | 323 |
| NG2 | Ngozumpa | 2015 | 2.0 | 360 | | 11 | 0.01, 0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8, 2 | 2014-12-06 | 484 |
| IM4 | Imja-Lhotse S. | 2014 | 1.6 | 30 | | 5 | 0.0, 0.1, 0.2, 0.4, 0.83 | 2014-05-31 | 162 |
| IM11 | Imja-Lhotse S. | 2014 | 0.45 | 30 | | 5 | 0.0, 0.05, 0.1, 0.2, 0.36 | 2014-05-31 | 162 |
| IM13 | Imja-Lhotse S. | 2014 | 0.33 | 30 | | 4 | 0.0, 0.05, 0.1, 0.2 | 2014-05-31 | 162 |
| IM14 | Imja-Lhotse S. | 2014 | 0.26 | 30 | | 3 | 0.0, 0.05, 0.24 | 2014-05-31 | 162 |
| ILS1 | Imja-Lhotse S. | 2013 | 0.3 | 30 | | 6 | 0.0, 0.05, 0.1, 0.15, 0.2, 0.3 | 2013-09-14 | 11 |
| ILS2 | Imja-Lhotse S. | 2013 | 0.47 | 30 | | 7 | 0.0, 0.05, 0.1, 0.15, 0.2, 0.3, 0.47 | 2013-09-14 | 11 |
| ILS3 | Imja-Lhotse S. | 2013 | 0.36 | 30 | | 6 | 0.0, 0.05, 0.1, 0.15, 0.2, 0.36 | 2013-09-14 | 11 |
| ILS4 | Imja-Lhotse S. | 2013 | 0.4 | 30 | | 7 | 0.0, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4 | 2013-09-14 | 11 |
| LG1a | Lirung | 2013 | 0.38 | 5 | | 3 | 0.1, 0.2, 0.3 | 2013-09-24 | 9 |
| LG1b | Lirung | 2013 | 0.4 | 5 | | 3 | 0.01, 0.1, 0.4 | 2013-12-05 | 7 |
| LG1c | Lirung | 2014 | 0.4 | 5 | | 3 | 0.01, 0.1, 0.4 | 2014-04-06 | 13 |
| LG2a | Lirung | 2013 | 0.42 | 5 | | 3 | 0.05, 0.15, 0.35 | 2013-09-20 | 13 |
| LG2b | Lirung | 2013 | 0.4 | 5 | | 3 | 0.01, 0.1, 0.4 | 2013-12-05 | 7 |
| LG2c | Lirung | 2014 | 0.4 | 5 | | 3 | 0.01, 0.1, 0.4 | 2014-04-07 | 12 |
| SDF1 | Suldenferner | 2014 | 0.6 | 30 | | 3 | 0.0, 0.02, 0.06 | 2014-07-30 | 54 |
| SDF2 | Suldenferner | 2014 | 0.12 | 60 | | 5 | 0.0, 0.03, 0.06, 0.09, 0.12 | 2014-09-26 | 319 |
| SDF3 | Suldenferner | 2014 | 0.24 | 60 | | 6 | 0.04, 0.08, 0.12, 0.16, 0.20, 0.24 | 2014-09-26 | 319 |
| SDF4 | Suldenferner | 2014 | 1 | 60 | | 6 | 0.0, 0.2, 0.4, 0.6, 0.8, 1.0 | 2016-09-25 | 278 |
| BG | Belvedere | 2003 | 0.27 | 15 | | 4 | 0.04, 0.11, 0.23, 0.27 | 2003-06-24 | 42 |
| LB_dry | Larsbreen | 2002 | 0.65 | 10 | | 8 | 0.0, 0.09, 0.19, 0.29, 0.38, 0.53, 0.61, 0.75 | 2002-07-21 | 5 |
| LB_exp | Larsbreen | 2002 | 0.65 | 10 | | 5 | 0.1, 0.2, 0.3, 0.4, 0.5 | 2002-07-09 | 12 |
| LB_sat | Larsbreen | 2002 | 0.65 | 10 | | 8 | 0.0, 0.05, 0.1, 0.15, 0.2, 0.3, 0.35, 0.4 | 2002-07-03 | 6 |
| VF1 | Vernagtferner | 2010 | 0.08 | 5 | | 3 | 0.04, 0.06, 0.08 | 2010-06-24 | 83 |
| VF2 | Vernagtferner | 2010 | 0.18 | 5 | | 3 | 0.07, 0.11, 0.15 | 2010-06-24 | 82 |
Code availability
The codes for this study are publicly available at
Author contributions
This publication is based on the MSc thesis of CB, supervised by LN. LN conceived the study, and CB performed the analysis, developed the interactive tool, produced the figures, and led the preparation of the article. Both CB and LN worked to finalize the article for publication.
Competing interests
The contact author has declared that neither of the authors has any competing interests.
Disclaimer
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.
Acknowledgements
Field datasets used for temperature forcing in this analysis (Fig. ) were provided by Mohan Chand, Rijan Kayastha, Martin Juen, and Christoph Mayer. In the course of the Masters thesis analysis, further forcing data were provided by members of the IACS working group on debris-covered glaciers (
We thank the handling editor, Ben Marzeion, for his support during the revision process and for accepting our revised article. We also thank Argha Banerjee and the three anonymous reviewers for their valuable comments during the review process, which helped further refine our article.
Financial support
The data collection by Lindsey Nicholson was supported by the Austrian Science Fund (FWF) under grant nos. V309 and P28521.
Review statement
This paper was edited by Ben Marzeion and reviewed by Argha Banerjee and three anonymous referees.