Introduction
The Earth system is currently in a state of rapid warming that is unprecedented even in geological records. This change is primarily driven by the rapid increase in atmospheric concentrations of greenhouse gases (GHGs) due to anthropogenic emissions since the industrial revolution. Changes in natural physical and biological systems are already being observed, and efforts are being made to determine the “anthropogenic impact” on particular (extreme weather) events. Nowadays, the question is not so much if, but by how much and how quickly, the climate will change as a result of human interference; whether this change will be smooth or bumpy; and whether it will lead to dangerous anthropogenic interference with the climate.
The climate system is characterized by positive feedbacks causing instabilities, chaos and stochastic dynamics, and many details of the processes determining the future behavior of the climate state are unknown. The debate on action on climate change is therefore focused on the question of risk and how the probability of dangerous climate change can be reduced. In scientific and political discussions, targets on “allowable” warming (in terms of change in global mean surface temperature, GMST, relative to pre-industrial conditions
We define pre-industrial temperature as the 1861–1880 mean temperature, in accordance with IPCC AR5.
) have turned out to be salient. The 2 K warming threshold is commonly seen – while gauging considerable uncertainties – as a safe threshold to avoid the worst effects that might occur when positive feedbacks are unleashed. Indeed, at the Paris COP21 conference it was agreed to attempt to limit warming to well below 2 K. It is, however, questionable whether the commitments made by countries (the so-called nationally determined contributions, NDCs) are sufficient to keep temperatures below the 1.5 K and possibly even the 2.0 K target.

A range of studies has appeared that provide insight into the safe level of cumulative emissions needed to stay below either the 1.5 or 2.0 K target, with a specified probability, at a certain time in the future, usually taken as the year 2100. The choice of a particular year is necessarily arbitrary and neglects the possibility of additional future warming. Early studies made use of Earth System Models of Intermediate Complexity (EMICs) to obtain such estimates. Because it was found that peak warming depends on cumulative carbon emissions but is independent of the emission pathway, the focus has been on specifying a safe level of cumulative emissions corresponding to a certain temperature target. More recent papers have also used emulators derived from either C4MIP models or CMIP5 (Coupled Model Intercomparison Project 5) models, with specified emission scenarios, for this purpose. Such a methodology was recently used to argue that limiting post-2015 cumulative emissions to a given budget would limit post-2015 warming to less than 0.6 C (so meeting the 1.5 K target) with a probability of 66 %.
In this paper we pose the following question: assuming one wants to limit warming to a specific threshold in the year 2100, while accepting a certain risk tolerance of exceeding it, when, at the latest, does one have to start ambitiously reducing fossil fuel emissions? The point in time after which it is “too late” to act in order to stay below the prescribed threshold is called the point of no return (PNR). The value of the PNR depends on a number of quantities, such as the climate sensitivity and the means available to reduce emissions. To determine estimates of the PNR, a model of global climate development is required that (a) is accurate enough to give a realistic picture of the behavior of GMST under a wide range of climate change scenarios, (b) is forced by fossil fuel emissions, (c) is simple enough to be evaluated for a very large number of different emission and mitigation scenarios and (d) provides information about risk, i.e., is not purely deterministic.
The models used in these earlier studies are clearly too idealized to determine adequate estimates of the PNR under different conditions. In this paper, we therefore construct a stochastic state-space model from the CMIP5 results, in which many global climate models were subjected to the same forcing for a number of climate change scenarios. This stochastic model – representing the various uncertainties in the climate model ensemble – is then used together with a broad range of mitigation scenarios to determine estimates of the PNR under different risk tolerances.
Earlier work showed that if the Paris Agreement temperature targets are to be met, only a few years are left for policy makers to take action by cutting emissions: with a given emissions reduction rate per year, the 1.5 K target has already become unachievable and the 2.0 K target becomes unachievable after 2017. The analysis highlights the crucial concept of the closing door, or PNR, of climate policy, but it is deterministic: it does not take account of the possibility that these targets are not met, and it does not allow for negative emission scenarios. We show here how the considerable climate uncertainties captured by our stochastic state-space model, the degree to which policy makers are willing to take risk, and the potential of negative emissions affect the carbon budget and the date at which climate policy becomes unachievable (the PNR). Climate policy is here defined not as an exponential emission reduction but as a steady increase in the share of renewable energy in total energy generation.
Methods
We let the annual-mean, area-weighted global mean surface temperature (GMST) deviation from pre-industrial conditions be the quantity of interest, of which the 1861–1880 mean is considered to be representative. From the CMIP5 scenarios we use the simulations of the pre-industrial control, abrupt quadrupling of atmospheric CO2, smooth increase of 1 % per year and the RCP (representative concentration pathway) scenarios 2.6, 4.5, 6.0 and 8.5. The data are obtained from the German Climate Computing Center (DKRZ), the ESGF Node at the DKRZ and KNMI's Climate Explorer. The forcings (CO2 concentrations and emissions) are obtained from the RCP Database.
As all CMIP5 models are designed to represent similar (physical) processes but use different formulations, parameterizations, resolutions and implementations, the results from different models offer a glimpse into the (statistical) properties of future climate change, including various forms of uncertainty. We perceive each model simulation as one possible, equally likely, realization of climate change. Applying ideas and methods from statistical physics, in particular linear response theory (LRT), a stochastic model is constructed that represents the CMIP5 ensemble statistics of GMST.
Linear response theory
We only use those ensemble members from CMIP5 for which the control run and at least one perturbation run are available, leading to 34 members for the abrupt (CO2 quadrupling) and 39 for the smooth-forcing experiment. Considering those members from the RCP runs that are also available in the abrupt-forcing run, we have 25 members for RCP2.6, 30 for RCP4.5, 19 for RCP6.0 and 29 for RCP8.5.
The CO2 concentration as a function of time is prescribed as C(t) = C0 (1 + 3 H(t)) for the abrupt quadrupling and C(t) = C0 · 1.01^t for the smooth increase, with time t in years from the start of the forcing, pre-industrial concentration C0 and Heaviside function H. The radiative forcing due to CO2 relative to pre-industrial conditions is given as f(t) = α ln(C(t)/C0) with α = 5.35 W m⁻². With LRT, the Green's function G(t) for the temperature response is computed from the abrupt-forcing case as the time derivative of the mean response, G(t) = (1/f4x) d⟨T⟩/dt, where f4x = α ln 4 is the forcing of the abrupt quadrupling. The temperature deviation from the pre-industrial state for any forcing f is then obtained via the convolution T(t) = ∫₀ᵗ G(t − s) f(s) ds. Because G is derived from the abrupt response, the convolution with the abrupt forcing reproduces the abrupt CMIP5 response exactly, by construction. In addition, for LRT to be a useful approximation, it has to reasonably reproduce the smooth 1 % per year CMIP5 response. Figure a shows that LRT applied to the abrupt perturbation perfectly recovers the abrupt response – as required – and is also well able to recover the response to a smooth forcing. The correspondence is very good for the mean response, and the variance is captured quite well.
Ensemble mean (a) and variance (b) of the temperature response from CMIP5 (solid) and the LRT reproduction (dashed). Year 0 gives the start of the perturbation. (c) Reconstruction of the RCP temperature evolution from concentration pathways. Blue, orange and green lines give CMIP5 data for RCP4.5, RCP6.0 and RCP8.5, respectively, with the ensemble mean given in solid (RCP4.5), dotted (RCP6.0) and dashed (RCP8.5) black. Reconstruction using radiative forcing in red (RCP4.5), purple (RCP6.0) and brown (RCP8.5).
[Figure omitted. See PDF]
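The LRT reconstruction described above amounts to a discrete convolution of a Green's function with a forcing time series. A minimal sketch of this idea follows; the two-exponential Green's function and its coefficients are illustrative assumptions, not the fitted values used in the paper:

```python
import numpy as np

def greens_function(t, a=(0.4, 0.3), tau=(4.0, 200.0)):
    # Illustrative Green's function G(t) = sum_i (a_i / tau_i) exp(-t / tau_i);
    # amplitudes a_i and timescales tau_i are made-up example values.
    return sum(ai / ti * np.exp(-t / ti) for ai, ti in zip(a, tau))

def lrt_response(forcing, dt=1.0):
    # T(t) = integral of G(t - s) f(s) ds, approximated as a Riemann sum
    # via discrete convolution, truncated to the forcing length.
    t = np.arange(len(forcing)) * dt
    G = greens_function(t)
    return np.convolve(G, forcing)[:len(forcing)] * dt

# Step (abrupt) forcing of 1 W m^-2: the response rises monotonically toward
# the equilibrium set by the integrated Green's function (sum of a_i = 0.7 K
# here, up to discretization error).
f_step = np.ones(2000)
T = lrt_response(f_step)
```

For a smooth forcing (e.g., one growing logarithmically with an exponentially increasing concentration) the same `lrt_response` call applies unchanged, which is the essence of the LRT approximation.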
Beyond finding the temperature change as a result of CO2 variations, ultimately emissions cause these changes and have to be addressed explicitly. A multi-model study of many carbon models of varying complexity under different background states and forcing scenarios was recently presented. There, a fit of a three-timescale exponential with constant offset was proposed for the ensemble mean of responses to an emission pulse added to a present-day climate, of the form r(t) = a0 + Σᵢ₌₁³ aᵢ e^(−t/τᵢ). Coefficients aᵢ and timescales τᵢ are determined using least-squares fits on the multi-model mean. The CO2 concentration then follows from the convolution of r with the emissions. In doing so, we use a response function that is independent of the size of the impulse, i.e., we assume the carbon cycle reacts in the same (proportional) way to pulses of all sizes. This is of course a simplification, especially as very large pulses might unleash positive feedbacks associated with the saturation of natural sinks such as the oceans, but it works reasonably well in the range of emissions we are primarily interested in.
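A sketch of this carbon response, using the impulse response coefficients listed in the parameter table; the GtC-to-ppm conversion factor of roughly 2.12 GtC per ppm is the commonly quoted value and an assumption here:

```python
import numpy as np

GTC_PER_PPM = 2.12  # assumption: ~2.12 GtC of carbon per 1 ppm atmospheric CO2

def carbon_irf(t, a0=0.2173, a=(0.2240, 0.2824, 0.2763), tau=(394.4, 36.54, 4.304)):
    # Impulse response: airborne fraction remaining t years after an emission
    # pulse, a constant offset plus three decaying exponentials.  The
    # coefficients are the fitted values from the parameter table.
    return a0 + sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

def co2_concentration(emissions, c0=278.0, dt=1.0):
    # C(t) = C0 + (1/2.12) * sum_s r(t - s) E(s) dt, with E in GtC per year
    t = np.arange(len(emissions)) * dt
    r = carbon_irf(t)
    return c0 + np.convolve(r, emissions)[:len(emissions)] * dt / GTC_PER_PPM

# A single 100 GtC pulse in year 0: the initial jump decays toward the
# long-lived airborne fraction set by the constant offset a0.
e = np.zeros(1000)
e[0] = 100.0
C = co2_concentration(e)
```

Note that the coefficients sum to 1, so the full pulse is airborne at t = 0 and the response then relaxes on the three fitted timescales.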
The full (temperature and carbon) LRT model relates fossil emissions to the mean GMST perturbation, with the pre-industrial concentration C0 as initial condition for CO2 and zero initial GMST perturbation. This is quite a simple model with few “knobs to turn”. The only truly free parameter is the constant that scales up the CO2 radiative forcing to take into account non-fossil and non-CO2 GHG emissions (not present in the idealized scenarios) and that matches the carbon and temperature models (estimated from different model ensembles) to each other.
This constant was chosen to optimize the agreement with the CMIP5 RCPs. The resulting reconstruction of temperatures from RCP concentrations, overlaid with CMIP5 data (Fig. c), shows good agreement.
Stochastic state-space model. Carbon model on the left, temperature model on the right. denotes the Wiener process.
Internally, emissions need to be converted from GtC year⁻¹ to ppm year⁻¹ using the respective molar masses and the mass of the Earth's atmosphere. Our estimates of the model's 10 parameters are listed in the table below.
Stochastic state-space model parameters. All timescales are in years, the carbon model amplitudes are dimensionless and the temperature model amplitudes are in K year⁻¹ (W m⁻²)⁻¹.
a0 | a1 | a2 | a3 | τ1 (year) | τ2 (year) | τ3 (year)
---|---|---|---|---|---|---
0.2173 | 0.2240 | 0.2824 | 0.2763 | 394.4 | 36.54 | 4.304
C0 (ppm) | (K year⁻¹ (W m⁻²)⁻¹) | (K year⁻¹ (W m⁻²)⁻¹) | (K year⁻¹ (W m⁻²)⁻¹) | (year) | (year) |
278 | 0.00115176 | 0.10967972 | 0.03361102 | 400 | 1.42706247 |
(W m⁻²) | (ppm year⁻¹) | (K year⁻¹) | (year⁻¹) | (year) | (year) |
1.48 | 5.35 | 0.65 | 0.015 | 0.13 | 8.02118539
In Fig. we show the results obtained for RCP emissions. For very-high-emission scenarios we underestimate CO2 concentrations because for such emissions natural sinks saturate, a process that the pulse-size-independent carbon response function cannot adequately capture. However, the upscaling of the radiative forcing is quite successful, yielding a good temperature reconstruction.
Reconstruction of RCP results using the response function model. In all panels, solid lines refer to RCP4.5, dotted to RCP6.0 and dashed to RCP8.5. Black lines show RCP data while colors (blue: RCP4.5, orange: RCP6.0, green: RCP8.5) give our reconstruction. (a) Fossil emissions. (b) CO2 concentrations from RCP and reconstructed using the carbon response function. (c) Total anthropogenic radiative forcing (black) and radiative forcing from CO2 only (red), both from RCP, and the reconstructed forcing using the relations above. (d) Temperature perturbation from CMIP5 RCP (ensemble mean) and our reconstruction.
[Figure omitted. See PDF]
Stochastic state-space model
The model outlined above still contains a data-based temperature response function, and it informs only about the mean CMIP5 response. However, our main motivation is to obtain new insights into the possible evolution to a “safe” carbon-free state, and such paths necessarily depend strongly on the variance of the climate and on the risk one is willing to take. This variance in temperature is quite substantial, as is evident from Fig. b and c. Therefore we translate our response function model into a stochastic state-space model and incorporate the variance via suitable stochastic terms.
The response function from the 140-year abrupt quadrupling ensemble is well approximated by a sum of exponential modes. Although the longest timescale cannot be constrained by such a short record, we require it to be finite for temperatures to stabilize at some level. Hence, we choose a long timescale of 400 years, which cannot really be determined from the 140-year abrupt-forcing (CMIP5) runs. By writing each exponential mode as the solution of a linear differential equation, the LRT model can be transformed into the 7-dimensional stochastic state-space model (SSSM) summarized above, with the parameters listed in the table. Initial conditions are obtained by running the noise-free model forward from pre-industrial conditions (zero temperature perturbation and CO2 concentration C0) to the present day, driven by the historical fossil fuel and cement production emissions.
The major benefit of this formulation is that we can include stochasticity. We introduce additive noise into the carbon model such that the reported standard deviation of the model response to an emission pulse is recovered. For the temperature model we introduce (small) additive noise to recover the (small) CMIP5 control run standard deviation. In the CMIP5 RCP runs the ensemble variance increases with rising ensemble mean. This calls for the introduction of (substantial) multiplicative noise, which we introduce in the temperature model, letting these random fluctuations decay over an 8-year timescale. The magnitude of these fluctuations is (especially at high temperatures) likely to be unrealistic when looking at individual time series. However, the focus here is on ensemble statistics.
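The transformation from response function to state space works because each exponential mode of the Green's function solves a linear differential equation, and noise can then be added mode by mode. A minimal one-mode sketch with made-up parameters (not the paper's calibration) using Euler–Maruyama integration:

```python
import numpy as np

def simulate_mode(F, tau=36.5, b=0.1, sigma=0.05, dt=1.0, n_ens=500, seed=0):
    # Euler-Maruyama integration of one response mode:
    #     dx = (-x / tau + b * F(t)) dt + sigma dW
    # The deterministic part reproduces the exponential convolution term
    #     x(t) = b * integral of exp(-(t - s) / tau) F(s) ds,
    # while the additive noise spreads the ensemble around that mean.
    rng = np.random.default_rng(seed)
    n = len(F)
    x = np.zeros((n_ens, n))
    for k in range(1, n):
        drift = (-x[:, k - 1] / tau + b * F[k - 1]) * dt
        x[:, k] = x[:, k - 1] + drift + sigma * np.sqrt(dt) * rng.standard_normal(n_ens)
    return x

F = np.ones(200)        # constant unit forcing
X = simulate_mode(F)
mean = X.mean(axis=0)   # relaxes toward the equilibrium b * tau
```

The full SSSM stacks several such modes for temperature and carbon; multiplicative noise would enter by scaling `sigma` with the state, which is omitted here for brevity.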
Transition pathways
The SSSM described in the previous section is forced with fossil emissions. We assume that, in the absence of any mitigation actions, emissions increase from their initial value at a fixed exponential rate per year due to economic and population growth. Political decisions cause emissions to decrease from a starting year onward, as fossil energy generation is replaced by non-GHG-producing forms such as wind, solar and water (mitigation) and by an increasing share of fossil energy sources whose emissions are not released but captured and stored away by carbon capture and storage (abatement).
In addition, negative emission technologies may be employed. They cause a direct reduction in the atmospheric CO2 concentration and are here modeled as an exponential approach to a constant rate of negative emissions. For long timescales, these (after a transient) constant negative emissions may not be realistic; however, we are interested in the period until the year 2100. We model mitigation and abatement in a very simple way by letting both increase linearly until emissions are brought to zero, with constants respectively giving the mitigation and abatement rates at the start of the scenario and their incremental year-to-year increases. The simplified model (Eq. 14) is very well able (not shown) to reproduce the integrated assessment model (IAM) pathways that fulfill the NDCs until 2030 and afterwards reach the 2 K target with a 50–66 % probability. These pathways are exemplary of those that continue on the low-commitment path for a while, followed by strong and decisive action. From them we obtain a family of negative emission scenarios, out of which we pick a pathway with strong negative emissions; using the starting year 2061, it is very well approximated by our exponential parameterization.

Point of no return
With the emission scenarios and the SSSM – returning CO2 concentrations and GMST for any such scenario – one can now address the issue of transitioning from the present day (year 2015) to a carbon-free era so as to avoid catastrophic climate change. We need to take into account both the target threshold and the risk one is willing to take of exceeding it. The maximum amount of cumulative emissions that allows for reaching the 1.5 and 2.0 K targets, as a function of the risk tolerance, is called the safe carbon budget (SCB). It is well established in the literature but does not contain information on how these emissions are spread in time. This is where the PNR comes in: the PNR is the point in time beyond which starting mitigating action can no longer keep warming below a specified target with a chosen risk tolerance.
Concretely, let the temperature target be the maximum allowable warming and let the tolerance parameter denote the probability of staying below that target (a measure of the risk tolerance). For example, a tolerance of 0.9 corresponds to a 90 % probability of staying below the target warming, i.e., 90 of 100 realizations of the SSSM, started in 2015 and integrated until 2100, do not exceed the target in the year 2100.
Then, in the context of Eq. (14), the PNR is the earliest starting year for mitigation that does not result in reaching the defined “safe state” in terms of target and tolerance. It is determined from the probability distribution of GMST in 2100.
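Operationally, the PNR can be read off from Monte Carlo ensembles: for each candidate starting year, count the fraction of realizations that stay below the target in 2100 and find the earliest year at which that fraction drops below the tolerance. A toy sketch follows; the Gaussian ensembles are stand-ins for SSSM output, not its actual statistics:

```python
import numpy as np

def point_of_no_return(start_years, t2100_ensembles, target=1.5, p=0.67):
    # The PNR is the earliest start year for which the probability of staying
    # below `target` in 2100 drops below the tolerance p.  `t2100_ensembles`
    # maps each candidate start year to an array of simulated 2100 warmings.
    for y in start_years:
        prob_safe = np.mean(t2100_ensembles[y] < target)
        if prob_safe < p:
            return y
    return None  # target achievable at tolerance p for all considered years

# Toy illustration: delaying action shifts the 2100 warming distribution upward.
rng = np.random.default_rng(1)
years = range(2015, 2051)
ens = {y: 1.0 + 0.02 * (y - 2015) + 0.2 * rng.standard_normal(5000) for y in years}
pnr = point_of_no_return(years, ens, target=1.5, p=0.67)
```

In the paper's setting each ensemble would come from integrating the SSSM under the corresponding mitigation pathway rather than from the Gaussian shortcut above.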
Both the SCB and the PNR depend on the temperature target, climate uncertainties and risk tolerance, but the PNR also depends on the aggressiveness of the climate action considered feasible (here given by the rate of mitigation increase). This makes the PNR an interesting quantity, since the SCB does not depend on the time path of emission reductions.
Clearly there is a close connection between the PNR and the SCB. Indeed, one could define a PNR also in terms of the ability to reach the SCB. The one-to-one relation between cumulative emissions and warming gives the PNR in “carbon space”. Its location in time, however, depends crucially on how fast a transition to a carbon-neutral economy is feasible.
For details on the scenarios, we refer to the cited literature. With carbon budgets rapidly running out and the PNR approaching fast, negative emissions may have to become an essential part of the policy mix. Such policies are cheap but may only be a temporary fix and lead to undesirable spillover effects on neighboring countries. We abstract from these discussions here since they are beyond the scope of the present paper.
Results
To demonstrate the quality of the SSSM we initialize it at pre-industrial conditions, run it forward and compare the results with those of the CMIP5 models. The SSSM is well able to reproduce the CMIP5 model behavior under the different RCP scenarios (shown for RCP2.6 and RCP4.5 in the figure). As these scenarios are very different in terms of rate of change and total cumulative emissions, this is not a trivial finding. It is actually remarkable that the SSSM, which is based on a limited number of CMIP5 model ensemble members, performs so well. As an example, the RCP2.6 scenario contains substantial negative emissions, responsible for the downward trend in GMST, which our SSSM correctly reproduces. The mean response for RCP8.5 is slightly underestimated (not shown) because the uncertainty in the carbon cycle plays a rather minor role compared to that in the temperature model. In addition, for such large emissions, positive feedback loops set in from which our SSSM abstracts. For strong forcing scenarios the temperature perturbation is very closely log-normally distributed, while for weak forcing scenarios (e.g., RCP2.6 and RCP4.5) it is approximately Gaussian. The CO2 concentration is found to be Gaussian distributed for all RCP scenarios. These findings (log-normal temperature and Gaussian concentration) result from the multiplicative and additive noise in the temperature and carbon components of the SSSM, respectively.
Stochastic state-space model applied to RCP scenarios. (a, b) Ensemble mean and 5th and 95th percentile envelopes of the CMIP5 RCPs (blue) and the stochastic model (orange). (c) Probability density functions for the temperature perturbation in 2100, based on 5000 ensemble members, driven by forcing from RCP2.6 (blue), RCP4.5 (orange), RCP6.0 (green) and RCP8.5 (red). In black are fitted log-normal distributions.
[Figure omitted. See PDF]
The safe carbon budget. Temperature perturbation in 2100 as a function of cumulative emissions for different risk tolerances. The black curve gives the deterministic result, with the noise terms in the stochastic model set to zero.
[Figure omitted. See PDF]
To determine the SCB, 6000 emission reduction strategies were generated and, using the SSSM, an 8000-member ensemble for each of these emission scenarios, starting in 2015, was integrated. Emission scenarios are generated from Eq. (14) by drawing the starting year from a uniform distribution and the mitigation-increase rate from a beta distribution, with the [0, 1] interval scaled such that emissions reach zero at the latest in 2080. The beta distribution is chosen for practical reasons to sample (starting year, rate) pairs. As the starting year is drawn from a uniform distribution, doing likewise for the rate would result in many pathways with very quick mitigation and low cumulative emissions. Choosing a beta distribution makes draws of small rates much more likely and leads to a better sampling of high-cumulative-emission scenarios. The choice of distribution has no consequences for the results.
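The scenario sampling can be sketched as follows; the beta shape parameters below are illustrative guesses (chosen only to demonstrate the right-skew toward slow mitigation), not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 6000
# Starting years uniform over the policy window; ramp rates beta-distributed
# and right-skewed, so that small rates (slow mitigation, high cumulative
# emissions) are sampled more densely than under a uniform draw.
t_start = rng.uniform(2015, 2080, size=n)
ramp = rng.beta(1.0, 3.0, size=n)   # illustrative shape parameters

scenarios = list(zip(t_start, ramp))
```

A Beta(1, 3) draw has mean 1/4 and places well over half its mass below that mean, which is exactly the oversampling of slow-mitigation pathways described above.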
The temperature anomaly in 2100 as a function of cumulative emissions is shown in the figure, together with the same calculation for the deterministic case without climate uncertainty (no noise in the SSSM). The SCB is given by the point on the horizontal axis where the (colored) line corresponding to a chosen risk tolerance crosses the (horizontal) line corresponding to a chosen temperature threshold. The curves are very well described by two-parameter expressions whose coefficients depend on the tolerance. For the range of emissions considered here, a linear fit would also be reasonable; however, our expression also works for cumulative emissions in the range of business as usual (when fitting parameters on suitable emission trajectories). From these curves we easily find the SCB for any combination of threshold and tolerance, as shown in the table below.
Safe Carbon Budget (in GtC since 2015) as a function of threshold and safety probability .
 | 0.5 | 0.67 | 0.9 | 0.95 | Noise-free
---|---|---|---|---|---
1.5 K | 247 | 198 | 107 | 69 | 233
2.0 K | 492 | 424 | 298 | 245 | 469
Allowable emissions are drastically reduced when enforcing the target with a higher probability (following the horizontal lines from right to left in the figure). These results show in particular the challenge posed by the 1.5 K target compared to the 2.0 K target.
From IPCC AR5 we find post-2015 cumulative emissions that “likely” keep warming below 2 K; our corresponding SCB lies in the same range, and we likewise find a comparable budget for staying below the 1.5 K target.
To determine the PNR, we resort to three illustrative choices for the abatement and mitigation rates.
Following Eq. (14), we construct fast mitigation (FM) and moderate mitigation (MM) scenarios with high and moderate rates of mitigation increase, respectively. In addition, in an extreme mitigation (EM) scenario emissions are brought to zero instantaneously. This corresponds to the most extreme physically possible scenario and serves as an upper bound.
When varying the starting year to find the PNR for the three scenarios, we always keep the initial mitigation and abatement rates at their 2015 values.
As an example, one particular starting year leads to total cumulative emissions from 2015 onward of 109, 183 and 335 GtC for the mitigation scenarios EM, FM and MM, respectively. MM is the most modest scenario, but it is actually quite ambitious, considering that the observed year-to-year increases in the share of renewable energies between 2005 and 2015 were very small.
The figure shows the probabilities of staying below the 1.5 and 2.0 K thresholds in 2100 as a function of the starting year for different policies, including FM and MM, while the EM policy bounds the unachievable region. This region is clearly larger for the 1.5 K than for the 2.0 K target, and it shrinks when negative emissions are included. From the plot we can directly see the consequences of delaying action until a given year. For example, if policy makers should choose to implement the MM strategy only in 2040, the chances of reaching the 1.5 K (2.0 K) target are only 2 % (47 %). We conclude that the remaining “window of action” may be small, but a window still exists for both targets. For example, the 2.0 K target is reached with a probability of 67 % even when starting MM is delayed until 2035. However, reaching the 1.5 K target appears unlikely, as MM would be required to start in 2018 for a probability of 67 %. When requiring a high (90 % or 95 %) probability, the 1.5 K target is impossible to reach with the MM scenario. The PNR for the different targets and probabilities is shown in the table and figure below.
Point of no return as a function of threshold and safety probability without and with strong negative emissions.
0.5 | 0.67 | 0.9 | 0.95 | noise-free | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|
none | strong | none | strong | none | strong | none | strong | none | strong | ||
EM | K | 2038 | 2046 | 2034 | 2042 | 2026 | 2035 | 2022 | 2032 | 2037 | 2045 |
K | 2056 | 2062 | 2051 | 2058 | 2042 | 2049 | 2038 | 2046 | 2055 | 2061 | |
FM | K | 2032 | 2039 | 2027 | 2036 | 2020 | 2028 | 2016 | 2025 | 2030 | 2038 |
K | 2050 | 2056 | 2045 | 2052 | 2036 | 2043 | 2032 | 2039 | 2048 | 2055 | |
MM | K | 2022 | 2029 | 2018 | 2026 | – | 2019 | – | – | 2021 | 2029 |
K | 2040 | 2046 | 2035 | 2042 | 2026 | 2033 | 2022 | 2030 | 2038 | 2045 |
The point of no return. Probability of staying below the 1.5 K (a, c) or 2.0 K (b, d) threshold when starting emission reductions in a given year, for different policies as described by Eq. (14) with different choices for the rate of mitigation increase per year. Top and bottom panels show the cases without and with strong negative emissions, respectively. The point of no return for a given policy is given by the point in time where the probability drops below a chosen threshold. The default threshold of two-thirds is dashed. The unachievable region is bounded by the extreme mitigation scenario.
[Figure omitted. See PDF]
Including strong negative emissions delays the PNR by 6–10 years, which may be very valuable, especially for ambitious targets. For example, one can then reach the 1.5 K target with a probability of up to 66 % in the MM scenario when acting before 2026, 8 years later than without negative emissions.
The PNR varies substantially for slightly different temperature targets. This also illustrates the importance of the temperature baseline relative to which warming is defined, as has been found previously. Switching to a (lower) 18th-century baseline increases current levels of warming and thereby brings the PNR forward. For example, for a maximum temperature threshold of 1.5 K, the PNR moves from 2022 to 2016 in the MM scenario and from 2038 to 2033 for the EM scenario.
It is clear that an energy transition more ambitious than RCP2.6 is required to stay below 1.5 K with some acceptable probability, and whether that is feasible is doubtful. For all other RCP scenarios, exceeding 2.0 K is very likely in this century (see the figure below).
(a, b) Instantaneous probability of exceeding 1.5 K (a) and 2.0 K (b) for different emission scenarios. RCP scenarios are shown as dashed lines while solid lines give MM scenario results starting in 2025 (red) and 2040 (brown). Dashed horizontal lines give two reference probabilities, including 0.67. (c) Fossil fuel emissions in GtC for the same scenarios.
[Figure omitted. See PDF]
The parameter sensitivities of the SCB and PNR were determined by varying each parameter by ±10 %. The table below shows the results for selected parameters for a small (1.5 K, 0.95), intermediate (1.5 K, 0.67) and large (2.0 K, 0.67) SCB, corresponding to a close, intermediate and far PNR.
The biggest sensitivities are found for the radiative forcing parameters. The parameters of the carbon model do not have big impacts on the SCB, on the order of 0–17 GtC, with larger numbers found for larger absolute values of the SCB. The temperature-model parameters are more important, changing the SCB by up to around 10 % for large and 50 % for small SCB values. The model is particularly sensitive to changes in the intermediate timescale. The PNR sensitivities are generally small; we find the most relevant, yet small, sensitivities in the temperature-model parameters. For example, a 10 % error in the intermediate timescale can move the PNR by 3–4 years.
The sensitivity of the SCB and PNR to the noise amplitudes is small, with the largest values found for the multiplicative noise amplitude that is responsible for most of the spread of the temperature distribution. Increasing the noise amplitudes decreases the SCB, in accordance with the expectation that larger climate uncertainty leads to tighter constraints.
It is useful to remember that the stochastic formulation of our model is designed with the explicit purpose to incorporate parameter uncertainty in a natural way via the noise term, without having to make specific assumptions on the uncertainties of individual parameters.
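The one-at-a-time perturbation procedure can be sketched generically; the one-box equilibrium-warming function below is a toy stand-in for the full SCB/PNR computation, with made-up parameter values:

```python
def sensitivity(f, params, frac=0.10):
    # One-at-a-time sensitivity: perturb each parameter by -frac and +frac and
    # report the change in the scalar output f(params) relative to the
    # undisturbed value (a stand-in for the SCB or PNR computation).
    base = f(params)
    out = {}
    for k, v in params.items():
        lo = f({**params, k: v * (1 - frac)})
        hi = f({**params, k: v * (1 + frac)})
        out[k] = (lo - base, hi - base)
    return out

# Toy example: equilibrium warming of a one-box model, T_eq = b * tau * F
warming = lambda p: p["b"] * p["tau"] * p["F"]
s = sensitivity(warming, {"b": 0.1, "tau": 36.5, "F": 3.7})
```

Because the toy output is linear in each parameter, every ±10 % perturbation here moves the output by exactly ∓/±10 %; the interest in the full model lies in the nonlinear deviations from that pattern.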
Sensitivity of the safe carbon budget (SCB) and point of no return (PNR) to selected parameter variations. Values are given as differences in GtC (SCB) and number of years (PNR) relative to the undisturbed value (top row). The PNR values all refer to the EM scenario. First and second numbers give parameter decrease and increase, respectively.
SCB | PNR | |||||
---|---|---|---|---|---|---|
1.5 K, 0.95 | 1.5 K, 0.67 | 2.0 K, 0.67 | 1.5 K, 0.95 | 1.5 K, 0.67 | 2.0 K, 0.67 | |
undisturbed | 69 | 198 | 424 | 2022 | 2034 | 2051 |
3, 3 | 8, 8 | 16, 17 | 1, 0 | 0, 1 | 2, 1 | |
1, 1 | 2, 3 | 5, 8 | 0, 0 | 0, 0 | 1, 0 | |
4, 3 | 4, 4 | 4, 6 | 1, 0 | 0, 1 | 1, 0 | |
4, 4 | 6, 6 | 8, 11 | 1, 0 | 0, 1 | 1, 0 | |
56, 45 | 73, 59 | 104, 86 | 5, 4 | 6, 5 | 8, 6 | |
12, 12 | 19, 19 | 27, 28 | 1, 1 | 1, 2 | 2, 2 | |
32, 28 | 37, 33 | 54, 49 | 3, 3 | 3, 3 | 4, 3 | |
12, 12 | 19, 18 | 27, 28 | 2, 1 | 1, 2 | 3, 1 | |
38, 33 | 38, 34 | 55, 50 | 4, 3 | 3, 3 | 4, 3 | |
10, 10 | 0, 0 | 1, 2 | 2, 1 | 0, 0 | 0, 0 |
Summary, discussion and conclusions
We have developed a novel stochastic state-space model (SSSM) that accurately captures the basic statistical properties (mean and variance) of the CMIP5 RCP ensemble, allowing us to study warming probabilities as a function of emissions. It represents an alternative to approaches that place stochasticity in the parameters rather than in the state. Although the model is highly idealized, it captures the simulated temperature and carbon responses to RCP emission scenarios quite well.
A weakness of the SSSM is the simulation of temperature trajectories beyond 2100 and for high-emission scenarios. The large multiplicative noise factor leads – especially at high mean warmings – to immensely volatile individual trajectories that in all likelihood are not physical (the ensemble distribution is still well behaved). It might be worthwhile to investigate how this could be improved. Another weakness, in the carbon component of the SSSM, is that the real carbon cycle is not pulse-size independent. Hence, using a single constant response function has inherent problems, in particular when running very high-emission scenarios, because the efficiency of the natural carbon sinks of the ocean and land reservoirs is a function of both temperature and the reservoir sizes. The SSSM therefore has slight problems reproducing CO2 concentration pathways, a price we accept as we focus on the CMIP5 temperature reproduction.
Taking account of non-CO2 emissions more fully, beyond our simple scaling, and also avoiding temporary overshoots of the temperature caps would reduce the carbon budgets and thus lead to earlier PNRs than given here. Therefore our values might be a little too optimistic.
In , the authors draw a different conclusion from studying a similar problem. In their FAIR model, they introduce response functions whose parameters adjust dynamically with warming to represent sink saturation; consequently, their model reproduces concentrations much better. An interesting lead for future research would be to repeat our analysis (in terms of SCB and PNR) with other simple models (such as FAIR or MAGICC) to identify similarities and differences. However, only rather low-emission scenarios are consistent with the 1.5 or 2 K targets, so we do not expect such nonlinearities to play a major role, and indeed our carbon budgets are very similar to .
The concept of a point of no return introduces a novel perspective into the discussion of carbon budgets that is often centered on the question of when the remaining budget will have “run out” at current emissions. In contrast, the PNR concept recognizes the fact that emissions will not stay constant and can decay faster or slower depending on political decisions.
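As a back-of-envelope illustration of this framing (with placeholder numbers, not the budgets or policies computed in this work): if emissions stay constant at a rate E0 until mitigation starts and then decay exponentially at rate r, the PNR follows directly from requiring cumulative emissions to stay within a budget B.

```python
# Illustrative PNR arithmetic: emissions stay at e0 until mitigation starts
# at year t_s, then decay exponentially at rate `rate`, so cumulative
# emissions are e0 * t_s + e0 / rate.  The PNR is the latest t_s keeping
# this under the budget.  All numbers are placeholders, not this paper's.

def point_of_no_return(budget, e0, rate):
    """Latest mitigation start (years from now) that respects the budget."""
    tail = e0 / rate              # emissions released after mitigation begins
    if tail > budget:
        return None               # even immediate action overshoots
    return (budget - tail) / e0   # years of continued constant emissions

pnr = point_of_no_return(budget=300.0, e0=10.0, rate=0.05)
print(pnr)  # → 10.0 years left under these illustrative numbers
```

The formula makes the trade-off explicit: a faster decay rate shrinks the post-mitigation tail and so pushes the PNR later, which is the sense in which the PNR depends on political decisions about the speed of mitigation.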
With these caveats in mind, we conclude that, first, the PNR is still relatively far away for the 2 K target: with the MM scenario we have 17 years left to start. If all emissions could be set to zero instantaneously, the PNR is delayed even to the 2050s. Considering the slow speed of large-scale political and economic transformations, decisive action is still warranted, as the MM scenario is a large change compared to current rates. Second, the PNR is very close, or already passed, for the 1.5 K target. Here more radical action is required: 9 years remain to start the FM policy to avoid a 1.5 K increase with a 67 % chance, and strong negative emissions give us 8 years under the MM policy.
Third, we can clearly quantify the effects of changing the temperature target and the mitigation scenario. Switching from the 1.5 to the 2 K target buys additional years. Allowing a one-third, instead of a one-tenth, exceedance risk buys an additional 7–9 years. Allowing the more aggressive FM policy instead of MM buys an additional 10 years. This lets trade-offs be assessed, for example between tolerating higher exceedance risks and implementing more radical policies.
Fourth, negative emissions can offer a brief respite but delay the PNR only by a few years, and this does not take into account the possible decrease in the effectiveness of such measures in the long term .
In this work a large ensemble of simulations was used in order to average over stochastic internal variability. This allows us to determine the point in time where a threshold is crossed at a chosen probability level. Such an ensemble is not possible for more realistic models, nor do GCMs agree on details of internal variability. Therefore, in practice, the crossing of a threshold will likely be determined with hindsight and using long temporal means. This fact should lead us to be more cautious in choosing mitigation pathways.
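The ensemble procedure can be sketched as follows, using a purely illustrative trend-plus-noise ensemble in place of the SSSM output: generate many members, track each member's running maximum, and report the first year in which the chosen fraction of members has exceeded the threshold. The trend, noise level, and ensemble size below are assumptions for demonstration only.

```python
import numpy as np

# Sketch of the ensemble idea: simulate many noisy warming trajectories and
# find the first year by which a chosen fraction of members has crossed a
# threshold.  The trend/noise model is purely illustrative.
rng = np.random.default_rng(1)

n_members, n_years = 2000, 100
trend = 1.0 + 0.02 * np.arange(n_years)            # K, linear warming
noise = rng.normal(0.0, 0.15, (n_members, n_years))
ensemble = trend + noise                           # shape (member, year)

threshold, p_level = 2.0, 0.67
# Running maximum: once a member has crossed, it counts as crossed thereafter.
frac_crossed = (np.maximum.accumulate(ensemble, axis=1) > threshold).mean(axis=0)
crossing_year = int(np.argmax(frac_crossed >= p_level))
print(crossing_year, frac_crossed[crossing_year])
```

In a single realization of the real climate no such ensemble exists, which is why, as noted above, threshold crossing would in practice be diagnosed in hindsight from long temporal means.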
We have shown the constraints that restricting the GMST increase to below 1.5 or 2 K puts on future emissions, and the crucial importance of the safety probability. Further scientific and political debate is essential on the right values for both the temperature threshold and the probability. Our findings are sobering in light of the bold ambition of the Paris Agreement and add to the sense of urgency to act quickly, before the PNR has been crossed.
The study is based on publicly available data sets as described in the Methods section. Model and analysis scripts and outputs are available on request from the corresponding author.
MA and HAD developed the research idea, MA developed the model and performed the analysis. All authors discussed the results and contributed to the writing of the paper.
The authors declare that they have no conflict of interest.
Acknowledgements
We thank the focus area “Foundations of Complex Systems” of Utrecht University for providing the finances for the visit of Frederick van der Ploeg to Utrecht in 2016. Matthias Aengenheyster is thankful for support by the German Academic Scholarship Foundation. Henk A. Dijkstra acknowledges support by the Netherlands Earth System Science Centre (NESSC), financially supported by the Ministry of Education, Culture and Science (OCW), Grant no. 024.002.001. Edited by: Christian Franzke Reviewed by: two anonymous referees
© 2018. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”).
Details
1 Atmospheric, Oceanic and Planetary Physics, Department of Physics, Oxford University, Oxford, UK
2 Institute for Marine and Atmospheric Research Utrecht, Department of Physics, Utrecht University, Utrecht, the Netherlands; Centre for Complex Systems Studies, Utrecht University, Utrecht, the Netherlands
3 Centre for the Analysis of Resource Rich Economies, Department of Economics, Oxford University, Oxford, UK