Eur. Phys. J. C (2016) 76:96, DOI 10.1140/epjc/s10052-015-3864-0
Regular Article - Experimental Physics
Killing the cMSSM softly
Philip Bechtle1,a, José Eliel Camargo-Molina2,b, Klaus Desch1,c, Herbert K. Dreiner1,3,d, Matthias Hamer4,e, Michael Krämer5,f, Ben O'Leary6,g, Werner Porod6,h, Björn Sarrazin1,i, Tim Stefaniak7,j, Mathias Uhlenbrock1,k, Peter Wienemann1,l
1 Physikalisches Institut, University of Bonn, Bonn, Germany
2 Department of Astronomy and Theoretical Physics, Lund University, 223-62 Lund, Sweden
3 Bethe Center for Theoretical Physics, University of Bonn, Bonn, Germany
4 Centro Brasileiro de Pesquisas Físicas, Rio de Janeiro, Brazil
5 Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen, Aachen, Germany
6 Institut für Theoretische Physik und Astrophysik, University of Würzburg, Würzburg, Germany
7 Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, CA 95064, USA
Received: 28 August 2015 / Accepted: 22 December 2015 / © The Author(s) 2016. This article is published with open access at Springerlink.com
Abstract We investigate the constrained Minimal Supersymmetric Standard Model (cMSSM) in the light of constraining experimental and observational data from precision measurements, astrophysics, direct supersymmetry searches at the LHC and measurements of the properties of the Higgs boson, by means of a global fit using the program Fittino. As in previous studies, we find rather poor agreement of the best fit point with the global data. We also investigate the stability of the electroweak vacuum in the preferred region of parameter space around the best fit point. We find that the vacuum is metastable, with a lifetime significantly longer than the age of the Universe. For the first time in a global fit of supersymmetry, we employ a consistent methodology to evaluate the goodness-of-fit of the cMSSM in a frequentist approach by deriving p values from large sets of toy experiments. We analyse analytically and quantitatively the impact of the choice of the observable set on the p value, and in particular its dilution when confronting the model with a large number of barely constraining measurements. Finally, for the preferred sets of observables, we obtain p values for the cMSSM below 10 %, i.e. we exclude the cMSSM as a model at the 90 % confidence level.

a e-mail: bechtle@physik.uni-bonn.de
b e-mail: eliel@thep.lu.se
c e-mail: desch@physik.uni-bonn.de
d e-mail: dreiner@uni-bonn.de
e e-mail: mhamer@cbpf.br
f e-mail: mkraemer@physik.rwth-aachen.de
g e-mail: ben.oleary@physik.uni-wuerzburg.de
h e-mail: porod@physik.uni-wuerzburg.de
i e-mail: sarrazin@physik.uni-bonn.de
j e-mail: tistefan@ucsc.edu
k e-mail: uhlenbrock@physik.uni-bonn.de
l e-mail: wienemann@physik.uni-bonn.de
1 Introduction
Supersymmetric theories [1,2] offer a unique extension of the external symmetries of the Standard Model (SM) with spinorial generators [3]. Due to the experimental constraints on the supersymmetric masses, supersymmetry must be broken. Supersymmetry allows for the unification of the electromagnetic, weak and strong gauge couplings [4-6]. Through radiative symmetry breaking [7,8], it allows for a dynamical connection between supersymmetry breaking and the breaking of SU(2)×U(1), and thus a connection between the unification scale and the electroweak scale. Furthermore, supersymmetry provides a solution to the fine-tuning problem of the SM [9,10], if at least some of the supersymmetric particles have masses below or near the TeV scale [11]. Moreover, in supersymmetric models with R-parity conservation [12,13], the lightest supersymmetric particle (LSP) is a promising candidate for the dark matter in the universe [14,15].
Of all the implementations of supersymmetry, there is one which has stood out throughout, in phenomenological and experimental studies: the constrained Minimal Supersymmetric Standard Model (cMSSM) [16,17]. As we show in this paper, even though it is a simple model with a great set of benefits over the SM, it has come under severe experimental pressure. To explain, and for the first time to quantify, this pressure is the aim of this paper.
The earliest phenomenological work on supersymmetry was performed almost 40 years ago [12,13,18-20] in the
framework of global supersymmetry. Due to the mass sum rule [21], simple models of breaking global supersymmetry are not viable. One set of realistic models employs local supersymmetry, or supergravity [16,22-24], on which we focus here. Another possible solution to the mass sum rule problem is given by the widely studied models of gauge-mediated supersymmetry breaking [25-27]. The cMSSM is an effective parametrisation motivated by realistic supergravity models. Since we wish to critically investigate the viability of the cMSSM in detail here, it is perhaps in order to briefly recount some of its history.
The cMSSM as we know it was first employed in [28] and then actually called the cMSSM in [29]. However, it is based on a longer development in the construction of realistic supergravity models. A globally supersymmetric model with explicit soft supersymmetry breaking [30] added by hand was first introduced in [31]. It is formulated as an SU(5) gauge theory, but is otherwise already very similar to the cMSSM as we study it at colliders. It was, however, not motivated by a fundamental supergravity theory. A first attempt at a realistic model of spontaneously breaking local supersymmetry and communicating it via gravity mediation is given in [32]. At tree-level, it included only the soft-breaking gaugino masses; the soft scalar masses were generated radiatively. The soft-breaking masses for the scalars were first included in [33,34], where both the gauge symmetry and supersymmetry are broken spontaneously [24]. In [34] the first locally supersymmetric grand unified model was constructed. Connecting the breaking of SU(2)×U(1) to supersymmetry breaking was
first presented in [7]; this included for the first time the bi- and trilinear soft-breaking B and A terms. Radiative electroweak symmetry breaking was given in [8]. A systematic presentation of the low-energy effects of the spontaneous breaking of local supersymmetry, communicated to the observable sector via gravity mediation, is given in [35,36].
Thus all the ingredients of the cMSSM, the five parameters M0, M1/2, tan β, sgn(μ) and A0, were present and understood in early 1982. Here M0 and M1/2 are the common scalar and gaugino masses, respectively, and A0 is a common trilinear coupling, all defined at the grand unified scale. The ratio of the two Higgs vacuum expectation values is denoted by tan β, and μ is the superpotential Higgs mass parameter. Depending on the model of supersymmetry breaking, there were various relations between these parameters. By the time of [29], no obvious simple model of supersymmetry breaking had been found, and it was more appropriate to parametrise the possibilities for phenomenological studies in terms of these five parameters. In many papers the minimal supergravity model (mSUGRA) is deemed synonymous with the cMSSM. More precisely, however, mSUGRA contains an additional relation between A0 and M0, reducing the number of parameters [37].
The cMSSM is a very well-motivated, realistic and concise supersymmetric extension of the SM. Despite the small number of parameters, it can incorporate a wide range of phenomena. To find or to exclude this model has been the major quest for the majority of the experimental and phenomenological community working on supersymmetry over the last 25 years.
In a series of Fittino analyses [38-41] we have confronted the cMSSM with precision observables, including in particular the anomalous magnetic moment of the muon, (g − 2)μ, astrophysical observations like the direct dark matter detection bounds and the dark matter relic density, and collider constraints, in particular from the LHC experiments, including the searches for supersymmetric particles and the mass of the Higgs boson.
Amongst the previous work on understanding the cMSSM in terms of global analyses, there are both analyses applying frequentist statistics [42-62] and Bayesian statistics [63-74]. While the exact positions of the minima depend on the statistical interpretation, they agree on the overall scale of the preferred parameter region.
We found that the cMSSM does not provide a good description of all observables. In particular, our best fit predicted supersymmetric particle masses in the TeV range or above, i.e. possibly beyond the reach of current and future LHC searches. The precision observables like (g − 2)μ or the branching ratio of the Bs meson decay into muons, BR(Bs → μμ), were predicted very close to their SM values, and no signal for dark matter in direct and indirect searches was expected in experiments conducted at present or in the near future.
According to our analyses, the Higgs sector in the cMSSM consists of a light scalar Higgs boson with SM-like properties, and heavy scalar, pseudoscalar and charged Higgs bosons beyond the reach of current and future LHC searches. We also found that the LHC limits on supersymmetry and the large value of the light scalar Higgs mass drive the cMSSM into a region of parameter space with large fine-tuning; see also [75-79] on fine-tuning. We thus concluded that the cMSSM has become rather unattractive and dull, providing a bad description of experimental observables like (g − 2)μ and predicting grim prospects for a discovery of supersymmetric particles in the near future [80].
While our conclusions so far were based on the poor agreement of the best fit points with data, as expressed in a rather high ratio of the global χ² to the number of degrees of freedom, there has been no successful quantitative evaluation of this poor agreement in terms of a confidence level. Thus, the cMSSM could not be excluded in terms of frequentist statistics, due to the lack of appropriate methods or their numerical applicability.
Traditionally, a hypothesis test between two alternative hypotheses, based on a likelihood ratio, would be employed
for such a task. An example is the search for the Higgs boson, where the SM without a kinematically accessible Higgs boson, as the null hypothesis, is compared to the alternative hypothesis of the SM with an accessible Higgs boson of a given mass. However, in the case at hand, there is a significant problem with this approach: the SM does not have a dark matter candidate and thus is highly penalised by the observed cold dark matter content in the universe (it is actually excluded). Thus, the likelihood ratio test will always prefer the supersymmetric model with dark matter over the SM, no matter how bad the actual goodness-of-fit might be.
Thus, in the absence of a viable null hypothesis without supersymmetry, in this paper we address this question by calculating the p value from repeated fits to randomly generated pseudo-measurements. The idea to do this has existed before (see e.g. [81]), but due to the very high demand in CPU power, specific techniques for the re-interpretation of the parameter scan had to be developed to make such a result feasible for the first time. In addition to the previously employed observables, here we include the measured Higgs boson signal strengths in detail. We find that the observed p value depends sensitively on the precise choice of the set of observables.
The calculation of a p value allows us to quantitatively address the question of whether a phenomenologically nontrivial cMSSM can be distinguished from a cMSSM which, due to the decoupling nature of SUSY, effectively resembles the SM plus generic dark matter.
The paper is organised as follows. In Sect. 2 we describe the method of determining the p value from pseudo measurements. The set of experimental observables included in the fit is presented in Sect. 3. The results of various fits with different sets of Higgs observables are discussed in Sect. 4. Amongst the results presented here are also predictions for direct dark matter detection experiments, and a first study of the vacuum stability of the cMSSM in the full region preferred by the global fit. We conclude in Sect. 5.
2 Methods
In this section, we describe the statistical methods employed in the fit. These include the scan of the parameter space, as well as the determination of the p value. Both are nontrivial because of the need for O(10⁹) theoretically valid scan points in the cMSSM parameter space, where each point requires about 10-20 s of CPU time. Therefore, in this paper optimised scanning techniques are used, and a technique to re-interpret existing scans in pseudo experiments (or toy studies) is developed, specifically for the task of determining the frequentist p value of a SUSY model for the first time.
2.1 Performing and interpreting the scan of the parameter space
In this section, the specific Markov chain Monte Carlo (MCMC) method used in the scan, the figure-of-merit used for the sampling, and the (re-)interpretation of the cMSSM parameter points in the scan are explained.
2.1.1 Markov chain Monte Carlo method
The parameter space is sampled using an MCMC method based on the Metropolis-Hastings algorithm [82-84]. At each tested point in the parameter space the model predictions for all observables are calculated and compared to the measurements. The level of agreement between predictions and measurements is quantified by means of a total χ², which in this case corresponds to the Large Set of observables introduced in Sect. 3.1:¹
χ² = (O_meas − O_pred)^T cov^{−1} (O_meas − O_pred) + χ²_limits,  (1)
where O_meas is the vector of measurements, O_pred the corresponding vector of predictions, cov the covariance matrix including theoretical uncertainties, and χ²_limits the sum of all χ² contributions from the employed limits, i.e. the quantities for which bounds, but no measurements, are available. Off-diagonal elements in the covariance matrix only occur in the sector of Higgs rate and mass measurements, as explained below.
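For concreteness, Eq. (1) can be written compactly in code. The following is our own minimal sketch (not part of Fittino), with placeholder argument names:

import numpy as np

def total_chi2(o_meas, o_pred, cov, chi2_limits=0.0):
    """Total chi-square of Eq. (1): correlated Gaussian part plus limit terms.

    o_meas, o_pred : 1D arrays of measurements and model predictions
    cov            : covariance matrix including theoretical uncertainties
    chi2_limits    : sum of chi-square contributions from one-sided limits
    """
    d = np.asarray(o_meas, dtype=float) - np.asarray(o_pred, dtype=float)
    return float(d @ np.linalg.solve(cov, d)) + chi2_limits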
After the calculation of the total χ² at the nth point in the Markov chain, a new point is determined by throwing a random number according to a probability density called the proposal density. We use Gaussian proposal densities, where for each parameter the mean is given by the current parameter value and the width is regularly adjusted as discussed below.
The χ² for the (n+1)th point is then calculated and compared to the χ² for the nth point. If the new point shows better or equal agreement between predictions and measurements,

χ²_{n+1} ≤ χ²_n,  (2)

it is accepted. If the (n+1)th point shows worse agreement between the predictions and measurements, it is accepted with probability

ρ = exp( −(χ²_{n+1} − χ²_n)/2 ),  (3)

and rejected with probability 1 − ρ. If the (n+1)th point is rejected, new parameter values are generated based on the nth point again. If the (n+1)th point is accepted, new parameter values are generated based on the (n+1)th point. Since the primary goal of using the MCMC method is the accurate determination of the best fit point and a high sampling density around that point in the region of Δχ² ≤ 6, while allowing the MCMC method to escape from local minima in the χ² landscape, it is mandatory to neglect rejected points in the progression of the Markov chain. However, the rejected points may well be used in the frequentist interpretation of the Markov chain and for the determination of the p value. Thus, we store them as well in order to increase the overall sampling density.

¹ Since the allowed regions for all observable sets tested in Sect. 4 differ only marginally, it does not matter significantly which observable set is chosen for the initial scan, as long as it efficiently samples the relevant parameter space.
An automated optimisation procedure was employed to determine the width of the Gaussian proposal densities for each parameter for different target acceptance rates of proposed points. Since the frequentist interpretation of the Markov chain does not make direct use of the point density, we can employ chains whose proposal densities vary during their evolution and in different regions of the parameter space. We update the widths of the proposal densities based on the variance of the last O(500) accepted points in the Markov chain. Also, different ratios of the proposal densities to the variance of accepted points are used for chains started in different parts of the parameter space, in order to optimally scan the widely different topologies of the χ² surface at different SUSY mass scales. These differences stem from the varying degree of correlations between different parameters required to stay in agreement with the data, and from non-linearities between the parameters and observables. They are also the main reason for the excessive number of points needed for a typical SUSY scan, as compared to better-behaved parameter spaces. It has been ensured that a sufficient number of statistically independent chains yield similar scan results over the full parameter space. For the final interpretation, all statistically independent chains are added together.
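For illustration, the sampling step described above can be sketched as follows. This is our own simplified rendering, not the Fittino implementation; chi2_fn stands for the full χ² of Eq. (1), and the width adaptation from the spread of recent accepted points is heavily simplified:

import numpy as np

rng = np.random.default_rng(42)

def run_chain(chi2_fn, theta0, widths, n_steps=10_000, adapt_every=500):
    """Metropolis-Hastings chain with Gaussian proposals, Eqs. (2) and (3).

    All tested points (accepted and rejected) are stored, since the
    frequentist interpretation does not use the chain density."""
    theta = np.array(theta0, dtype=float)
    chi2 = chi2_fn(theta)
    accepted = [theta.copy()]
    all_points = [(theta.copy(), chi2)]
    for step in range(1, n_steps + 1):
        prop = rng.normal(theta, widths)              # Gaussian proposal density
        chi2_prop = chi2_fn(prop)
        all_points.append((prop.copy(), chi2_prop))   # rejected points are kept too
        # Eq. (2): accept on improvement; Eq. (3): else with prob. exp(-dchi2/2)
        if chi2_prop <= chi2 or rng.random() < np.exp(-(chi2_prop - chi2) / 2.0):
            theta, chi2 = prop, chi2_prop
            accepted.append(theta.copy())
        if step % adapt_every == 0 and len(accepted) > 10:
            # crude adaptation of the proposal widths from recent accepted points
            widths = np.std(np.array(accepted[-500:]), axis=0) + 1e-12
    return all_points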
A total of 850 million valid points have been tested. The point with the lowest overall χ² = χ²_min is identified as the best fit point.
2.1.2 Interpretation of Markov chain results
In addition to the determination of the best fit point, it is also of interest to set limits in the cMSSM parameter space. For the frequentist interpretation the measure

Δχ² = χ² − χ²_min  (4)

is used to determine the regions of the parameter space which are excluded at various confidence levels. For this study the one-dimensional 1σ region (Δχ² < 1) and the two-dimensional 2σ region (Δχ² < 6) are used. In a Gaussian model, where all observables depend linearly on all parameters and where all uncertainties are Gaussian, these would correspond to the one-dimensional 68 % and two-dimensional 95 % confidence level (CL) regions. The level of observed deviation from this pure Gaussian approximation will be discussed together with the results of the toy fits, which are an ideal tool to resolve these differences.
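The Gaussian thresholds quoted above can be verified directly from the χ² quantiles, e.g. with SciPy (a quick sketch of ours):

from scipy.stats import chi2

print(chi2.ppf(0.68, df=1))   # ~0.99: 1D 68 % CL corresponds to delta-chi2 < 1
print(chi2.ppf(0.95, df=2))   # ~5.99: 2D 95 % CL corresponds to delta-chi2 < 6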
2.2 Determining the p value
In all previous instances of SUSY fits, no true frequentist p value for the fit was calculated. Instead, usually the χ²_min/ndf is quoted, from which, for a linear model with Gaussian observables, a p value can easily be derived. It has been observed that the χ²_min/ndf of constrained SUSY model fits such as the cMSSM has been degrading while the direct limits on the sparticle mass scales from the LHC got stronger (see e.g. [38-40]). Thus, there is the widespread opinion that the cMSSM is obsolete. However, as the cMSSM is a highly non-linear model and the observable set includes non-Gaussian observables, such as one-sided limits and the ATLAS 0-lepton search, it is not obvious that the Gaussian χ²-distribution for ndf degrees of freedom can be used to calculate an accurate p value for this model. Hence the main question in this paper is: what, exactly, is the confidence level of the statistical exclusion of the cMSSM? To answer this, a machinery to re-interpret the scan described above had to be developed, since re-scanning the parameter space for each individual toy observable set is computationally prohibitive at present. Because during this re-interpretation of the original scan a multitude of different cMSSM points might be chosen as optima of the toy fits, such a procedure sets high demands on the scan density also over the entire approximate 2-3σ region around the observed optimum.
2.2.1 General procedure
After determining the parameter values that provide the best combined description of the observables suitable to constrain the model, the question of the p value for that model remains: under the assumption that the tested model at the best fit point is the true description of nature, what is the probability p of obtaining a minimum χ² as bad as, or worse than, the actually observed minimum χ²?
For a set of observables with Gaussian uncertainties, this probability is calculated by means of the χ²-distribution and is given by the regularised Gamma function,

p = P( n/2 , χ²/2 ).

Here, n is the number of degrees of freedom of the fit, which equals the number of observables minus the number of free parameters of the model.
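In code, this reference p value for the Gaussian case is a one-liner; the sketch below is ours and uses SciPy's χ² survival function, which equals the regularised upper incomplete Gamma function:

from scipy.stats import chi2

def gaussian_p_value(chi2_min, n_obs, n_params):
    """Naive p value, valid only for a linear model with Gaussian observables."""
    ndf = n_obs - n_params
    # chi2.sf(x, ndf) == scipy.special.gammaincc(ndf / 2, x / 2)
    return chi2.sf(chi2_min, df=ndf)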
In some cases, however, this function does not describe the true distribution of the χ². Reasons for a deviation include non-linear relations between parameters and observables (as evident in the cMSSM, where a strong variation of the observables with the parameters is observed at low parameter scales, while complete decoupling of the observables from the parameters occurs at high scales), non-Gaussian uncertainties, as well as one-sided constraints that in addition might constrain the model only very weakly. Also, counting the number of relevant observables n might be non-trivial: for instance, after the discovery of the Higgs boson at the LHC, the limits on the different Higgs boson masses set by the LEP experiments are expected to contribute only very weakly (if at all) to the total χ² in a fit of the cMSSM. This is because the measurements at the LHC indicate that the lightest Higgs boson has a mass significantly higher than the lower mass limit set by LEP. In such a situation, it is not clear how much such a one-sided limit is actually expected to contribute to the distribution of χ² values.
For the above reasons, the accurate determination of the p values for the fits presented in this paper requires the consideration of pseudo experiments, or toy observable sets. Under the assumption that a particular best fit point provides an accurate description of nature, pseudo measurements are generated for each observable. Each pseudo measurement is based on the best fit prediction for the respective observable, taking into account both the absolute uncertainty at the best fit point and the shape of the underlying probability density function. For each unique set of pseudo measurements, the fit is repeated, and a new best fit point is determined with a new minimum χ²_BF,i.
This procedure is repeated n_toy times, and the number n_p of fits using pseudo measurements with χ²_BF,i ≥ χ²_BF is determined. The p value is then given by the fraction

p = n_p / n_toy.  (5)

This procedure requires a considerable amount of CPU time; the number of sets of pseudo measurements is thus limited and the resulting p value is subject to a statistical uncertainty. Given the true p value,

p = lim_{n_toy → ∞} n_p / n_toy,  (6)

n_p varies according to a binomial distribution B(n_p | p, n_toy), which in a rough approximation gives an uncertainty of

Δp = √( p (1 − p) / n_toy )  (7)

on the p value.

2.2.2 Generation of pseudo measurements for the cMSSM

In the present fit of the cMSSM a few different classes of observables have been used, and the pseudo experiments have been generated accordingly. In this work we distinguish the following smearing procedures for the observables:

(a) For a Gaussian observable with best fit prediction O_i^BF and an absolute uncertainty σ_i^BF at the best fit point, pseudo measurements have been generated by throwing a random number according to the probability density function

P(O_i^toy) = 1/(√(2π) σ_i^BF) exp( −(O_i^toy − O_i^BF)² / (2 (σ_i^BF)²) ).  (8)

(b) For the measurements of the Higgs signal strengths and the Higgs mass, the smearing has been performed by means of the covariance matrix at the best fit point. The covariance matrix is obtained from [85].

(c) For the ATLAS 0-lepton search [86] (see Sect. 3.1), the number of observed events has been smeared according to a Poisson distribution. The expectation value of the Poisson distribution has been generated for each toy by taking into account the nominal value and the systematic uncertainty on both the background and the signal expectation at the best fit point. The systematic uncertainties are assumed to be Gaussian.

(d) The best fit point for each set of observables features a lightest Higgs boson with a mass well above the LEP limit. Assuming the best fit point, the number of expected Higgs events in the LEP searches is therefore negligible and has been ignored. For this reason, the LEP limit has been smeared directly assuming a Gaussian probability density function.

2.2.3 Rerunning the fit

Due to the enormous amount of CPU time needed to accurately sample the parameter space of the cMSSM and calculate a set of predictions at each point, a complete resampling for each set of pseudo measurements is prohibitive. For this reason the pseudo fits have been performed using only the points included in the original Markov chains, for which all necessary predictions have been calculated in the original scan.

In addition, an upper cut on the Δχ² (calculated with respect to the real measurements) of Δχ² ≤ 15 has been applied to further reduce the CPU time consumption. The cut is motivated by the fact that, in order to find a toy best fit point that far from the original minimum, the outcome of the pseudo measurements would have to be extremely unlikely. While this may potentially prevent a pseudo fit from finding its true minimum, tests with completely Gaussian toy models have shown that the resulting χ² distributions perfectly match the expected χ² distribution for all tested numbers of degrees of freedom.

As will be shown in Sect. 4.3, in general we observe a trend towards fewer pseudo-data fits with high χ² values in the upper tail of the distribution than expected from the naive Gaussian case. This further justifies that the Δχ² ≤ 15 cut cannot be expected to bias the full result of the pseudo-data fits.

Nevertheless, the p value calculated using the described procedure may be regarded as conservative, in the sense that the true p value may very well be even lower: if, for a particular toy fit, the true best fit point is not included in the original Markov chain, the minimum χ² for that pseudo fit will be larger than the true minimum for that pseudo fit, which artificially increases the p value. Hence, if the p value is found below a certain threshold of e.g. 5 %, it is not expected that the true p value for infinite statistics would be found at larger values.
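Putting Eqs. (5)-(8) and the re-interpretation of the stored scan points together, the toy machinery can be sketched as follows. This is our own simplified rendering: only the Gaussian smearing of class (a) is shown, whereas the actual analysis also uses the correlated, Poisson-type and limit-type smearings described above:

import numpy as np

rng = np.random.default_rng(1)

def toy_p_value(scan_preds, cov, o_best, chi2_bf, n_toy=1000):
    """Toy-based p value with binomial uncertainty, Eqs. (5)-(7).

    scan_preds : (n_points, n_obs) predictions stored in the original scan,
                 pre-filtered to delta-chi2 <= 15 w.r.t. the real data
    o_best     : best fit predictions, taken as the 'true' observable values
    chi2_bf    : minimum chi-square of the fit to the real measurements
    """
    cov_inv = np.linalg.inv(cov)
    sigma = np.sqrt(np.diag(cov))
    n_p = 0
    for _ in range(n_toy):
        o_toy = rng.normal(o_best, sigma)           # class (a) smearing, Eq. (8)
        d = scan_preds - o_toy                      # 'refit' = minimum over stored points
        chi2_toys = np.einsum('ij,jk,ik->i', d, cov_inv, d)
        if chi2_toys.min() >= chi2_bf:              # toy fit as bad as the data or worse
            n_p += 1
    p = n_p / n_toy                                 # Eq. (5)
    return p, np.sqrt(p * (1.0 - p) / n_toy)        # uncertainty, Eq. (7)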
3 Observables
The parameters of the cMSSM are constrained by precision observables, like (g − 2)μ, astrophysical observations, including in particular direct dark matter detection limits and the dark matter relic density, by collider searches for supersymmetric particles and by the properties of the Higgs boson. In this section we describe the observables that enter our fits. The measurements are given in Sect. 3.1, while the codes used to obtain the corresponding model predictions are described in Sect. 3.2.
3.1 Measurements and exclusion limits
We employ the same set of precision observables as in our previous analysis, Ref. [40], but with updated measurements as listed in Table 1. They include the anomalous magnetic moment of the muon, aμ = (g − 2)μ/2, the effective weak mixing angle sin²θ_eff, the masses of the top quark and the W boson, the Bs oscillation frequency Δms, as well as the branching ratios B(Bs → μμ), B(B → τν), and B(b → sγ). The Standard Model parameters that have been fixed are collected in Table 2. Note that the top quark mass mt is used both as an observable and as a floating parameter in the fit, since it has a significant correlation especially with the light Higgs boson mass.
Dark matter is provided by the lightest supersymmetric particle, which we require to be the neutralino, and we require the dark matter to be solely made up of it. We use the dark matter relic density Ωh² = 0.1187 ± 0.0017 as obtained by the Planck collaboration [95] and bounds on the spin-independent WIMP-nucleon scattering cross section as measured by the LUX experiment [96].
Supersymmetric particles have been searched for at the LHC in a plethora of final states. In the cMSSM parameter region preferred by the precision observables listed in Table 1, the LHC jets plus missing transverse momentum searches provide the strongest constraints. We thus implement the ATLAS analysis of Ref. [86] in our fit, as described
Table 1 Precision observables used in the fit

aμ − aμ^SM     (28.7 ± 8.0) × 10⁻¹⁰           [87,88]
sin²θ_eff      0.23113 ± 0.00021              [89]
mt             (173.34 ± 0.27 ± 0.71) GeV     [90]
mW             (80.385 ± 0.015) GeV           [91]
Δms            (17.719 ± 0.036 ± 0.023) ps⁻¹  [92]
B(Bs → μμ)     (2.90 ± 0.70) × 10⁻⁹           [93]
B(b → sγ)      (3.43 ± 0.21 ± 0.07) × 10⁻⁴    [94]
B(B → τν)      (1.05 ± 0.25) × 10⁻⁴           [92]
Table 2 Standard Model parameters that have been fixed. Note that mb and mc are MS-bar masses at their respective mass scales, while for all other particles on-shell masses are used

1/α_em   128.952                   [88]
G_F      1.1663787 × 10⁻⁵ GeV⁻²    [92]
α_s      0.1184                    [92]
mZ       91.1876 GeV               [92]
mb       4.18 GeV                  [92]
mτ       1.77682 GeV               [92]
mc       1.275 GeV                 [92]
in some detail in [40]. Furthermore, we enforce the LEP bound on the chargino mass, m_χ̃1± > 103.5 GeV [97]. The constraints from Higgs searches at LEP are incorporated through the χ² extension provided by HiggsBounds 4.1.1 [98-101], which also provides limits on additional heavier Higgs bosons. The signal rate and mass measurements of the experimentally established Higgs boson at 125 GeV are included using the program HiggsSignals 1.2.0 [85] (see also Ref. [102] and references therein). HiggsSignals is a general tool which allows the test of any model with Higgs-like particles against the measurements at the LHC and the Tevatron. Therefore, its default observable set of Higgs rate measurements is very extensive. As is discussed in detail in Sect. 4.3, this provides maximal flexibility and sensitivity on the constraints of the allowed parameter ranges, but is not necessarily ideally tailored for goodness-of-fit tests. There, it is important to combine observables which the model cannot, on theoretical grounds, vary independently. In order to take this effect into account, in our analysis we compare five different Higgs observable sets:
Set 1 (large observable set) This set is the default observable set provided with HiggsSignals 1.2.0, containing in total 80 signal rate measurements obtained from the LHC and Tevatron experiments. It contains all available subchannel/category measurements in the various Higgs decay modes investigated by the experiments. Hence, while this set is most appropriate for resolving potential deviations in the
Table 3 Higgs boson mass and rate observables of Set 2 (medium observable set)

Experiment, channel         Observed signal strength   Observed mh
ATLAS, h → WW [104]         0.99 +0.31 −0.28
ATLAS, h → ZZ → 4ℓ [104]    1.43 +0.40 −0.35           (124.3 ± 1.1) GeV
ATLAS, h → γγ [104]         1.55 +0.33 −0.28           (126.8 ± 0.9) GeV
ATLAS, h → ττ [107]         1.44 +0.51 −0.43
ATLAS, Vh → V(bb) [108]     0.17 +0.67 −0.63
CMS, h → WW [109]           0.72 +0.20 −0.18
CMS, h → ZZ → 4ℓ [105]      0.93 +0.29 −0.25           (125.6 ± 0.6) GeV
CMS, h → γγ [106]           0.77 +0.30 −0.27           (125.4 ± 1.1) GeV
CMS, h → ττ [110]           0.78 +0.27 −0.27
CMS, Vh → V(bb) [110]       1.00 +0.50 −0.50
Higgs boson coupling structure, it comes with a high level of redundancy. Detailed information on the signal rate observables in this set can be found in Ref. [102]. Furthermore, the set contains four mass measurements, performed by ATLAS and CMS in the h → γγ and h → ZZ(*) → 4ℓ channels. It is used as a cross-check for the derived observable sets described below.
Set 2 (medium observable set) This set contains ten inclusive rate measurements, performed in the channels h → γγ, h → ZZ, h → WW, Vh → V(bb) (V = W, Z), and h → ττ by ATLAS and CMS, listed in Table 3. As in Set 1, four Higgs mass measurements are included.
Set 3 (small observable set) In this set, the h → γγ, h → ZZ and h → WW channels are combined to a measurement of a universal signal rate, denoted h → γγ, ZZ, WW in the following. Together with the Vh → V(bb) and h → ττ measurements from Set 2, we have in total six rate measurements. Furthermore, in each LHC experiment the Higgs mass measurements are combined. The observables are listed in Table 4.
Table 4 Higgs boson mass and rate observables of Set 3 (small observable set)

Experiment, channel              Observed signal strength   Observed mh
ATLAS, h → WW, ZZ, γγ [104]      1.33 +0.21 −0.18           (125.5 ± 0.8) GeV
ATLAS, h → ττ [107]              1.44 +0.51 −0.43
ATLAS, Vh → V(bb) [108]          0.17 +0.67 −0.63
CMS, h → WW, ZZ, γγ (a)          0.80 +0.16 −0.15           (125.7 ± 0.6) GeV
CMS, h → ττ [110]                0.78 +0.27 −0.27
CMS, Vh → V(bb) [110]            1.00 +0.50 −0.50

(a) The combination of the CMS h → WW, ZZ, γγ channels has been performed with HiggsSignals using results from Refs. [105,109,111]. The combined mass measurement for CMS is taken from Ref. [106]
Table 5 Higgs boson mass and rate observables of Set 4 (combined observable set)

Experiment, channel          Observed signal strength   Observed mh
ATLAS+CMS, h → WW, ZZ        0.94 +0.17 −0.16           (125.73 ± 0.45) GeV
ATLAS+CMS, h → γγ            1.16 +0.22 −0.20
ATLAS+CMS, h → ττ            1.11 +0.24 −0.23
ATLAS+CMS, Vh, tth → bb      0.69 +0.37 −0.37
Set 4 (combined observable set) In this set we further reduce the number of Higgs observables by combining the ATLAS and CMS measurements for the Higgs decays to electroweak vector bosons (V = W, Z), photons, b-quarks and τ-leptons. These combinations are performed by fitting a universal rate scale factor to the relevant observables from within Set 1. Furthermore, we perform a combined fit to the Higgs mass observables of Set 1, yielding mh = (125.73 ± 0.45) GeV.² The observables of this set are listed in Table 5.
Set 5 (Higgs mass only) Here, we do not use any Higgs signal rate measurements. We only use one combined Higgs mass observable, which in our case is mh = (125.73 ± 0.45) GeV (see above).
3.2 Model predictions
We use the following public codes to calculate the predictions for the relevant observables. The spectrum is calculated with SPheno 3.2.4 [112,113]. First the two-loop RGEs [114] are used to obtain the parameters at the scale Q = √(m_t̃1 m_t̃2). At this scale the complete one-loop corrections to all masses of supersymmetric particles are calculated to obtain the on-shell masses from the underlying DR-bar parameters [115]. A measure of the theory uncertainty due to the missing higher-order corrections is obtained by varying the scale Q between Q/2 and 2Q. We find that the uncertainty on the masses of the strongly interacting particles is about 1-2 %, whereas for the electroweakly interacting particles it is of the order of a few per mille [113].
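The scale-variation estimate can be illustrated schematically; this is our own sketch, not the SPheno implementation, and mass_at_scale is a hypothetical stand-in for a spectrum-generator call at a rescaled matching scale:

def scale_variation_uncertainty(mass_at_scale):
    """Estimate the missing-higher-order uncertainty of an on-shell mass by
    varying the matching scale Q between Q/2 and 2Q and taking the spread."""
    m_lo, m_central, m_hi = (mass_at_scale(f) for f in (0.5, 1.0, 2.0))
    return max(abs(m_lo - m_central), abs(m_hi - m_central))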
Properties of the Higgs bosons as well as aμ, Δms, sin²θ_eff and mW are obtained with FeynHiggs 2.10.1 [116], which compared to FeynHiggs 2.9.5 and earlier versions contains a significantly improved calculation of the Higgs boson mass [117] for the case of a heavy SUSY spectrum.
² Note that the computing time needed for creating the pseudo-data fits presented in Sect. 4.3 means that the fits had to be started significantly before the combined measurement of the Higgs boson mass m_h^comb = (125.09 ± 0.21) GeV by the ATLAS and CMS collaborations was published [103]. We therefore performed our own combination, based on earlier results as published in [104-106]. Given the applied theory uncertainty on the Higgs mass prediction of Δm_h^theo = 3 GeV, a shift of 0.64 GeV in the Higgs boson mass has a very small effect of Δχ² ≈ O(0.64²/3²) = 0.046, which is negligible in terms of the overall conclusions of this paper.
This improves the theoretical uncertainty on the Higgs mass calculation from about 3-4 GeV in cMSSM scenarios [118-120] to about 2 GeV [48].
The B-physics branching ratios are calculated by SuperIso 3.3 [121]. We have checked that the results for the observables that are also implemented in SPheno agree within the theoretical uncertainties; see also [122] for a comparison with other codes.
For the astrophysical observables we use MicrOMEGAs 3.6.9 [123,124] to calculate the dark matter relic density, and DarkSUSY 5.0.5 [125] via the interface program AstroFit [126] for the direct detection cross section.
For the calculation of the expected number of signal events in the ATLAS jets plus missing transverse momentum search, we use the Monte Carlo event generator Herwig++ [127] and a carefully tuned and validated version of the fast parametric detector simulation Delphes [128]. For tan β = 10 and A0 = 0, a fine grid has been produced in M0 and M1/2. In addition, several smaller, coarse grids have been defined in A0 and tan β for fixed values of M0 and M1/2 along the expected ATLAS exclusion curve, to correct the signal expectation for arbitrary values of A0 and tan β. We assume a systematic uncertainty of 10 % on the expected number of signal events. In Fig. 1 we compare the expected and observed limit as published by the ATLAS collaboration to the result of our emulation. The figure shows that the procedure works reasonably well and is able to reproduce with sufficient precision the expected exclusion curve, including the ±1σ variations.
We reweight the events depending on their production channel according to NLO cross sections obtained from Prospino [129-131]. Renormalisation and factorisation scales have been chosen such that the NLO+NLL resummed cross section normalisations [132-136] are reproduced for squark and gluino production.
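Schematically, this reweighting assigns each generated event a weight such that its production channel integrates to the NLO(+NLL) cross section; the sketch below is ours, with hypothetical channel labels and placeholder numbers (the actual cross sections are taken from the Prospino and NLO+NLL calculations cited above):

def channel_weights(sigma_nlo_pb, n_generated):
    """Per-event weights: each channel (e.g. gluino-gluino, squark-gluino)
    then integrates to its NLO(+NLL) cross section for a given luminosity.

    sigma_nlo_pb : dict, channel -> NLO(+NLL) cross section in pb
    n_generated  : dict, channel -> number of generated events
    """
    return {ch: sigma_nlo_pb[ch] / n_generated[ch] for ch in sigma_nlo_pb}

# example with placeholder numbers (weights in pb per event):
w = channel_weights({"go-go": 0.05, "sq-go": 0.12}, {"go-go": 10_000, "sq-go": 10_000})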
Fig. 1 Comparison of the emulation of the ATLAS 0-lepton search with the published ATLAS result. In red dots we show the ATLAS median expected limit; the red lines denote the corresponding ±1σ uncertainty. The central black line is the result of the Fittino implementation described in the text. The upper and lower black curves are the corresponding ±1σ uncertainty. The yellow dots are the observed ATLAS limit
Table 6 Theoretical uncertainties of the precision observables used in the fit

aμ − aμ^SM    7 %        sin²θ_eff     0.05 %
mt            1 GeV      mW            0.01 %
Δms           24 %       B(Bs → μμ)    26 %
B(b → sγ)     14 %       B(B → τν)     20 %
For all predictions we take theoretical uncertainties into account, most of which are parameter dependent and re-evaluated at every point in the MCMC. For the precision observables, they are given in Table 6. For the dark matter relic density we assume a theoretical uncertainty of 10 %; for the neutralino-nucleon cross section entering the direct detection limits we assign a 50 % uncertainty (see Ref. [40] for a discussion of this uncertainty arising from the pion-nucleon form factors); for the Higgs boson mass prediction we use 2.4 %; and for the Higgs rates we use the uncertainties given by the LHC Higgs Cross Section Working Group [137].
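As an illustration of how such parameter-dependent theoretical uncertainties enter the covariance of Eq. (1), a minimal sketch of ours (diagonal only, whereas the actual fit includes correlations in the Higgs sector):

import numpy as np

def build_cov(o_pred, sigma_exp, rel_theo):
    """Covariance with experimental and theoretical uncertainties added in
    quadrature; the theory part is re-evaluated from the predictions at
    every scan point because it is parameter dependent."""
    sigma_theo = np.abs(np.asarray(o_pred)) * np.asarray(rel_theo)
    return np.diag(np.asarray(sigma_exp) ** 2 + sigma_theo ** 2)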
One common challenge for computing codes specifically developed for SUSY predictions is that they do not always exactly reproduce the most precise predictions of the SM values in the decoupling limit. The reason is that specific loop corrections or renormalisation conventions are not always numerically implemented in the same way, or that SM loop contributions might be missing from the SUSY calculation. In most cases these differences are within the theory uncertainty, or can be used to estimate it. One such case of interest for this fit occurs in the program FeynHiggs, which does not exactly reproduce the SM Higgs decoupling limit [138] as used by the LHC Higgs Cross Section Working Group [137]. To compensate for this, we rescale the Higgs production cross sections and partial widths of the SM-like Higgs boson. We determine the scaling factors by the following procedure [138]: we fix tan β = 10. We set all mass parameters of the MSSM (including the parameters μ and m_A of the Higgs sector) to a common value m_SUSY. We require all sfermion mixing parameters A_f to be equal. We vary them by varying the ratio X_t/m_SUSY, where X_t = A_t − μ/tan β. The mass of the Higgs boson becomes maximal for values of this ratio of about ±2. We scan the ratio between these values. In this way we find for each m_SUSY two parameter points which give a Higgs boson mass of about 125.5 GeV. One of these has negative X_t, the other positive X_t. We then determine the scaling factor by requiring that for m_SUSY = 4 TeV and negative X_t the production cross sections and partial widths of the SM-like Higgs boson are the same as for a SM Higgs boson with the same mass of 125.5 GeV. We then determine
the uncertainty on this scale factor by comparing the result with the scale factors which we would have obtained by choosing m_SUSY = 3 TeV, m_SUSY = 5 TeV or a positive sign of X_t. This additional uncertainty is taken into account in the χ² computation. By this procedure we derive scale factors between 0.95 and 1.23 with uncertainties of less than 0.6 %.
4 Results
In Sect. 4.1, we show results based on the simple and commonly used profile likelihood technique, which all frequentist fits, including ours, have hitherto employed. In Sect. 4.2 a full scan of the allowed parameter space for a stable vacuum is shown, before moving on to novel results from toy fits in Sect. 4.3.
4.1 Profile likelihood based results
In this section we describe the preferred parameter space region of the cMSSM and its physical properties. Since a truly complete frequentist determination of a confidence region would require performing toy fits not only around the best fit point (as described in Sects. 2.2 and 4.3) but around every cMSSM parameter point in the scan, we rely here on the profile likelihood technique. This means that we show various projections of the 1D-1σ/1D-2σ/2D-2σ regions, defined as the regions which satisfy Δχ² < 1/4/5.99, respectively. In this context, profile likelihood means that out of the five physical parameters in the scan, the parameters not shown in a plot are profiled, i.e. a scan over the hidden dimensions is performed and for each selected visible parameter point the lowest χ² value in the hidden dimensions is chosen. Obviously, no systematic nuisance parameters are involved, since all systematic uncertainties are given by relative or absolute Gaussian uncertainties, as discussed in Sect. 2. One should keep in mind that this correspondence is only exact when the observed distribution of χ² values in a set of toy fits is truly χ²-distributed, which, as discussed in Sect. 4.3, is not the case here. Nevertheless, since the exact method is not computationally feasible, this standard method, as used in all previous frequentist results in the literature, gives a reasonable estimate of the allowed parameter space. In Sect. 4.3 more comparisons between the sets of toy fit results and the profile likelihood result will be discussed.
Note that for the discussion in this and the next section, we treat the region around the best fit point as allowed, even though, depending on the observable set, an exclusion of the complete model will be discussed in Sect. 4.3.
All five Higgs input parameterisations introduced in Sect. 3 lead to very similar results when interpreted with the profile likelihood technique. As an example, Fig. 2a-c show the (M0, M1/2)-projection of the best fit point, the 1D-1σ and the 2D-2σ regions for the small, the large and the medium observable set. Thus, in the remainder of this section, we concentrate on results from the medium observable set.
The (M0, M1/2)-projection in Fig. 2b shows two disjoint regions. While in the region of the global χ² minimum values of less than 900 GeV for M0 and less than 1300 GeV for M1/2 are preferred at 1σ, in the region of the secondary minimum values of more than 7900 GeV for M0 and more than 2100 GeV for M1/2 are favored (see also Table 7).
The different regions are characterised by different dominant dark matter annihilation mechanisms, as shown in Fig. 3. Here we define the different regions similarly to Ref. [50] by the following kinematical conditions, which we have adapted such that each point of the 2D-2σ region belongs to at least one region:
- τ̃1 coannihilation: m_τ̃1/m_χ̃01 − 1 < 0.15
- t̃1 coannihilation: m_t̃1/m_χ̃01 − 1 < 0.2
- χ̃1± coannihilation: m_χ̃1±/m_χ̃01 − 1 < 0.1
- A/H funnel: |m_A/(2 m_χ̃01) − 1| < 0.2
- focus point region: |μ/m_χ̃01 − 1| < 0.4.
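These kinematic conditions translate directly into code; the following classification sketch is ours, with hypothetical variable names for the masses and μ:

def annihilation_regions(m_stau1, m_stop1, m_chargino1, m_A, mu, m_lsp):
    """Classify a parameter point by the kinematic conditions listed above;
    a point may belong to several regions at once."""
    regions = []
    if m_stau1 / m_lsp - 1 < 0.15:
        regions.append("stau1 coannihilation")
    if m_stop1 / m_lsp - 1 < 0.2:
        regions.append("stop1 coannihilation")
    if m_chargino1 / m_lsp - 1 < 0.1:
        regions.append("chargino1 coannihilation")
    if abs(m_A / (2 * m_lsp) - 1) < 0.2:
        regions.append("A/H funnel")
    if abs(mu / m_lsp - 1) < 0.4:
        regions.append("focus point")
    return regions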
With these definitions, each parameter point of the preferred 2D-2σ region belongs either to the τ̃1 coannihilation or to the focus point region. Additionally, a subset of the points in the τ̃1 coannihilation region fulfils the criterion for the A/H funnel, while some points of the focus point region fulfil the criterion for χ̃1± coannihilation. No point in the preferred 2D-2σ region fulfils the criterion for t̃1 coannihilation, due to the relatively large stop masses.
At large M0 and low M1/2, a thin strip of our preferred 2D-2σ region is excluded at 95 % confidence level by ATLAS jets plus missing transverse momentum searches requiring exactly one lepton [139] or at most one lepton and b-jets [140] in the final state. Therefore an inclusion of these results in the fit is expected to remove this small part of the focus point region without changing any conclusions.
Also note that the parameter space for values of M0 larger than 10 TeV was not scanned, such that the preferred 2D-2σ focus point region is cut off at this value. Because the decoupling limit has already been reached at these large mass scales, we do not expect significant changes in the predicted observable values when going to larger values of M0. Hence we also expect the 1D-1σ region to extend to larger values of M0 than visible in Fig. 2b, due to a low sampling density directly at the 10 TeV boundary. For the same reason, this cut is not expected to influence the result of the p value calculation; if it did, it would only lead to an overestimation of the p value.
In the τ̃1 coannihilation region, negative values of A0 between −4000 and −1400 GeV and moderate values of tan β between 6 and 35 are preferred, while in the focus point region large positive values of A0 above 3400 GeV and large values of tan β above 36 are favored.
Fig. 2 1σ and 2σ contour plots for different projections and different observable sets. It can be seen that the preferred parameter region does not depend on the specific observable set
Table 7 Central values and 1σ uncertainties of the free model parameters at the global and the secondary minimum when using the medium observable set
Parameter     Global minimum           Secondary minimum
M0 (GeV)      387.4 +481.7 −151.2      8983.4 +990.6 −1039.6
M1/2 (GeV)    918.2 +297.7 −59.3       2701.1 +582.6 −560.5
A0 (GeV)      −2002.8 +541.5 −1992.9   5319.0 +2339.8 −1357.9
tan β         17.7 +16.8 −10.8         43.2 +5.5 −6.6
mt (GeV)      174.3 ± 1.1              172.1 ± 0.6
This can be seen in the (A0, tan β)-projection shown in Fig. 2d and in Table 7.
While the τ̃1 coannihilation region predicts a spin-independent dark matter-nucleon scattering cross section which is well below the limit set by the LUX experiment, this measurement has a significant impact on parts of the focus point region for lightest supersymmetric particle (LSP) masses between 200 GeV and 1 TeV, as shown in Fig. 4. The plot also shows how the additional uncertainty of 50 % on σ_SI shifts the implemented limit compared to the original limit set by LUX. It can be seen that future improvements by about two orders of magnitude in the sensitivity of direct detection experiments, as envisaged e.g. for the future XENON 1T experiment [141], could at least significantly reduce the remaining allowed parameter space, even taking the systematic uncertainty into account, or finally discover SUSY dark matter.
The predicted mass spectrum of the Higgs bosons and supersymmetric particles at the best fit point and in the one-dimensional 1σ and 2σ regions is shown in Fig. 5. Due to the relatively shallow minima of the fit, a wide range of masses is allowed at 2σ for most of the particles. The masses of the coloured superpartners are predicted to lie above 1.5 TeV, but due to the focus point region also masses above 10 TeV
Fig. 3 The 2σ region in the (M0, M1/2)-plane for the medium observable set. Regions with different dark matter annihilation mechanisms are indicated. The enclosed red areas denote the best fit regions shown in Fig. 2b
Fig. 6 Our predicted mass of the light Higgs boson, together with the 1σ and 2σ ranges. The LHC measurements used in the fit are shown as well. Note that the correlated theory uncertainty of Δm_h^theo = 3 GeV is not shown in the plot. The relative smallness of the 68 % CL region of the fitted mass of Δm_h^fit = 1.1 GeV is caused by constraints from other observables
Fig. 4 1D-1σ and 2D-2σ regions in the (m_χ̃01, σ_SI)-plane for the medium observable set. The LUX exclusion is shown in addition. The dashed line indicates a χ² contribution of 1.64, which corresponds to a 90 % CL upper limit. This line does not match the LUX exclusion line because we use a theory uncertainty of 50 % on σ_SI
Fig. 5 The Higgs and supersymmetric particle mass spectrum as predicted by our fit using the medium set of Higgs observables
are allowed at 1σ. Similarly, the heavy Higgs bosons have masses of about 1.5 TeV at the best fit point, but masses of about 6 TeV are preferred in the focus point region. The sleptons, neutralinos and charginos, on the other hand, can still have masses of a few hundred GeV.
Fig. 7 Predicted production cross sections at 14 TeV of the light Higgs boson relative to the SM value for a Higgs boson with the same mass
A lightest Higgs boson with a mass as measured by the ATLAS and CMS collaborations can easily be accommodated, as shown in Fig. 6. As required by the signal strength measurements, it is predicted to be SM-like. Figure 7 shows
Fig. 8 Our predicted signal strength values of the light Higgs boson relative to the SM value for a Higgs boson with the same mass. The measurements used in the fit are shown as well
a comparison of the Higgs production cross sections for different production mechanisms in pp collisions at a centre-of-mass energy of 14 TeV. These contain gluon fusion (ggh), vector boson fusion (qqh), associated production (Wh, Zh), and production in association with heavy quark flavours (bbh, tth). Compared to the SM prediction, the cMSSM predicts a slightly smaller cross section in all channels except the bbh channel. The predicted signal strengths in the different final states for pp collisions at a centre-of-mass energy of 8 TeV are also slightly smaller than the SM prediction, as shown in Fig. 8. The current precision of these measurements does, however, not allow for a discrimination between the SM and the cMSSM based solely on measurements of Higgs boson properties.
4.2 Vacuum stability
The scalar sector of the SM consists of just one complex Higgs doublet. In the cMSSM the scalar sector is dramatically expanded, with an extra complex Higgs doublet, as well as the sfermions ẽ_{L,R}, ν̃_L, ũ_{L,R}, d̃_{L,R} of the first family, and correspondingly of the second and third families. Thus there are 25 complex scalar fields. The corresponding complete scalar potential V_cMSSM is fixed by the five parameters (M0, M1/2, A0, tan β, sgn(μ)). The minimal potential energy of the vacuum is obtained for constant scalar field values everywhere. Given a fixed set of these cMSSM parameters, it is a computational question to determine the minimum of V_cMSSM. Ideally this minimum should lead to a Higgs vacuum expectation value such that SU(2)_L × U(1)_Y → U(1)_EM, as in the SM. However, it was observed early on in supersymmetric model building that, due to the extended scalar sector, some sfermions could obtain non-vanishing vacuum expectation values (vevs). There could be additional minima of the scalar potential which would break SU(3)_C and/or U(1)_EM, and thus colour and/or charge [7,142-144]. If these minima are energetically higher than the conventional electroweak-breaking minimum, the latter is considered stable. If any of these minima are lower than the conventional minimum, our universe could tunnel into them. If the tunneling time is longer than the age of the Universe of 13.8 gigayears [95], we denote our favored vacuum as metastable; otherwise it is unstable. However, this is only a rough categorisation: even if the tunneling time is shorter than the age of the universe, there is a finite probability that our vacuum will have survived until today. When computing this probability, we set a limit of 10 % survival probability. We wish to explore here the vacuum stability of the preferred parameter ranges of our fits.
The recent observation of the large Higgs boson mass requires, within the cMSSM, large stop masses and/or a large stop mass splitting. Together with the tuning of the lighter stau mass to favor the stau co-annihilation region (for the low-M0 fit region), this typically drives fits to favor a very large value of |A0| relative to |M0|, cf. Table 7. (For alternative non-cMSSM models with a modified stop sector, see for example [145-148].) This is exactly the region which typically suffers from the SM-like vacuum being only metastable, decaying to a charge- and/or colour-breaking (CCB) minimum of the potential [149-151].
For the purpose of a fit, in principle a likelihood value for the compatibility of the lifetime of the SM-like vacuum of a particular parameter point with the observed age of the Universe should be calculated and implemented as a one-sided limit. Unfortunately, the effort required to compute all the minima of the full scalar potential, to compute the decay rates for every point in the MCMC, and to implement this in the likelihood function is beyond present capabilities [149].
Effectively, whether or not a parameter point has an unacceptably short lifetime has a binary yes/no answer. Therefore, as a first step, and in the light of the results of the possible exclusion of the cMSSM in Sect. 4.3, we overlay our fit result from Sect. 4.1 on a scan of the lifetime of the cMSSM vacuum over the complete parameter space.
The systematic analysis of whether a potential has minima which are deeper than the desired vacuum conguration has been automated in the program Vevacious [152]. When restricting the analysis to only a most likely relevant subset
123
Eur. Phys. J. C (2016) 76:96 Page 13 of 22 96
SUSY FITTINO
LSUSY
SUSY
5000
(GeV) M
2D 95% CL, stable1D 68% CL, stable2D 95% CL, metastable 1D 68% CL, metastable
4500
4000
3500
3000
2500
2000
1500
1000
500
0 0 2000 4000 6000 8000 10000
M
(GeV)
Fig. 9 Preferred 1D-1 and 2D-2 regions in M0-M1/2 for the medium observable set. The lled areas contain stable points, while the doted lines enclose points which are metastable but still might be very long-lived. The whole preferred 2D-2 focus point region leads to a stable vacuum, while the coannihilation region contains both stable and metastable points. There are no stable points in the preferred 1D-1 coannihilation region
co-annihilation region of the cMSSM investigated in [149] had SM-like vacuum lifetimes, which were all acceptably long compared to the observed age of the Universe.
The 1D1 best-t points in Sect. 4.1 where checked for undesired minima, allowing, but not requiring, simultaneously for all the following scalar elds to have non-zero, real VEVs: H0d, H0u,
L,
R,
SUSY FITTINO
LSUSY
SUSY
/ M
A
5
0
-10
L, bL, bR, tL, tR. The focus point region best-t point was found to have an absolutely stable SM-like minimum against tunneling to other minima, as no deeper minimum of VcMSSM was found at the 1-loop level. The SM-like vacuum of the best-t
-5
1D 68% CL, metastable
1 co-annihilation point was found to be metastable, with a deep CCB minimum with non-zero stau and stop VEVs. Furthermore there were unbounded-from-below directions with non-zero values for the -sneutrino scalar eld in combination with nonzero values for both staus, or both sbottoms, or both chiralities of both staus and sbottoms. This does not bode very well for the absolute best-t point of our cMSSM t. However, further effects must be considered.
The parameter space of the MSSM which has directions in field space where the tree-level potential is unbounded from below was systematically investigated in Ref. [157]. We confirmed the persistence of the runaway directions at one loop with Vevacious out to field values of the order of twenty times the renormalisation scale. This is about the limit of trustworthiness of a fixed-order, fixed-renormalisation-scale calculation [152]. However, this is not quite as alarming as it may seem. The appropriate renormalisation scale for very large field values should be of the order of the field values themselves, and for field values of the order of the GUT scale, the cMSSM soft SUSY-breaking mass-squared parameters are by definition positive. Thus the potential at the GUT scale is bounded from below, as none of the conditions for unbounded-from-below directions given in [157] can be satisfied without at least one negative mass-squared parameter.
Fig. 10 Preferred 1D 1σ and 2D 2σ regions in the tan β-A0/M0 plane for the medium observable set. The filled areas contain stable points, while the dotted lines enclose points which are metastable but still might be very long-lived. Points leading to a metastable vacuum usually have larger negative values of A0 relative to M0 when compared to points with a stable vacuum at the same tan β. The part of the 1D 1σ region which belongs to the focus point region fulfills A0/M0 ≈ 0 and is stable, while the part which belongs to the coannihilation region consists of points with relatively large negative values of A0/M0 and is metastable
Note that even the Standard Model suffers from a potential which is unbounded from below at a fixed renormalisation scale, though in the case of the SM this only appears at the one-loop level. Nevertheless, the RGEs show that the SM potential is bounded from below at high energies [159].
Furthermore, the calculation of a tunneling time out of a false minimum does not technically require that the Universe tunnels into a deeper minimum. In fact, the state which dominates the tunneling is always a vacuum bubble with a field configuration inside which classically evolves to the true vacuum after quantum tunneling [160,161]. Hence the lifetime of the SM-like vacuum of the stau co-annihilation best-fit point could be calculated at one loop even though the potential is unbounded from below at this level. The minimal energy barrier through which the SM-like vacuum of this point can tunnel is associated with a final state with non-zero values for the scalar fields H⁰_d, H⁰_u, τ̃_L, τ̃_R, and ν̃_τ. The lifetime was calculated by using the program Vevacious through the program CosmoTransitions [162] to be roughly e^{4000} ≈ 10^{1700} times the age of the Universe. Therefore, we consider the τ̃₁ co-annihilation region best-fit point as effectively stable.
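As a rough cross-check of the quoted order of magnitude (our own arithmetic, not from the original):

\[ e^{4000} = 10^{4000/\ln 10} \approx 10^{1737} , \]

consistent with the "roughly 10^{1700}" quoted above.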
As well as asking whether a metastable vacuum has a lifetime at least as long as the age of the Universe at zero temperature, one can also ask whether the false vacuum would survive a high-temperature period in the early Universe. Such a calculation has been incorporated into Vevacious [163]. In addition to the fact that the running of the Lagrangian parameters ensures that the potential is bounded from below at the GUT scale, the effects of non-zero temperature serve to bound the potential from below as well. In fact, the CCB minima of V_cMSSM evaluated at the parameters of the stau co-annihilation best-fit point are no longer deeper than the configuration with all zero VEVs, which is assumed to evolve into the SM-like minimum as the Universe cools, for temperatures above about 2300 GeV. The probability of tunneling into the CCB state, integrated over temperatures from 2300 GeV down to 0 GeV, was calculated to be roughly e^{-2000}. So while the non-zero-temperature decay width is about e^{-2000}/e^{-4000} = e^{+2000} times larger than the zero-temperature decay width, the SM-like vacuum, or its high-temperature precursor, of the stau co-annihilation best-fit point has a decay probability which is still utterly insignificant.
4.3 Toy-based results
Pseudo datasets have been generated for a total of seven different minima based on six different observable sets. For the medium, small and combined observable sets, roughly 1000 sets of pseudo measurements have been taken into account, as well as for the observable set without the Higgs rates. For the medium observable set we also study, in addition to the best fit point, the p value of the local minimum in the focus point region. Due to the relaxed requirements on the statistical uncertainty of a p value of O(0.5), as compared to O(0.05), we use only 125 pseudo datasets for the large observable set. Finally, to study the importance of (g-2), a total of 500 pseudo datasets have been generated based on the best fit point for the medium observable set without (g-2).

A summary of all p values with their statistical uncertainties and a comparison to the naive p value according to the χ²-distribution for Gaussian distributed variables is shown in Table 8.

Table 8 Summary of p values

Observable set       | χ²/ndf  | Naive p value (%) | Toy p value (%)
Small                | 27.1/16 | 4.0               | 1.9 ± 0.4
Medium               | 30.4/22 | 10.8              | 4.9 ± 0.7
Combined             | 17.5/13 | 17.7              | 8.3 ± 0.8
Medium (focus point) | 30.8/22 | 10.0              | 7.8 ± 0.8
Medium without (g-2) | 18.1/21 | 64.1              | 51 ± 3
Without Higgs rates  | 15.5/9  | 7.8               | 1.3 ± 0.4
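The following sketch illustrates how such p values can be obtained in principle: the naive value from the χ² survival function, and the toy value as the fraction of pseudo experiments whose minimal χ² exceeds the observed one, with a binomial statistical uncertainty. This is a minimal illustration under our own assumptions, not the actual Fittino implementation:

```python
import numpy as np
from scipy.stats import chi2

def naive_p_value(chi2_min, ndf):
    """p value assuming chi2_min follows a chi^2 distribution with ndf dof."""
    return chi2.sf(chi2_min, ndf)

def toy_p_value(chi2_min, toy_chi2_values):
    """Fraction of pseudo experiments with a minimal chi^2 above the
    observed one, with its binomial statistical uncertainty."""
    toys = np.asarray(toy_chi2_values)
    n = len(toys)
    p = np.count_nonzero(toys >= chi2_min) / n
    return p, np.sqrt(p * (1.0 - p) / n)

# Medium observable set: chi^2/ndf = 30.4/22 gives the naive p value
print(naive_p_value(30.4, 22))  # ~0.11, cf. 10.8 % in Table 8
```

The binomial uncertainty is why O(1000) toys suffice for p values of a few per cent, while O(100) are enough when the p value is of O(0.5).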
Figure 11a shows the χ² distribution for the best fit point of the medium observable set, from which we derive a p value of (4.9 ± 0.7)%. As a comparison we also show the χ² distribution for the pseudo fits using the combined observable set in Fig. 11b. Both distributions are significantly shifted towards smaller χ² values compared to the corresponding χ² distributions for Gaussian distributed variables. Several observables are responsible for the large deviation between the two distributions, as shown in Fig. 12a, where the individual contributions of all observables to the minimum χ² of all pseudo best fit points are plotted.
First, HiggsBounds does not contribute significantly to the χ² at any of the pseudo best-fit points, which is also the case for the original fit. The reason for this is that, for the majority of tested points, the χ² contribution from HiggsBounds reflects the amount of violation of the LEP limit on the lightest Higgs boson mass by the model. Since the measurements of the Higgs mass by ATLAS and CMS lie significantly above this limit, it is extremely unlikely that in one of the pseudo datasets the Higgs mass is drawn such that the best fit point would receive a χ² penalty due to the LEP limit. This effectively eliminates one degree of freedom from the fit. In addition, the predicted masses of A, H and H± lie in the decoupled regime of the allowed cMSSM parameter space. Thus there are no contributions from heavy Higgs or charged Higgs searches as implemented in HiggsBounds.
The same effect is observed, slightly less pronounced, for the LHC and LUX limits, where the best fit points are much closer to the respective limits than in the case of HiggsBounds.
Fig. 11 Distribution of minimal χ² values from toy fits using two different sets of Higgs observables: a the medium observable set and b the combined observable set. A χ² distribution for Gaussian distributed variables is shown for comparison
Fig. 12 Individual χ² contributions of all observables/observable sets at the best fit points of the toy fits using the medium set of Higgs observables, with observables smeared around a the global and b the local minimum of the observed χ² contour. The predicted measurements at the best fit points of the individual pseudo data fits are used to derive the local CL intervals shown in the plots. These are compared with the individual χ² contribution of each observable at the global or local minimum. Note the different scale shown at the top, which is used for HiggsSignals, which contains 14 observables. Also note that mh contains contributions from 4 measurements for this observable set
Finally, we observe that for each pseudo dataset the cMSSM can very well describe the pseudo measurement of the dark matter relic density, which further reduces the effective number of degrees of freedom.
Figure 12a also shows that the level of disagreement between measurement and prediction for (g-2) is smaller in each single pseudo dataset than in the original fit with the real dataset. The 1-dimensional distribution of the pseudo best fit values of (g-2) is shown in Fig. 13a. The figure shows that under the assumption of our best fit point, not a single pseudo dataset would yield a prediction of (g-2) that is consistent with the actual measurement. As a comparison, Fig. 13b shows the 1-dimensional distribution for the dark matter relic density, where the actual measurement can well be accommodated in any of the pseudo best fit scenarios. To further study the impact of (g-2) on the p value, we repeat the toy fits without this observable and obtain a p value of (51 ± 3)%. This shows that the relatively low p value for our baseline fit is mainly due to the incompatibility of the (g-2) measurement with large sparticle masses, which are, however, required by the LHC results.
Interestingly, under the assumption that the minimum in the focus point region is the true description of nature, we get a slightly better p value (7.8 %) than we get with the actual best fit point. Figure 12b shows the individual contributions to the pseudo best fit χ² at the pseudo best fit points for the toy fits performed around the local minimum in the focus point region. There are two variables with higher average contributions compared to the global minimum: mtop and the LHC SUSY search. In particular for the LHC SUSY search, the LHC contribution to the total χ² is, on average, significantly higher than for the pseudo best fit points for the global minimum. The number of expected signal events for the minimum in the focus point region is 0, while it is >0 for the global minimum. Pseudo best fit points with smaller values of the mass parameters, in particular pseudo best fit points in the τ̃-coannihilation region, tend to predict an expected number of signal events larger than zero. Since for the pseudo measurements based on the minimum in the focus point region an expectation of 0 is assumed, this naturally leads to a larger χ² contribution from the ATLAS 0-lepton analysis. The effect on the distribution of the total χ² is shown in Fig. 14. Another reason might be that the focus point region is sampled more coarsely than the region around the global minimum. This increases the probability that the fit of the pseudo dataset misses the actual best fit point, due to our procedure of using only the points in the original MCMC. This effect should, however, only play a minor role, since the parameter space is still finely scanned and only a negligible fraction of scan points are chosen numerous times as best fit points in the pseudo data fits.
Fig. 13 Distribution of the predictions of the best fit points of the pseudo data fits for two different observables used in the fit, a a_μ^exp - a_μ^SM and b Ω_CDM h², compared with the respective measurements
To further verify that this effect is not only caused by the coarser sampling in the focus point region, we performed another set of 500 pseudo fits based on the global minimum, reducing the point density in the τ̃-coannihilation region such that it corresponds to the point density around the local minimum in the focus point region. We find that the resulting χ² distribution is slightly shifted with respect to the χ² distribution we get from the full MCMC. The shift is, however, too small to explain the difference between the p values we find for the global minimum and the local minimum in the focus point region.
As an additional test, we investigate a simple toy model with only Gaussian observables and a single one-sided limit corresponding to the LHC SUSY search we use in our fit of the cMSSM. Also in this very simple model we find significantly different χ² distributions for fits based on points in a region with/without a significant signal expectation for the counting experiment, as the sketch below illustrates. We thus conclude that the true p value for the local minimum in the focus point region is in fact higher than the true p value for the global minimum of our fit.
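A minimal sketch of such a toy model (our own illustration, not the exact setup used here): the model can only predict a non-negative signal expectation, so the one-sided boundary distorts the χ² distribution for points whose true expectation is zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_chi2(s_true, n_gauss=10, sigma_s=1.0, n_toys=50_000):
    """Minimal chi^2 of toy fits with n_gauss Gaussian observables plus one
    counting-type observable on which the model can only predict s >= 0."""
    x = rng.normal(0.0, 1.0, size=(n_toys, n_gauss))   # pulls of the Gaussian observables
    s_meas = rng.normal(s_true, sigma_s, size=n_toys)  # smeared signal observable
    s_fit = np.clip(s_meas, 0.0, None)                 # best fit respects the boundary s >= 0
    # downward fluctuations below s = 0 cannot be absorbed by the fit
    return np.sum(x**2, axis=1) + ((s_meas - s_fit) / sigma_s) ** 2

# region without signal expectation (focus-point-like) vs. with (coannihilation-like)
print(toy_chi2(0.0).mean(), toy_chi2(2.0).mean())  # the s_true = 0 ensemble is shifted up
```

In the s_true = 0 ensemble, roughly half of the toys pick up an extra one-sided contribution, so the χ² distribution is no longer a pure χ² distribution for the Gaussian degrees of freedom.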
In order to ensure that there are no more points with a higher χ² and a higher p value than the local minimum in the focus point region, we generate 200 pseudo datasets for two more points in the focus point region. The two points are the points with the highest/lowest M0 in the local 1σ region around the focus point minimum. We find that the χ² distributions we get from these pseudo datasets are in good agreement with each other and also with the χ² distribution derived from the pseudo experiments around the focus point minimum, and hence conclude that the local minimum in the focus point region is the point with the highest p value in the cMSSM.
Fig. 14 Comparison of the χ² distributions obtained from toy fits using the global minimum and the local minimum in the focus point region of the medium observable set
To study the impact of the Higgs rates on the p value, and in order to compare to the observable sets used by other fit collaborations, which exclude the Higgs rate measurements from the fit on the basis that in the decoupling regime they do not play a vital role, we perform toy fits for the observable set without Higgs rates and derive a p value of (1.3 ± 0.4)%. In the decoupling limit, the cMSSM predictions for the Higgs rates are very close to the SM, so that the LHC is not able to distinguish between the two models based on Higgs rate measurements (see Fig. 8). Because of the overall good agreement between the Higgs rate measurements and the SM prediction, the inclusion of the Higgs rates in the fit improves the fit quality, despite some tension between the ATLAS and CMS measurements.
As discussed in Sect. 2, it is important to understand the impact of the parametrisation of the measurements on the p value. To do so, we compare our baseline fit with two more extreme choices. First, we use the small observable set, which combines the h → γγ, h → ZZ and h → WW measurements but keeps the ATLAS and CMS measurements separate. We use this choice because an official ATLAS combination is available; the corresponding CMS combination is produced independently by us. Using this observable set we get a p value of (1.9 ± 0.4)%. Here the cMSSM receives a χ² penalty from the trend of the ATLAS signal strength measurements to values μ > 1 and of the CMS measurements towards μ < 1 in the three h → VV channels.
As a cross-check, we employ the large observable set, which contains all available sub-channel measurements separately. Using this observable set, we get a p value from the pseudo data fits of (41.6 ± 4.4)%. As observed in Sect. 4.1, the large observable set yields the same preferred parameter region as the small, medium and combined observable sets. Yet its p value from the pseudo data fits differs significantly.
To explain this interesting result we consider a simplified example: for i = 1, ..., N, let x_i be Gaussian measurements with uncertainties σ_i and corresponding model predictions a_i(P) for a given parameter point P. We assume that the measurements from x_n to x_N are uncombined measurements of the same observable; then a_i = a_n for all i ≥ n. There are now two obvious possibilities to compare measurements and predictions.

We can compare each of the individual measurements with the corresponding model prediction by calculating

\chi^2_\mathrm{split} = \sum_{i=1}^{N} \left( \frac{x_i - a_i}{\sigma_i} \right)^2 . \quad (8)

This would correspond to an approach where the model is confronted with all available observables, irrespective of the question whether they measure independent quantities in the model or not. One example of such a situation would be the large observable set of Higgs signal strength measurements, where several observables measure different detector effects, but the same physics.

Alternatively, we can first combine the measurements x_i, i ≥ n, into a measurement \bar{x} which minimises the function

f(x) = \sum_{i=n}^{N} \left( \frac{x_i - x}{\sigma_i} \right)^2 \quad (9)

and has an uncertainty \bar{\sigma}, and then use this combination to calculate

\chi^2_\mathrm{combined} = \sum_{i=1}^{n-1} \left( \frac{x_i - a_i}{\sigma_i} \right)^2 + \left( \frac{\bar{x} - a_n}{\bar{\sigma}} \right)^2 . \quad (10)

This situation corresponds to first calculating one physically meaningful quantity (e.g. a common signal strength for h → γγ in all VBF categories and all gg → h categories) and only then confronting the model with the combined measurement.

Plugging in the explicit expressions for (\bar{x}, \bar{\sigma}), using 1/\bar{\sigma}^2 = \sum_{i=n}^{N} 1/\sigma_i^2 and defining \chi^2_\mathrm{data} = f(\bar{x}), one finds

\chi^2_\mathrm{combined} = \chi^2_\mathrm{split} - \chi^2_\mathrm{data} . \quad (11)

Hence doing the combination of the measurements before the fit is equivalent to using a χ² difference, which in turn is equivalent to the usage of a log-likelihood ratio. The numerator of this ratio is given by the likelihood L_model of the model under study, e.g. the cMSSM. The denominator is given by the maximum of a phenomenological likelihood L_pheno which depends directly on the model predictions a_i. These possess an expression as functions a_i(P) of the model parameters P of L_model. Note that in L_pheno, however, the a_i are treated as n independent parameters. We now identify \chi^2_\mathrm{split} \equiv -2 \ln L_\mathrm{model} and \chi^2_\mathrm{data} \equiv -2 \ln L_\mathrm{pheno}. When inserting a_i(P), one is guaranteed to find

L_\mathrm{pheno}(a_1(P), \ldots, a_n(P)) = L_\mathrm{model}(P) . \quad (12)

Using these symbols, \chi^2_\mathrm{combined} can be written as

\chi^2_\mathrm{combined} = -2 \ln \frac{L_\mathrm{model}(P)}{L_\mathrm{pheno}(\hat{a}_1, \ldots, \hat{a}_n)} , \quad (13)

where \hat{a}_1, \ldots, \hat{a}_n maximise L_pheno. Note that in this formulation the model predictions a_i do not necessarily need to correspond directly to measurements used in the fit, as is the case in our example. For instance, the model predictions a_i might contain cross sections and branching ratios which are constrained by rate measurements.
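The identity in Eq. (11) can be verified numerically; the following sketch uses made-up numbers and treats all uncertainties as Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# N measurements; the last ones (i >= n) all measure the same observable
N, n = 10, 4
sigma = rng.uniform(0.5, 2.0, N)
x = rng.normal(1.0, sigma)
a = np.full(N, 0.8)                    # model predictions, equal for i >= n
a[: n - 1] = rng.normal(0.8, 0.1, n - 1)

chi2_split = np.sum(((x - a) / sigma) ** 2)                    # Eq. (8)

w = 1.0 / sigma[n - 1:] ** 2                                   # inverse-variance weights
xbar = np.sum(w * x[n - 1:]) / np.sum(w)                       # combined measurement
sbar = np.sqrt(1.0 / np.sum(w))                                # its uncertainty
chi2_data = np.sum(((x[n - 1:] - xbar) / sigma[n - 1:]) ** 2)  # f(xbar), Eq. (9)
chi2_combined = (np.sum(((x[: n - 1] - a[: n - 1]) / sigma[: n - 1]) ** 2)
                 + ((xbar - a[n - 1]) / sbar) ** 2)            # Eq. (10)

assert np.isclose(chi2_combined, chi2_split - chi2_data)       # Eq. (11)
```

The identity is exact because the cross term vanishes: \bar{x} is the inverse-variance-weighted mean, so the weighted residuals around it sum to zero.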
Fig. 15 Numerical example showing the distribution of χ²/ndf using combined and split measurements. Using split measurements, smaller values of χ²/ndf are obtained. Because in this example all measurements are Gaussian, this is equivalent to larger p values. We call the effect of obtaining larger p values when using split measurements the dilution of the p value
Using ndf_split = N, ndf_combined = n and ndf_data = N - n, Eq. (11) implies

\frac{\chi^2_\mathrm{split}}{\mathrm{ndf}_\mathrm{split}} = \frac{\chi^2_\mathrm{combined}}{\mathrm{ndf}_\mathrm{combined} + N - n} + \frac{\chi^2_\mathrm{data}}{\mathrm{ndf}_\mathrm{data} + n} . \quad (14)

The more uncombined measurements are used, the larger N - n gets and the less the p value depends on the first term on the right-hand side, which measures the agreement between data and model. At the same time, the p value depends more on the second term on the right-hand side, which measures the agreement within the data. In particular, for n fixed and N → ∞:

\frac{\chi^2_\mathrm{split}}{\mathrm{ndf}_\mathrm{split}} \to \frac{\chi^2_\mathrm{data}}{\mathrm{ndf}_\mathrm{data}} . \quad (15)

Since in the case of purely statistical fluctuations of the split measurements around the combined value the agreement within the data is unlikely to be poor, the expectation is

\frac{\chi^2_\mathrm{data}}{\mathrm{ndf}_\mathrm{data}} \approx 1 \quad (16)

even if there is a deviation between the model predictions and the physical combined observables. So most of the time the p value will get larger when uncombined measurements are used, hiding deviations between model and data. As a numerical example, Fig. 15 shows toy distributions of \chi^2_\mathrm{combined}/\mathrm{ndf}_\mathrm{combined} and \chi^2_\mathrm{split}/\mathrm{ndf}_\mathrm{split} for one observable (n = 1), ten measurements (N = 10) and a 3σ deviation between the true value and the model prediction. We call this effect the dilution of the p value. It explains the large p value for the large observable set by the overall good agreement between the uncombined measurements.
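A minimal simulation of this dilution, assuming the 3σ offset is expressed in units of the combined uncertainty (our own illustration, not the exact toy used for Fig. 15):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(42)
n, N, n_toys = 1, 10, 100_000
sigma = 1.0
sbar = sigma / np.sqrt(N)        # uncertainty of the combined measurement
a = 0.0                          # model prediction
truth = a + 3.0 * sbar           # true value 3 sigma(combined) away from the model

x = rng.normal(truth, sigma, size=(n_toys, N))  # N split measurements per toy
xbar = x.mean(axis=1)                            # combination (equal uncertainties)

chi2_split = np.sum(((x - a) / sigma) ** 2, axis=1)  # ndf_split = N
chi2_combined = ((xbar - a) / sbar) ** 2              # ndf_combined = n = 1

p_split = chi2.sf(chi2_split, N)      # naive p values
p_combined = chi2.sf(chi2_combined, n)
print(np.median(p_split), np.median(p_combined))
# the split p values come out much larger: the 3 sigma deviation is diluted
```

With these numbers the typical split p value is of O(5 %), while the combined p value is of O(0.2 %), even though both describe the same underlying deviation.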
On the other hand, if there is some tension within the data, which might in this hypothetical example be caused purely by statistical or experimental effects, the innocent model is punished for these internal inconsistencies of the data. This is observed here for the medium and small observable sets. Hence, and in order to incorporate our assumption that ATLAS and CMS measured the same Higgs boson, we produce our own combination of corresponding ATLAS and CMS Higgs mass and rate measurements. We also assume that custodial symmetry is preserved, but we do not assume that h → γγ is connected to h → WW and h → ZZ as in the official ATLAS combination used in the small observable set. We call the resulting observable set the combined observable set. Note that for simplicity we also combine channels for which the cMSSM model predictions might differ due to different efficiencies for the different Higgs production channels. This could be improved in a more rigorous treatment. For instance, the χ² could be defined by Eq. (13) using a likelihood L_pheno which contains both the different Higgs production cross sections and the different Higgs branching ratios as free parameters a_i.

Using the combined observable set we get a p value of (8.3 ± 0.8)%, which is significantly smaller than the diluted p value of (41.6 ± 4.4)% for the large observable set. The good agreement within the data now shows up in a small χ²/ndf of 68.1/65 for the combination but no longer affects the p value of the model fit. On the other hand, the p value for the combined observable set is larger than the one for the medium observable set, because the tension between the ATLAS and CMS measurements is not included. This tension can be quantified by producing an equivalent ATLAS and CMS combination not from the large observable set but from the medium observable set, giving a relatively bad χ²/ndf of 10.9/6.
Finally, as discussed briefly in Sect. 2, we employ the medium observable set again to compare the preferred parts of the parameter space according to the profile likelihood technique (Fig. 2b) with the parameter ranges that are preferred according to our pseudo fits. In Fig. 16a, b we show the 1-dimensional distributions of the pseudo best fit values for M0 and M1/2. The 68 and 95 % CL regions according to the total pseudo best fit χ² are shown. As expected from the non-Gaussian behaviour of our fit, some differences between the results obtained by the profile likelihood technique and the pseudo fit results can be observed. For the pseudo fits, in both parameters M0 and M1/2, the 95 % CL range is slightly smaller compared to the allowed range according to the profile likelihood. Considering the fits of the pseudo datasets, M0 is limited to values <8.5 TeV and M1/2 is limited to values <2.7 TeV, while the profile likelihood technique yields upper limits of roughly 10 and 3.5 TeV, respectively. The differences are relatively small compared to the size of the preferred parameter space, and may well be an effect of the limited number of pseudo datasets that have been considered; the use of the profile likelihood technique for the derivation of the preferred parameter space can therefore be considered to give reliable results. However, as discussed above, in order to get an accurate estimate of the 95 % CL regions, the p value would have to be evaluated at every single point in the parameter space.
Fig. 16 Distribution of the pseudo best fit values for a M0 and b M1/2
5 Conclusions
In this paper we present what we consider the final analysis of the cMSSM in the light of LHC Run 1 with the program Fittino.
In previous iterations of such a global analysis of the cMSSM, or of other more general SUSY models, the focus was set on finding regions in parameter space that globally agree with a certain set of measurements, using either frequentist or Bayesian statistics. However, these analyses show that a constrained model such as the cMSSM has become rather trivial: because of the decoupling behaviour at sufficiently high SUSY mass scales, the phenomenology resembles that of the SM with dark matter. This does, however, not disqualify the cMSSM as a valid model of Nature. In addition, there are undeniable fine-tuning challenges, but these also do not statistically disqualify the model. Therefore, before abandoning the cMSSM, we apply one crucial test which has not been performed before: we calculate the p value of the cMSSM through toy fits.
A likelihood ratio test of the cMSSM against the SM would be meaningless, since the SM cannot accommodate dark matter. Thus we apply a goodness-of-fit test of the cMSSM. As in every likelihood test (also in likelihood ratio tests), the sensitivity of the test towards the validity of the model depends on the number of degrees of freedom in the observable set, while the sensitivity towards the preferred parameter range does not. Thus, when calculating the p value of the cMSSM, it is important that the observable set is chosen carefully. First, only such observables should be considered for which the cMSSM predictions are, in principle, sensitive to the choice of the model parameters, independent of the actually measured values of the observables. This excludes e.g. many LEP/SLD precision observables, for which the cMSSM by construction always predicts the SM value for any parameter value. Second, it is important that observables are combined whenever the corresponding cMSSM predictions are not independent. Otherwise the resulting p value would be too large by construction. It should be noted that the allowed parameter space for all observable sets studied here is virtually identical; it is only the impact on the p value which varies.
In order to study this dependence, several observable sets are studied. The main challenge arises from the Higgs rate measurements. Since the cMSSM Higgs rate predictions are, in principle, very sensitive to the choice of model parameters, the corresponding measurements have to be included in a global fit. Using the preferred observable sets combined and medium (as described in Sect. 4.3 and Table 8), we calculate a p value of the cMSSM of 8.3 and 4.9 %, respectively. In addition, the cMSSM is excluded at the 98.7 % CL if Higgs rate measurements are omitted. The main reason for these low p values is the tension between the direct sparticle search limits from the LHC and the measured value of the muon anomalous magnetic moment (g-2). If e.g. (g-2) is removed from the fit, the p value of the cMSSM increases to about 50 %. However, there is no justification for arbitrarily removing one variable a posteriori. On the other hand, the observable sets could be artificially chosen to be too detailed, such that there are many measurements for which the model predictions cannot be varied individually. This is the case for the large observable set of Higgs rate observables in Table 8, the inclusion of which thus does not represent a methodologically stringent test of the p value of the cMSSM.
Thus, the main result of this analysis is that the cMSSM is excluded at least at the 90 % CL for reasonable choices of the observable set.
The best-fit point is in the τ̃-coannihilation region at M0 ≈ 500 GeV, with a secondary minimum in the focus-point region at M1/2, M0 ≳ 2 TeV. A comparison of the p values of the coannihilation and focus-point regions can serve as an estimate of a likelihood-ratio test between a cMSSM at M0 ≈ 500 GeV, which can be tested at the LHC, and a SM
with dark matter, with squark and gluino masses beyond about 5 TeV. Since the focus point manifests a more linear relation between observables and input parameters in the toy fits, and thus a more χ²-distribution-like behaviour, it reaches a slightly higher p value than the τ̃-coannihilation region.
This shows that even the best-fit region offers no statistically relevant advantage over the SM with dark matter. Thus, we can conclude that the cMSSM is not only excluded at the 90-95 % CL, but that it is also statistically mostly indistinguishable from a hypothetical SM with dark matter.
In addition to this main result, we present the first complete scan for the possible existence of charge- or colour-breaking minima within a global fit of the cMSSM. In addition, we calculate the lifetimes of the best fit points. We find that the focus-point best-fit region is stable, while the τ̃-coannihilation best-fit region is either stable or metastable, with a lifetime significantly longer than the age of the Universe.
It is important to note that the exclusion of the cMSSM at the 90 % CL or more does in general not apply to less restricted SUSY models. The combination of measurements requiring low slepton and gaugino mass scales, such as (g-2), and the high mass scales preferred by the SM-like Higgs boson and the non-observation of coloured sparticles at the LHC puts the cMSSM under extreme tension. In the cMSSM these mass scales are connected. A more general SUSY model where these scales are decoupled, and preferably also with a complete decoupling of the third-generation sleptons and squarks from the first and second generation, would easily circumvent this tension.
Therefore, the future of SUSY searches at the LHC should emphasise the coverage of any phenomenological scenario which allows sleptons, and preferably also third-generation squarks, to remain light, while the other sparticles can become heavy. Many loopholes with light SUSY states still exist, as analyses such as [164] show, and there are potentially promising experimental anomalies which could be explained by more general SUSY models [165-167].
On the other hand, the analysis presented here shows that SUSY does not directly point towards a non-SM-like light Higgs boson. The uncertainties on the predictions of ratios of partial decay widths and other observables at the LHC are significantly smaller than the direct uncertainty of the LHC Higgs rate measurements. This is because of the high SUSY mass scale, also for third-generation squarks, imposed by the combination of the cMSSM and the direct SUSY particle search limits. These do not allow the model to vary the light Higgs boson properties sufficiently to make use of the experimental uncertainty in the Higgs rate measurements. This might change for a more general SUSY model, but there is no direct hint in this direction. The predicted level of deviation of the light Higgs boson properties from the SM prediction, at the O(1 %) level, is not accessible even at a high-luminosity LHC and requires an e⁺e⁻ collider.
In summary, we find that the undeniable freedom in choosing the observable set before looking at the experimental values of the results introduces a remaining softness into the exclusion of the cMSSM. Therefore, while we might have preferred to find SUSY experimentally, we find that at least we can almost complete the second most revered task of a physics measurement: with the combination of astrophysical, precision collider and energy frontier measurements in a global frequentist analysis we (softly) kill the cMSSM.
Acknowledgments We are grateful to Roberta Flack, Norman Gimbel and Charles Fox for indispensable inspiration. We thank Sven Heinemeyer and Thomas Hahn for very helpful discussions during the preparations of the Higgs boson decay rate calculations. This work was supported by the Deutsche Forschungsgemeinschaft through the research Grant HA 7178/1-1, by the US Department of Energy Grant Number DE-FG02-04ER41286, by the BMBF Theorieverbund and the BMBF-FSP 101, and in part by the Helmholtz Alliance "Physics at the Terascale". T.S. is supported in part by a Feodor-Lynen research fellowship sponsored by the Alexander von Humboldt Foundation. We also thank the Helmholtz Alliance and DESY for providing computing infrastructure at the National Analysis Facility.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Funded by SCOAP3.
References
1. J. Wess, B. Zumino, Nucl. Phys. B 70, 39 (1974)2. Y. Golfand, E. Likhtman, JETP Lett. 13, 323 (1971)3. R. Haag, J.T. Lopuszanski, M. Sohnius, Nucl. Phys. B 88, 257 (1975)
4. P. Langacker, M.x. Luo. Phys. Rev. D 44, 817 (1991)5. U. Amaldi, W. de Boer, H. Furstenau, Phys. Lett. B 260, 447 (1991)
6. J.R. Ellis, S. Kelley, D.V. Nanopoulos, Phys. Lett. B 260, 131 (1991)
7. H.P. Nilles, M. Srednicki, D. Wyler, Phys. Lett. B 120, 346 (1983)8. L.E. Ibanez, G.G. Ross, Phys. Lett. B 110, 215 (1982)9. E. Gildener, Phys. Rev. D 14, 1667 (1976)10. M. Veltman, Acta Phys. Polon. B 12, 437 (1981)11. R. Barbieri, G. Giudice, Nucl. Phys. B 306, 63 (1988)12. P. Fayet, Phys. Lett. B 69, 489 (1977)13. G.R. Farrar, P. Fayet, Phys. Lett. B 76, 575 (1978)14. H. Goldberg, Phys. Rev. Lett. 50, 1419 (1983)15. J.R. Ellis, J. Hagelin, D.V. Nanopoulos, K.A. Olive, M. Srednicki, Nucl. Phys. B 238, 453 (1984)
16. H.P. Nilles, Phys. Rept. 110, 1 (1984)17. S.P. Martin, A Supersymmetry primer. [Adv. Ser. Direct. High Energy Phys. 18,1(1998)]
18. P. Fayet, Nucl. Phys. B 90, 104 (1975)19. P. Fayet, J. Iliopoulos, Phys. Lett. B 51, 461 (1974)20. P. Fayet, Phys. Lett. B 64, 159 (1976)21. S. Ferrara, L. Girardello, F. Palumbo, Phys. Rev. D 20, 403 (1979)22. D.Z. Freedman, P. van Nieuwenhuizen, S. Ferrara, Phys. Rev. D 13, 3214 (1976)
23. E. Cremmer, B. Julia, J. Scherk, S. Ferrara, L. Girardello, P. van Nieuwenhuizen, Nucl. Phys. B 147, 105 (1979)
24. E. Cremmer, S. Ferrara, L. Girardello, A. Van Proeyen, Nucl.
Phys. B 212, 413 (1983)
25. E. Witten, Nucl. Phys. B 188, 513 (1981)26. M. Dine, A.E. Nelson, Phys. Rev. D 48, 1277 (1993)27. G.F. Giudice, R. Rattazzi, Phys. Rept. 322, 419 (1999)28. M. Drees, M.M. Nojiri, Phys. Rev. D 47, 376 (1993)29. G.L. Kane, C.F. Kolda, L. Roszkowski, J.D. Wells, Phys. Rev. D 49, 6173 (1994)
30. L. Girardello, M.T. Grisaru, Nucl. Phys. B 194, 65 (1982)31. S. Dimopoulos, H. Georgi, Nucl. Phys. B 193, 150 (1981)32. H.P. Nilles, Phys. Lett. B 115, 193 (1982)33. R. Barbieri, S. Ferrara, C.A. Savoy, Phys. Lett. B 119, 343 (1982)34. A.H. Chamseddine, R.L. Arnowitt, P. Nath, Phys. Rev. Lett. 49, 970 (1982)
35. S.K. Soni, H.A. Weldon, Phys. Lett. B 126, 215 (1983)36. L.J. Hall, J.D. Lykken, S. Weinberg, Phys. Rev. D 27, 2359 (1983)37. S.S. AbdusSalam et al., Eur. Phys. J. C 71, 1835 (2011)38. P. Bechtle, K. Desch, M. Uhlenbrock, P. Wienemann, Eur. Phys.J. C 66, 215 (2010)39. P. Bechtle, B. Sarrazin, K. Desch, H.K. Dreiner, P. Wienemann et al., Phys. Rev. D 84, 011701 (2011)
40. P. Bechtle, T. Bringmann, K. Desch, H. Dreiner, M. Hamer et al., JHEP 1206, 098 (2012)
41. P. Bechtle, K. Desch, H.K. Dreiner, M. Hamer, M. Krmer, et al., PoS EPS-HEP2013, 313 (2013)
42. O. Buchmueller, R. Cavanaugh, D. Colling, A. De Roeck, M.
Dolan et al., Eur. Phys. J. C 71, 1583 (2011)
43. O. Buchmueller, R. Cavanaugh, D. Colling, A. de Roeck, M.
Dolan et al., Eur. Phys. J. C 71, 1634 (2011)
44. O. Buchmueller, R. Cavanaugh, D. Colling, A. De Roeck, M.
Dolan et al., Eur. Phys. J. C 71, 1722 (2011)
45. O. Buchmueller, R. Cavanaugh, A. De Roeck, M. Dolan, J. Ellis et al., Eur. Phys. J. C 72, 1878 (2012)
46. O. Buchmueller, R. Cavanaugh, A. De Roeck, M. Dolan, J. Ellis et al., Eur. Phys. J. C 72, 2020 (2012)
47. O. Buchmueller, R. Cavanaugh, M. Citron, A. De Roeck, M.
Dolan et al., Eur. Phys. J. C 72, 2243 (2012)
48. O. Buchmueller, M. Dolan, J. Ellis, T. Hahn, S. Heinemeyer et al., Eur. Phys. J. C 74, 2809 (2014)
49. O. Buchmueller, R. Cavanaugh, A. De Roeck, M. Dolan, J. Ellis et al., Eur. Phys. J. C 74, 2922 (2014)
50. O. Buchmueller et al., Eur. Phys. J. C 74(12), 3212 (2014)51. K.J. de Vries et al., Eur. Phys. J. C 75(9), 422 (2015)52. E.A. Bagnaschi et al., Eur. Phys. J. C 75, 500 (2015)53. J. Ellis, Eur. Phys. J. C 74, 2732 (2014)54. G. Bertone, D.G. Cerdeno, M. Fornasa, R. Ruiz de Austri, C.
Strege et al., JCAP 1201, 015 (2012)
55. C. Strege, G. Bertone, D. Cerdeno, M. Fornasa, R. Ruiz de Austri et al., JCAP 1203, 030 (2012)
56. C. Strege, G. Bertone, F. Feroz, M. Fornasa, R. Ruiz de Austri et al., JCAP 1304, 013 (2013)
57. C. Balazs, A. Buckley, D. Carter, B. Farmer, M. White, Eur. Phys.J. C 73, 2563 (2013)58. C. Beskidt, W. de Boer, D. Kazakov, F. Ratnikov, JHEP 1205, 094 (2012)
59. C. Beskidt, W. de Boer, D.I. Kazakov, Phys. Lett. B 738, 505 (2014)
60. R. Lafaye, T. Plehn, M. Rauch, D. Zerwas, Eur. Phys. J. C 54, 617 (2008)
61. C. Adam, J.L. Kneur, R. Lafaye, T. Plehn, M. Rauch et al., Eur.
Phys. J. C 71, 1520 (2011)
62. S. Henrot-Versill, R. Lafaye, T. Plehn, M. Rauch, D. Zerwas et al., Phys. Rev. D 89, 055017 (2014)
63. B.C. Allanach, K. Cranmer, C.G. Lester, A.M. Weber, JHEP 0708, 023 (2007)
64. B. Allanach, Phys. Rev. D 83, 095019 (2011)65. B. Allanach, T. Khoo, C. Lester, S. Williams, JHEP 1106, 035 (2011)
66. R.R. de Austri, R. Trotta, L. Roszkowski, JHEP 0605, 002 (2006)67. A. Fowlie, A. Kalinowski, M. Kazana, L. Roszkowski, Y.S. Tsai, Phys. Rev. D 85, 075012 (2012)
68. L. Roszkowski, E.M. Sessolo, Y.L.S. Tsai, Phys. Rev. D 86, 095005 (2012)
69. A. Fowlie, M. Kazana, K. Kowalska, S. Munir, L. Roszkowski et al., Phys. Rev. D 86, 075010 (2012)
70. K. Kowalska et al., Phys. Rev. D 87(11), 115010 (2013)71. K. Kowalska, L. Roszkowski, E.M. Sessolo, JHEP 1306, 078 (2013)
72. K. Kowalska, L. Roszkowski, E.M. Sessolo, S. Trojanowski, JHEP 1404, 166 (2014)
73. L. Roszkowski, E.M. Sessolo, A.J. Williams, JHEP 1408, 067 (2014)
74. K. Kowalska, L. Roszkowski, E.M. Sessolo, S. Trojanowski, A.J. Williams, Looking for supersymmetry: 1 TeV WIMP and the power of complementarity in LHC and dark matter searches. (2015). arXiv:1507.07446. Accessed 20 Feb 2016
75. S. Cassel, D.M. Ghilencea, S. Kraml, A. Lessa, G.G. Ross, JHEP 05, 120 (2011)
76. U. Ellwanger, G. Espitalier-Noel, C. Hugonie, JHEP 09, 105 (2011)
77. A. Kaminska, G.G. Ross, K. Schmidt-Hoberg, JHEP 11, 209 (2013)
78. D.M. Ghilencea, Phys. Rev. D 89(9), 095007 (2014)79. H. Baer, V. Barger, D. Mickelson, M. Padeffke-Kirkland, Phys. Rev. D 89(11), 115019 (2014)
80. P. Bechtle et al., How alive is constrained SUSY really? (2014). arXiv:1410.6035. Accessed 20 Feb 2016
81. A. Fowlie, Bayesian approach to investigating supersymmetric models. (2013). http://etheses.whiterose.ac.uk/id/eprint/4742. Accessed 20 Feb 2016
82. A.A. Markov, Science in Context 19, 591 (2006). http://journals.cambridge.org/article_S0269889706001074
83. N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller,E. Teller, J. Chem. Phys. 21, 1087 (1953)84. W.K. Hastings, Biometrika 57, 97 (1970)85. P. Bechtle, S. Heinemeyer, O. Stl, T. Stefaniak, G. Weiglein, Eur. Phys. J. C 74(2), 2711 (2014)
86. G. Aad et al., JHEP 1409, 176 (2014)87. G. Bennett et al., Phys. Rev. D 73, 072003 (2006)88. M. Davier, A. Hoecker, B. Malaescu, Z. Zhang, Eur. Phys. J. C 71, 1515 (2011)
89. S. Schael et al., Phys. Rept. 427, 257 (2006)
90. ATLAS, CDF, CMS and D0 Collaborations, First combination of Tevatron and LHC measurements of the top-quark mass. (2014). arXiv:1403.4427 [hep-ex]
91. Tevatron Electroweak Working Group, 2012 update of the combination of CDF and D0 results for the mass of the W boson. (2012). arXiv:1204.0042 [hep-ex]
92. J. Beringer et al., Phys. Rev. D 86, 010001 (2012)
93. CMS and LHCb Collaborations, Combination of results on the rare decays B⁰_(s) → μ⁺μ⁻ from the CMS and LHCb experiments. (2013). CMS-PAS-BPH-13-007, LHCb-CONF-2013-012
94. Y. Amhis et al., Averages of b-hadron, c-hadron, and τ-lepton properties as of early 2012. (2012). arXiv:1207.1158 [hep-ex]
95. P.A.R. Ade et al., Astron. Astrophys. 571, A16 (2014)
96. D. Akerib et al., Phys. Rev. Lett. 112, 091303 (2014)
97. ALEPH, DELPHI, L3 and OPAL Collaborations (2001). http://lepsusy.web.cern.ch/lepsusy/www/inos_moriond01/charginos_pub.html
98. P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, K.E. Williams, Comput. Phys. Commun. 181, 138 (2010)
99. P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, K.E. Williams, Comput. Phys. Commun. 182, 2605 (2011)
100. P. Bechtle, O. Brein, S. Heinemeyer, O. Stl, T. Stefaniak, et al.,
PoS CHARGED2012, 024 (2012)101. P. Bechtle, O. Brein, S. Heinemeyer, O. Stl, T. Stefaniak et al.,
Eur. Phys. J. C 74(3), 2693 (2014)102. P. Bechtle, S. Heinemeyer, O. Stl, T. Stefaniak, G. Weiglein,
JHEP 1411, 039 (2014)103. G. Aad et al., Phys. Rev. Lett. 114, 191803 (2015)104. G. Aad et al., Phys. Lett. B 726, 88 (2013)105. S. Chatrchyan et al., Phys. Rev. D 89, 092007 (2014)106. CMS Collaboration, Combination of standard model Higgs boson searches and measurements of the properties of the new boson with a mass near 125 GeV. CMS-PAS-HIG-13-005 (2013)107. ATLAS Collaboration, Search for the Standard Model Higgs boson in H->tau tau decays in proton-proton collisions with the ATLAS detector. ATLAS-CONF-2012-160, ATLAS-COMCONF-2012-196 (2012)108. ATLAS collaboration, Search for the bb decay of the Standard Model Higgs boson in associated W/ZH production with the ATLAS detector. ATLAS-CONF-2013-079, ATLAS-COMCONF-2013-080 (2013)109. S. Chatrchyan et al., JHEP 1401, 096 (2014)110. S. Chatrchyan et al., Nature Phys. 10, 557 (2014)111. CMS Collaboration, Updated measurements of the Higgs boson at 125 GeV in the two photon decay channel. CMS-PAS-HIG-13-001 (2013)112. W. Porod, Comput. Phys. Commun. 153, 275 (2003)113. W. Porod, F. Staub, Comput. Phys. Commun. 183, 2458 (2012) 114. S.P. Martin, M.T. Vaughn, Phys. Rev. D 50, 2282 (1994)115. D.M. Pierce, J.A. Bagger, K.T. Matchev, R.j. Zhang. Nucl. Phys.
B491, 3 (1997)116. S. Heinemeyer, W. Hollik, G. Weiglein, Comput. Phys. Commun.
124, 76 (2000)117. T. Hahn, S. Heinemeyer, W. Hollik, H. Rzehak, G. Weiglein, Phys.
Rev. Lett. 112(14), 141801 (2014)118. G. Degrassi, S. Heinemeyer, W. Hollik, P. Slavich, G. Weiglein,
Eur. Phys. J. C 28, 133 (2003)119. B. Allanach, A. Djouadi, J. Kneur, W. Porod, P. Slavich, JHEP
0409, 044 (2004)
120. S. Heinemeyer, W. Hollik, H. Rzehak, G. Weiglein, Eur. Phys. J.
C 39, 465 (2005)121. F. Mahmoudi, Comput. Phys. Commun. 180, 1579 (2009)122. W. Porod, F. Staub, A. Vicente, Eur. Phys. J. C 74(8), 2992 (2014) 123. G. Belanger, F. Boudjema, A. Pukhov, A. Semenov, Comput.
Phys. Commun. 149, 103 (2002)124. G. Belanger, F. Boudjema, A. Pukhov, A. Semenov, Comput.
Phys. Commun. 174, 577 (2006)125. P. Gondolo, J. Edsjo, P. Ullio, L. Bergstrom, M. Schelke et al.,
JCAP 0407, 008 (2004)
126. N. Nguyen, D. Horns, T. Bringmann, AstroFit: an interface program for exploring complementarity in dark matter research. (2012). arXiv:1202.1385 [astro-ph]
127. M. Bahr, S. Gieseke, M. Gigg, D. Grellscheid, K. Hamilton et al.,
Eur. Phys. J. C 58, 639 (2008)128. S. Ovyn, X. Rouby, V. Lemaitre, DELPHES, a framework for fast simulation of a generic collider experiment. (2009).arXiv:0903.2225 [hep-ph]
129. W. Beenakker, R. Hopker, M. Spira, PROSPINO: A Program for the production of supersymmetric particles in next-to-leading order QCD. (1996).arXiv:hep-ph/9611232
130. W. Beenakker, R. Hopker, M. Spira, P.M. Zerwas, Nucl. Phys. B
492, 51 (1997)131. W. Beenakker, M. Krmer, T. Plehn, M. Spira, P.M. Zerwas, Nucl.
Phys. B 515, 3 (1998)
132. A. Kulesza, L. Motyka, Phys. Rev. Lett. 102, 111802 (2009)
133. W. Beenakker, S. Brensing, M. Krämer, A. Kulesza, E. Laenen, I. Niessen, JHEP 12, 041 (2009)
134. W. Beenakker, S. Brensing, M. Krämer, A. Kulesza, E. Laenen, I. Niessen, JHEP 08, 098 (2010)
135. W. Beenakker, S. Brensing, M. Krämer, A. Kulesza, E. Laenen, L. Motyka, I. Niessen, Int. J. Mod. Phys. A 26, 2637 (2011)
136. M. Krämer, A. Kulesza, R. van der Leeuw, M. Mangano, S. Padhi, T. Plehn, X. Portell, (2012)
137. S. Heinemeyer et al., Handbook of LHC Higgs cross sections: 3. Higgs properties. (2013). arXiv:1307.1347 [hep-ph]
138. T. Hahn, S. Heinemeyer, private communication
139. G. Aad et al., JHEP 04, 116 (2015)
140. G. Aad et al., JHEP 10, 24 (2014)
141. E. Aprile (XENON1T Collaboration), The XENON1T dark matter search experiment. (2012). arXiv:1206.6288 [astro-ph]
142. J.M. Frere, D.R.T. Jones, S. Raby, Nucl. Phys. B 222, 11 (1983)
143. M. Claudson, L.J. Hall, I. Hinchliffe, Nucl. Phys. B 228, 501
(1983)144. J.P. Derendinger, C.A. Savoy, Nucl. Phys. B 237, 307 (1984) 145. R. Auzzi, A. Giveon, S.B. Gudnason, T. Shacham, JHEP 01, 169
(2013)146. J.A. Evans, Y. Kats, JHEP 04, 028 (2013)147. N. Chamoun, H.K. Dreiner, F. Staub, T. Stefaniak, JHEP 08, 142
(2014)148. F. Staub, P. Athron, U. Ellwanger, R. Grober, M. Muhlleitner, P.
Slavich, A. Voigt, (2015)149. J. Camargo-Molina, B. OLeary, W. Porod, F. Staub, JHEP 1312,
103 (2013)150. N. Blinov, D.E. Morrissey, JHEP 03, 106 (2014)151. D. Chowdhury, R.M. Godbole, K.A. Mohan, S.K. Vempati, JHEP
02, 110 (2014)
152. J. Camargo-Molina, B. OLeary, W. Porod, F. Staub, Vevacious:
A Tool For Finding The Global Minima Of One-Loop Effective Potentials With Many Scalars. (2013).arXiv:1307.1477 [hep-ph] 153. L.E. Ibanez, Nucl. Phys. B 218, 514 (1983)154. L. Alvarez-Gaume, J. Polchinski, M.B. Wise, Nucl. Phys. B 221,
495 (1983)155. C. Kounnas, A.B. Lahanas, D.V. Nanopoulos, M. Quiros, Nucl.
Phys. B 236, 438 (1984)156. L.E. Ibanez, C. Lopez, C. Munoz, Nucl. Phys. B 256, 218 (1985) 157. J. Casas, A. Lleyda, C. Muoz, Nucl. Phys. B 471, 3 (1996) 158. J. Gunion, H. Haber, M. Sher, Nucl. Phys. B 306, 1 (1988)159. G. Isidori, G. Ridol, A. Strumia, Nucl. Phys. B 609, 387 (2001) 160. S.R. Coleman, Phys. Rev. D 15, 2929 (1977)161. J. Callan, Curtis G., S.R. Coleman. Phys. Rev. D 16, 1762 (1977) 162. C.L. Wainwright, Comput. Phys. Commun. 183, 2006 (2012) 163. J. Camargo-Molina, B. Garbrecht, B. OLeary, W. Porod, F. Staub,
Phys. Lett. B 737, 156 (2014)164. CMS Collaboration, Phenomenological MSSM interpretation of the CMS 7 and 8 TeV results.CMS-PAS-SUS-13-020 (2014) 165. V. Khachatryan et al., JHEP 04, 124 (2015)166. G. Aad et al., Eur. Phys. J. C 75(7), 318 (2015)167. ATLAS Collaboration, Measurement of the W+W production cross section in proton-proton collisions at s = 8 TeV with the
ATLAS detector. ATLAS-CONF-2014-033 (2014)