1. Introduction
Global climate change is projected to have a variety of local-to-regional scale impacts on human societies and ecosystems. The severity of these impacts (risk magnitude) depends upon the extent to which nations at the global scale mitigate greenhouse gas (GHG) emissions through more effective implementation of climate policies [1]. While a flurry of studies and Integrated Assessment Models (IAMs) have independently investigated the impacts of switching mitigation policies in response to different climate scenarios, little is understood about the feedback effect of how human risk perceptions of climate change could contribute to switching climate policies for a substantive reduction in GHGs [2,3,4]. Standard Global Climate Models assume a disconnect between risk/adaptation and mitigation policies. IAMs typically ignore human risk perceptions and focus on economic dynamics to predict the effect of adaptation on mitigation policies [5,6,7]. While public opinion is a strong driver of policy change in democratic societies [8,9], the complex interactions of climate risk perceptions, beliefs about climate science, and their combined effects on support for policies designed to mitigate climate change are not very well understood. A recent systematic review of 46 empirical studies [10], for example, found that factors influencing policy support could be divided into three general categories: (i) social psychological factors such as risk perceptions, beliefs, political ideology, values, knowledge, and emotions [11,12,13,14,15,16]; (ii) perception of climate policy and its design, such as carrots vs. sticks, perceived efficacy, and fairness [11,17,18,19]; and (iii) contextual factors such as trust, norms, economic, political and geographic aspects, and media events and communications [20,21,22,23,24].
A major finding of the systematic review was that the factors within and across these three categories “cannot be considered in total isolation, as factors are connected and may interact in various ways”, yet little is known about how these factors and drivers of policy support interact [10] (page 868).
This study presents a novel machine learning approach that utilizes a probabilistic structural equation model (PSEM) to analyze the complex interactions among climate risk perceptions, beliefs about climate science, political ideology, demographic factors, and their combined effects on support for mitigation policies. With foundations in Bayesian Network theory [25,26] and information theory [27], PSEMs can use the principle of Kullback–Leibler divergence [28] to rank the relative importance of factors that explain structural drivers and dynamics of support for climate policies among different segments of populations. An advantage of this approach is that it does not require a priori assumptions about which of the many possible factors should be considered as the underlying latent variables in a model, but rather provides a formalized method to determine which are most appropriate to include.
To examine how social psychological factors such as risk perceptions, beliefs, and political ideology drive policy support, researchers frequently design surveys. Respondents answer multiple questions to measure each variable of interest, and then researchers test their hypotheses about which factors are related to higher or lower policy support. Using these survey datasets, the theoretical predictions are tested with regression modeling, mediation analyses, or relative weight analyses by estimating the relative predictive weight of each variable, e.g., [29,30,31,32] or with structural equation modeling (SEM) to test the fit of the model structure, e.g., [12,14,33]. Either way, the theory itself is used to determine the model structure that the data are fit to. Similarly, more recent meta-analyses have also merged findings from dozens of studies together to test the relationships between variables by conducting structural equation models (SEMs), for which the structure of the model was determined a priori [34,35].
Critically, however, traditional SEMs rely on a priori groupings of measured variables to analyze their combined influence as latent variable drivers of policy support. The researchers group measured variables into latent variables based on a theory, and then analyze how correlated each latent variable is with each policy outcome. As a result, these models estimate correlations between these hypothesized latent variables and policy outcomes, but do not naturally serve as independent generators of hypotheses or new theories.
To extend the capability of modeling approaches to generate new theories, we apply other methodological tools of machine learning [26,28,36,37,38,39]. Machine learning allows “generative” induction of PSEMs. That is, the models derived do not rely upon a priori assumptions about key latent variables to include but rather use the data itself to develop them. These PSEM methods can then elucidate novel patterns across the data, as measured variables from surveys may combine in unexpected ways that cross explanatory categories of existing theories to drive policy support. Applying machine learning to survey data acts as a hypothesis generator and allows us to reconceptualize, integrate, and strengthen the predictive power of existing theories to explain and predict climate policy support.
Calls to expand the methodological diversity in research on understanding public support for climate policy [40] note that methods such as machine learning have been underutilized, with some exceptions [41,42,43]. For much of the past work examining public opinions, the initial explanatory theorizing on risk perception and subsequent behavioral and policy choices assumed people were making rational, calculated decisions. They would conduct cost-benefit analyses, weighing the probability of various negative and positive outcomes, using these calculations to come to conclusions (see review [44]). However, the importance of emotions and other affective processes in risky decision-making is now quite clear (for a review, see [45]). Research on the affect heuristic finds that positive affect tends to lead people to perceive high benefits and low risk, whereas negative affect leads to the assumption of low benefits and high risks [46]. The risk-as-feelings hypothesis proposes that emotions play at least as large a role as rational accounting in making decisions about a risk [47], and research has found that these emotions often drive different decisions than the rational calculations [48]. Pre-existing fears can lead to risks being more easily amplified through media coverage and public debates about the issue [49]. Climate change risk perceptions are an area in which both affective and analytical risk processes play important roles [50,51] and both routes help predict policy support [30,32]. Analytical risk perception processes can be distinguished from affective risk perception, and under the dual processing model, both types of risk perception are used to process climate risk and determine individuals’ support or opposition for climate mitigation policies.
The machine learning PSEM approach employed in this paper can help disentangle the relative influence of analytical versus affective risk perceptions. Other work using machine learning has highlighted the insights that this approach can offer. For example, Hasanaj and Stadelmann-Steffen [41] utilized a random forest machine learning technique on Swiss and U.S. survey data to estimate the factors that best predict support for a carbon tax. They found that small variations in the risk perceptions related to climate change led to large variations in policy support, suggesting that communications aimed at emphasizing the risks from continuing climate change may help overcome people’s concerns about the costs of mitigation and the risks related to the solutions.
We analyze the complex interactions among analytical versus affective climate risk perceptions, beliefs, political ideology, demographic factors, and their combined effects on support for climate policies using a PSEM derived from the publicly available mixed-pool “Climate Change in the American Mind” (CCAM) survey dataset. The CCAM dataset was collected between 2008 and 2018 (N = 22,416) from a representative sample of the U.S. population each year [52]. We provide details on data and methodology further below. The estimated PSEM enables us to generate an integrative model that combines multiple factors to explain the observed policy support. Further, we use the generative structure of measured and latent variables derived from the PSEM to estimate a standard SEM [53,54] and evaluate its fitness with the observed data. The PSEM yields an R2 of 92.2%, which is a substantial improvement over a traditional regression analysis-based study applied to the CCAM dataset [24] that identified five manifest variables to account for 51% of the variance in policy support. Novel theoretical findings from machine learning-derived PSEM approaches can inform the design of feedback loops in Global Climate Models (GCMs) and Integrated Assessment Models (IAMs) for dynamically updating the evolution of climate policies in response to the coevolving emergence of climate risk perceptions under different pathways of global climate change. Findings from our study demonstrate that feedback driven purely by economic rationality theory embedded in GCMs and IAMs will misrepresent the evolution of national climate policies, as analytical risk perceptions explain a relatively smaller fraction of variance in support for climate policy in the case of the USA.
The machine learning approach also demonstrates that complex interaction effects of belief states combined with analytical and affective risk perceptions, as well as political ideology, party, and race, will need to be considered for informing the design of feedback loops in GCMs and IAMs that endogenously feedback the impacts of global climate change on the evolution of climate mitigation policies.
2. Materials and Methods
2.1. Sample and Dataset
We analyze the publicly available mixed-pool CCAM survey dataset [52] collected between 2008 and 2018 (N = 22,416) to identify the PSEM and test four alternate specifications of SEMs derived from PSEM. We chose the CCAM dataset since this survey systematically measured climate change beliefs and risk perceptions as well as climate policy support, which were key variables necessary for best understanding public support for climate policy.
We sampled 33 public opinion and sociodemographic questions from the CCAM survey dataset (Table 1). We selected the 14 items depicting public opinion statements from the dataset that had at least N = 17,000 responses across all years of available data, from 2008 to 2018. The 14 public opinion statements were originally designed by the survey team to measure a variety of constructs, including beliefs (items 1–3), risk perceptions (items 4–10), policy support (items 11–13), and behaviors (item 14) related to climate change [52]. We also selected 19 sociodemographic items that assessed the survey respondents’ gender, age, race, political ideology, political party, household factors, as well as the region they live in the U.S. and the year of the survey. Table 1 provides variable name, survey question, response options, and descriptive statistics for these 33 survey items. Tables S1 and S2 in Supporting Information (SI) show frequencies of responses for these 33 survey items.
Figure 1 shows the observed distribution of ideology plotted against the climate policy variable, measuring the level of support for regulating CO2 emissions. While very liberal respondents strongly support the regulation of CO2 emissions, very conservative respondents “somewhat oppose” the proposed climate policy. Somewhat liberal, somewhat conservative, and moderate respondents weakly support the proposed climate policy. Figure 2 shows that the respondents who are affiliated with the Republican party, refuse to identify their political party, or designate their party as “other” are generally not very worried about global warming. The probability density functions have, however, very different shapes for these three types of respondents. In contrast, Figure 2 shows that the majority of Democrats, Independents, and respondents with no party affiliation are “somewhat worried” about global warming. Some respondents, however, are also very worried and not at all worried across all party affiliations. Figure 3 shows the distribution of support for regulating CO2 emissions over multiple waves of the survey from 2008 to 2018. Although there is some variability in the probability density functions for each level of support during this observation period, the sample median value remains at the “somewhat support” level.
2.2. PSEM Machine Learning
Methodologically, Structural Equation Modeling (SEM) is a statistical technique for testing and estimating causal relations using a combination of statistical data and qualitative causal assumptions, a definition originally articulated by the geneticist Sewall Wright [55], refined by the economist Trygve Haavelmo [56] and the policy scientist Herbert Simon [57], and formally defined by Judea Pearl [58]. Structural Equation Models (SEMs) allow both confirmatory and exploratory modeling, meaning they are suited for both theory testing and theory development. While Probabilistic Structural Equation Models (PSEMs) are conceptually similar to traditional SEMs, PSEMs are based on a hierarchical Bayesian network structure as opposed to a series of equations. More specifically, PSEMs can be distinguished from SEMs in terms of key characteristics [28]: (i) All relationships in a PSEM are probabilistic—hence the name, as opposed to having deterministic relationships plus error terms in traditional SEMs. (ii) PSEMs are nonparametric, which facilitates the representation of nonlinear relationships, as well as relationships between categorical variables. (iii) The structure of PSEMs is partially or fully machine-learned from data. In addition, we posit that (iv) PSEMs can be used as exploratory tools to machine-learn the relationship of manifest variables with latent variables; (v) PSEMs apply machine learning techniques to estimate best-fit structural relationships among latent variables; and (vi) PSEMs apply machine learning to identify the optimal number of classes within each latent variable.
In this paper, we follow the PSEM procedure, as developed by [28] and implemented in BayesiaLab Version 10.2. The PSEM procedure involved four sequential estimation tasks, summarized in Table 2, and explained in more detail below.
Step 1: Estimation of latent variables through unsupervised hierarchical Bayesian network clustering of respondent beliefs.
Unsupervised learning is applied to discover the strongest relationships between the manifest variables, which are measured through direct survey questions (Table 1). Six unsupervised structural learning algorithms were applied to 14 manifest variables that measured respondent beliefs/opinions. SI Figures S1–S6 show the learned network structure, respectively, for the six algorithms and their associated Minimum Description Length (MDL) scores. As explained in [28], the MDL score is a two-component score, which has to be minimized to obtain the best solution. The MDL has been in use in the machine learning community for estimating the number of bits required for representing a “model” and “data given this model”. In this machine learning application, the “model” is a Bayesian network, consisting of a graph and probability tables, whose description length constitutes the first component. The second component is the log-likelihood of the data given the model, which is inversely proportional to the probability of the observations (data) given the Bayesian network (model). The following six unsupervised structural learning algorithms, available in BayesiaLab Version 10.2, were applied and their associated MDL scores were measured: (1) Max spanning tree + taboo learning (final MDL score = 435,672.512); (2) Taboo learning (final MDL score = 435,622.442); (3) EQ + taboo learning (final MDL score = 435,598.507); (4) Taboo-EQ learning (final MDL score = 435,428.195); (5) SopLEQ + taboo learning (final MDL score = 435,672.508); (6) Taboo Order learning (final MDL score = 434,211.298). The Taboo Order learning network (Figure S6) with the lowest MDL score was selected for further analysis. The Taboo Order algorithm examines node order to assess the most parsimonious causal relationships between variables.
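As a concrete illustration of the two-component score, the sketch below adds the description length of the model to the description length of the data given the model. The function name and the BIC-style cost of 0.5 · log2(N) bits per free parameter are illustrative assumptions, not BayesiaLab's exact internal encoding:

```python
import math

def mdl_score(n_free_params, n_samples, log_likelihood):
    """Two-component Minimum Description Length score, in bits."""
    # Component 1: bits needed to describe the model itself (the
    # Bayesian network graph plus its probability tables), here
    # approximated with a BIC-style cost per free parameter.
    model_bits = 0.5 * math.log2(n_samples) * n_free_params
    # Component 2: bits needed to describe the data given the model,
    # i.e., the negative log-likelihood (converted from nats to bits),
    # inversely proportional to the probability of the observations.
    data_bits = -log_likelihood / math.log(2)
    return model_bits + data_bits
```

Lower scores are better: a denser graph reduces the data component but inflates the model component, so minimizing the sum trades goodness of fit against parsimony, which is how the six candidate networks were ranked.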
Next, variable clustering, based on the learned Bayesian network with the lowest MDL score, is applied to identify groups of variables that are strongly connected. In PSEM theory, the strong intracluster connections identified in the variable clustering step are ascribed to measure a “hidden common cause”. Finally, for each cluster of variables, a data clustering algorithm is applied to the variables within the cluster only to induce a latent variable that represents the hidden cause.
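A minimal sketch of the variable-clustering idea, assuming pairwise connection strengths (e.g., arc forces or mutual information) have already been computed from the learned network; this greedy union-find grouping is a stand-in for BayesiaLab's actual clustering algorithm, and all names are hypothetical:

```python
def cluster_variables(variables, strength, threshold):
    # strength maps frozenset({a, b}) -> connection strength between
    # manifest variables a and b. Pairs above the threshold are merged
    # into the same cluster via a simple union-find.
    parent = {v: v for v in variables}

    def find(v):
        # Follow parent pointers to the cluster representative,
        # compressing the path along the way.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for pair, s in strength.items():
        if s > threshold:
            a, b = tuple(pair)
            parent[find(a)] = find(b)

    clusters = {}
    for v in variables:
        clusters.setdefault(find(v), set()).add(v)
    return list(clusters.values())
```

Each resulting cluster of strongly connected manifest variables is then treated as reflecting a hidden common cause, and within-cluster data clustering induces the corresponding latent variable.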
Step 2: Estimation of Bayesian network of latent variables that minimizes the description length.
Unsupervised learning on latent variables identified in Step 1 was implemented to identify the most likely relationships among the latent variables. At this stage, only the Taboo and EQ algorithms can be tested. Both Taboo and EQ produced similar results, shown in the left portion of Figure 4, without the demographic variables. Figure S7 shows the sensitivity of the latent class model to the choice of the maximum clustering size parameter equal to 5, 6, or 7. We chose the model with the maximum clustering size equal to 5 (Figure S7, panel a; Figure S8), which predicted four latent variables with relatively lower MDL scores compared with maximum clustering size values of 6 or 7.
Next, we reviewed the wording of the measured survey items included in each latent variable. We labeled each latent variable with a name that best matched the survey items’ CCAM intended constructs, as well as considering what theoretically united the manifest variables measured through the survey items. Then, we reviewed the predicted classes for each latent variable. We labeled each class based on the survey respondents’ common answers to the survey items included within the latent variable. Table S3 shows a comparison of survey items intended by CCAM constructs and PSEM-derived latent variables. Figures S10–S13 show marginal and conditional probability distributions of estimated classes within each latent variable. Section 3.2 provides a more detailed interpretation of the marginal and conditional probability distributions of estimated classes within each of the four measured latent variables.
Step 3: Linking latent variable PSEM with sociodemographic measured variables.
In this step, we applied the Taboo Order algorithm to generate the final PSEM shown in Figure 4. Figure S9 displays the node forces for the same final PSEM. Node force is the sum of all incoming and outgoing arc forces from a node. Arc force is computed by using the Kullback–Leibler Divergence (KLD), which compares two joint probability distributions, P and Q, defined on the same set of variables X. In this step, the latent variable conditional dependencies and causal relationships with respect to the socio-demographic measured variables were estimated.
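The Kullback–Leibler divergence underlying the arc force can be sketched as follows for discrete distributions; the toy joint distributions are hypothetical, and BayesiaLab evaluates the divergence over the network's full joint distribution with and without the arc in question:

```python
import math

def kl_divergence(p, q):
    # Kullback-Leibler divergence D(P || Q), in bits, between two
    # discrete joint distributions given as probability lists over
    # the same set of states; assumes q[i] > 0 wherever p[i] > 0.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example: the arc force of X -> Y compares the network's joint
# distribution (X and Y dependent) with that of the same network after
# removing the arc (X and Y independent). Values are hypothetical.
p_with_arc = [0.40, 0.10, 0.10, 0.40]
q_without_arc = [0.25, 0.25, 0.25, 0.25]
force = kl_divergence(p_with_arc, q_without_arc)  # > 0: the arc carries information
```

An arc force of zero would mean removing the arc leaves the joint distribution unchanged, i.e., the arc is uninformative; summing incoming and outgoing arc forces per node yields the node forces shown in Figure S9.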
Step 4: Calibration and k-fold validation of PSEM with target variable as policy support.
In k-fold validation, a dataset is divided into k folds, where in each fold, the data is randomly split into a training and testing set. The training set is used to fit the model, and the testing set is used to determine the goodness of model fit and avoid model overfitting. Table S4 shows model calibration and validation results after Step 2. This includes the estimation of the Bayesian network of latent variables that minimizes MDL. Table S5 shows model calibration results after Step 3, after linking the latent variable PSEM with sociodemographic measured variables.
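The splitting step described above can be sketched as follows; note that the text describes a random 70%/30% training/testing subsample within each fold (rather than a partition into k disjoint folds), and the sketch follows that description with an assumed seed for reproducibility:

```python
import random

def k_fold_splits(n_samples, k=10, train_frac=0.7, seed=0):
    # For each of the k folds, randomly split the sample indices into
    # a training set (used to fit the model) and a testing set (used
    # to assess goodness of fit and guard against overfitting).
    rng = random.Random(seed)
    indices = list(range(n_samples))
    folds = []
    for _ in range(k):
        rng.shuffle(indices)
        cut = int(train_frac * n_samples)
        folds.append((indices[:cut], indices[cut:]))
    return folds
```

Fit metrics such as R2, precision, and the ROC index are then averaged over the k testing sets to obtain the validation scores reported below.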
This PSEM started off with an estimated R2 value of 92.9% on the training dataset. We used k-fold validation with k = 10 for 70%–30% split in training and test datasets. Table S5 shows that the final estimated PSEM has an R2 of 92.2%, a mean precision rate of 96.8%, mean reliability of 96.0%, mean ROC index of 99.8%, and mean calibration index of 78.8%. The k-fold validation shows slightly lower skill in predicting strong opposers at 93.8%, but similar skill for lukewarm supporters at 97.2% and for strong supporters at 97.2%.
2.3. Estimating a SEM Imputed from the PSEM
We estimated SEMs using structural equation modeling algorithms in STATA 17 for Windows (StataCorp LLC, College Station, TX, USA), specifically following SEM algorithms developed by [53,54]. In SEM#1, we applied standard maximum likelihood (ML method in STATA for SEM command), in which observations with missing values were excluded from the analysis and no sampling weights (weight_aggregate variable in CCAM dataset) were assigned to observations. SEM#2 is similar to SEM#1 except that sampling weights are assigned to the observations. In SEM#3, we applied the maximum likelihood with missing value (MLMV) method for SEM command, and no sampling weights were assigned to observations. SEM#4 is similar to SEM#3, except that sampling weights are assigned to the observations.
Tables S7–S10 show the output from SEM#1 to SEM#4, respectively, including STATA command code, estimated standardized coefficients, SEM fitness statistics, and standardized direct, indirect, and total effects. The analysis involved standard linear SEM with a maximum likelihood estimation procedure and an observed information matrix. The measurement model tested the adequacy of the measured independent variable survey items as indicators of the latent variables they were purported to measure, and the structural model examined relationships among the latent variables shown in Figure 5.
We fitted a SEM (Figure 5), whose measurement and latent variable structure were derived from the PSEM. The SEM was tested with directly measured variables, in which the error terms associated with the measured variables were left free to be estimated and were also assumed to be uncorrelated with each other. The error terms of exogenous latent variables were tested for covariance based on the recommendations from the analysis of modification indices of the initial model.
2.4. Goodness-of-Fit Statistics
Goodness-of-fit statistics were used to determine the fit of each estimated SEM to the sample data. Three approaches to measuring the goodness of fit were estimated: (1) population error, (2) baseline comparison, and (3) size of residuals. Table 3 provides model fitness statistics for four SEMs estimated with both MLMV and ML methods. Based on this analysis, especially the size of residuals measured through the coefficient of determination (CD) metric, both SEM#2 and SEM#4 appear to be reasonable fits to the underlying data. Therefore, SEM#4 was chosen due to its larger sample size. The SEM#2 coefficients are generally similar in magnitude and direction to those of SEM#4.
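For intuition, the coefficient of determination for a single outcome can be sketched as below; Stata's CD statistic for a full SEM generalizes this idea via determinants of the model-implied and residual covariance matrices, so this single-equation version is only illustrative:

```python
def coefficient_of_determination(observed, predicted):
    # Single-outcome analogue of the CD fit statistic:
    # 1 - (residual sum of squares) / (total sum of squares).
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

A value near 1 indicates small residuals relative to the total variation, which is the sense in which SEM#2 and SEM#4 fit the data well.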
3. Results
3.1. Data-Driven Machine Learning Models Can Account for Complex Interactions Among Measured and Latent Variables to Explain Climate Policy Support
The machine-learned PSEM structure of the latent and measured variables (Figure 4) shows the complex pathways through which risk perceptions and beliefs interact to shape support for climate policy while accounting for sociodemographic factors in the U.S. population. Structurally, the PSEM finds that beliefs and affective risk perception each have a direct influence on policy support, while the effect of analytical risk perception on policy support is only indirect. Among the sociodemographic factors, political ideology also has a direct effect on policy support, with political party and race each having an indirect effect through their impact on affective risk perception. The remaining sixteen sociodemographic variables also have an indirect effect on policy support, mediated through race, party, and ideology. Table S6 shows standardized effect sizes and their significance levels for all latent and manifest variables shown in the PSEM (Figure 4). All variables are ranked from highest to lowest total effect size. Further, all latent and manifest variables have a statistically significant total effect on policy support determined by G-test-derived p-values, except house_head, which signifies whether the respondent is head of the household. Generally, the magnitude of the total effect size for sociopsychological latent and manifest variables is relatively higher than that for sociodemographic variables.
The standard SEM derived from the PSEM-generated structure of latent and measured variables indicates that the SEM has a reasonable fit with the sampled data for the latent variables derived from the measured survey variables (Figure 5; SEM#4 was chosen as the best fit, see more details in Section 2 Table 3). Again, ideology, political party, and race also significantly influence policy support, either directly or indirectly.
Both PSEM and SEM models indicate the importance of the main three social psychological variables (beliefs, affective risk perception, and analytical risk perception) in estimating climate policy support (Table 4). Similar to past research, these models show that these factors are more important drivers than most sociodemographic variables [31,41]. A key finding from both the PSEM and SEM models is that analytical risk perception matters relatively less than the other two factors and does not have a direct effect on policy support. Instead, analytical risk perception is largely driven by beliefs and only indirectly affects policy support through its impact on affective risk perception.
When comparing the magnitude of the standardized total effect sizes, the PSEM and SEM are substantively different (Table 4 with more details in Tables S6 and S10). PSEM predicts that beliefs (42%) and affective risk perception (41%) have the largest standardized total effect sizes on support for climate policy, with analytical risk perception (36%) having a slightly smaller effect. In contrast, the SEM predicts affective risk perception is the most influential (53%), with beliefs mattering relatively less (35%), and analytical risk perception much less (11%). These are meaningful differences, with one model giving relatively equivalent weights to each of the social psychological factors and the other concluding that affective risk perception is a much stronger driver than analytical risk perception. Additionally, the PSEM standardized total effects are quite a bit larger for each of the sociodemographic variables than in the SEM, which leads to different conclusions about how much political ideology, political party, and race are relevant for estimating climate policy support. These differences arise due to underlying assumptions of a continuous linear scale in the standard SEM estimation of latent variables versus Bayesian probability distributions underlying the estimation of discrete “classes” in the PSEM latent variables (see Figures S10–S13 for a description of the estimated classes in the latent variables).
3.2. Machine-Learned PSEM Enables Data-Driven Configuration of Measured Variables in Identification of Latent Variables and Their Class Sizes
Our analyses with both PSEM (Figure 4) and standard SEM (Figure 5) show how the individual survey items cluster into latent variables. The primary clusters somewhat align with the social psychological factors that have been theorized and empirically demonstrated to predict support for climate policy: affective risk perception, analytical risk perception, and beliefs about climate change [30,31,32]. However, the way the individual survey items cluster does not fully match the latent variable each was intended to measure.
For example, seven of the survey items were intended to measure “risk perceptions” by assessing how much the respondent perceives that climate change will harm a range of groups [52]. Three items that assess how much climate change will (1) harm themselves personally, (2) harm the US, and (3) harm developing countries all cluster together as expected, and we name this latent variable “analytical risk perception”. However, the risk perception items assessing how much climate change will (1) harm future generations or (2) harm plants and animals cluster together with the three other items intended to measure climate change “beliefs”, i.e., (3) that climate change is happening, (4) what climate change is caused by, and (5) the scientific consensus about climate change. This may suggest that future generations or plants and animals are viewed as more distant from the self and that their harm is more associated with the scientific facts related to climate change than with respondents’ own perceived risk of its effects.
Another latent variable is “affective risk perception”, which includes (1) worry about climate change, as expected, but also contains items regarding (2) how soon respondents think climate change will harm the U.S. and (3) how frequently they discuss climate change with friends and family. This suggests that emotional reactions to climate change are associated with how soon its effects are expected to be felt, more so than with the perception of impacts that may be severe but will not be felt until the future. It also suggests that interpersonal discussions about climate change are more frequently associated with emotional rather than analytical concerns.
3.3. Marginal and Conditional Probability Analysis of Policy Support Uncovers a Previously Unidentified Class of “Lukewarm Supporters”, Different from Strong Supporters and Opposers
Finally, by examining the response classes for the climate policy support latent variable, we can gain insights into which types of survey respondents tend to provide each level of policy support. When examining the marginal distributions, this PSEM predicts three response classes, with 27% of the U.S. population as “strong supporters” of climate policy action, while 59% are “lukewarm supporters” and 13% are “strong opposers” to climate policies (see Figure S10, panel a). Predicted posterior probabilities of opposers, lukewarm supporters, and strong supporters of climate policy are conditional upon beliefs, affective risk perception, analytical risk perception, ideology, party, and race, and are shown in Table 5 and Figure 6.
Strong supporters of climate policy action follow an expected pattern. They are much more likely to have alarmed beliefs, worried affective risk perception, high or moderate analytical risk perception, and a liberal political orientation. In contrast, strong opponents of climate policy are much more likely to report dismissive beliefs, perceive no affective risk or analytical risk, and have a conservative political orientation.
The conditional probability distributions of the large category of lukewarm policy supporters reveal a novel finding of this PSEM. Lukewarm supporters are more likely to have middle-range beliefs classified as concerned or cautious, but they are divided in their affective risk perception with approximately equal numbers of “not worried” and “worried”. Their analytical risk perception is also lower with more people who perceive little risk. Finally, from a political ideology standpoint, lukewarm supporters represent a relatively larger segment of moderates and “somewhat conservatives”. In sum, although they do lean towards supporting climate policy, their risk perceptions and beliefs are more moderate or lower than those who strongly support climate policies. This indicates that conceptualizing climate policy as simply having “supporters” and “opposers” may lead to inaccurate understanding and prediction since supporters represent a wide and heterogeneous swath of the population, and most of the policy supporters are less concerned about climate change than may be assumed.
Beliefs (measured with the 5-item responses shown in Figure 4) as a latent variable predicted six classes in the marginal probability distribution of the US population. As shown in Table 5 and Figure S13 (panel a), the PSEM predicts that 46.50% of the US population are alarmed by human-caused climate change (the strongest beliefs supporting human-caused climate change), while 19.41% are concerned, 11.37% cautious, 7.04% disengaged, 5.21% doubtful, and 10.48% dismissive (the strongest beliefs denying human-caused climate change). The response classes in the beliefs latent variable reveal expected findings: the model defined six distinct classes that aligned quite well with existing descriptions of The Climate Change Six Americas [6,45], and we named our classes after them. It is important to note that the generative PSEM recovered “The Climate Change Six Americas” as a “Beliefs” latent state variable, which serves as a predictor of policy support. Table 5 and Figure S13 (panels b–f) show conditional probability distributions of strong supporters, strong opponents, and lukewarm supporters for each of the six climate change belief states of the Six Americas. Notably, 77.95% of the strong policy supporters are alarmed, while fewer than 5% fall into each of the dismissive, doubtful, disengaged, and cautious classes. Conversely, 40.78% of the policy opposers are dismissive, and only about 11% each are alarmed or concerned.
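The conditional probabilities reported in Table 5 can also be inverted with Bayes’ rule to ask the reverse question, e.g., how likely an alarmed respondent is to fall into each policy-support class. A minimal sketch using the marginals and conditionals transcribed from Table 5 (rounding residue in the published values is normalized away):

```python
# Invert P(alarmed | policy class) to P(policy class | alarmed)
# using Bayes' rule with the values reported in Table 5.
policy_prior = {"opposer": 0.1315, "lukewarm": 0.5946, "supporter": 0.2738}
p_alarmed_given_policy = {"opposer": 0.1119, "lukewarm": 0.3982, "supporter": 0.7795}

# Joint probabilities P(alarmed, policy class).
joint = {k: policy_prior[k] * p_alarmed_given_policy[k] for k in policy_prior}

# Normalizing by P(alarmed) = sum of the joints yields the posterior.
p_alarmed = sum(joint.values())          # ~0.465, matching the reported marginal
posterior = {k: v / p_alarmed for k, v in joint.items()}

print({k: round(v, 3) for k, v in posterior.items()})
# Roughly: supporter ~0.46, lukewarm ~0.51, opposer ~0.03
```

In other words, although 77.95% of strong supporters are alarmed, an alarmed respondent is still more likely to be a lukewarm supporter than a strong supporter, simply because the lukewarm class is so large.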
Affective risk perception (measured with the three item responses shown in Figure 4) predicts only two classes in the US population: 58.91% of the US population is worried about climate change, while the remaining 41.09% are not worried (Table 5; see also Figure S11, panel a). Conditional probability distributions show that 86.14% of strong policy supporters are worried, and conversely, 81% of the policy opposers are not worried. Among the lukewarm supporters, 55% are worried and 45% are not worried (Table 5 and Figure S11, panels b–c).
Analytical risk perception, a latent variable measured with three item responses, estimated five classes in the marginal probability distribution of the US population. As shown in Table 5 and Figure S12 (panel a), 22.11% of the US population perceived high risk from climate change, 30.46% moderate risk, 18.73% little risk, and 15.94% no risk, while 12.77% do not know about climate change risk. From the conditional probability distributions (Table 5 and Figure S12, panels b–f), we estimate that 37% of strong policy supporters perceive high risk and 38% perceive moderate risk from climate change. Conversely, we find that 47% of strong opponents of climate policy are risk deniers. Among the lukewarm supporters, 31% perceive moderate risk and 22% little risk, and this class also includes 15% risk deniers and 13.5% who do not know.
Further, the marginal and conditional probability distributions for ideology, party, and race (Table 5) are consistent with findings of the previous literature [16]. Notably, 28% of the strong opposers hold a very conservative ideology and another 28% a conservative ideology; conversely, 40% of the strong supporters are moderate and 27% are somewhat liberal. Lukewarm supporters are predicted to be 45% moderate and 24% somewhat conservative. We also find that 38% of the strong opposers are likely Republicans, 18% Democrats, and 20% Independents (Table 5). Among the strong supporters, 16% are Republicans, 46% Democrats, and 23% Independents. Lukewarm policy supporters are likely 26% Republican, 32% Democrat, and 24% Independent.
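As a consistency check, the marginal class probabilities in Table 5 should be recoverable from the conditionals via the law of total probability. A short sketch with the dismissive-beliefs row (numbers transcribed from Table 5):

```python
# Law of total probability: P(dismissive) should equal the policy-class-weighted
# average of P(dismissive | policy class), using values reported in Table 5.
policy_marginal = [0.1315, 0.5946, 0.2738]            # opposers, lukewarm, supporters
p_dismissive_given_policy = [0.4078, 0.0792, 0.0149]  # same class order

p_dismissive = sum(m * c for m, c in zip(policy_marginal, p_dismissive_given_policy))
print(round(p_dismissive, 4))  # ~0.1048, matching the reported 10.48% marginal
```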
4. Discussion
By utilizing machine learning to better understand how social psychological processes interact to generate support for climate policy, this model demonstrates how various aspects of risk perception coalesce in the minds of Americans when considering climate change. The PSEM identified latent variables that can be interpreted through a dual-processing lens [59,60,61,62], in which affective and analytical risk perceptions emerge as separate but related factors, with affective risk perception as the more proximal predictor of climate policy support. This further supports the importance of considering emotional processes in understanding how people perceive and respond to climate change [31], confirms the “risk as feelings” hypothesis [47,48], and supports work finding that “worry” and “concern” about climate change are regularly among the strongest predictors of policy support [32]. The primacy of emotional and affective risk has also been observed in various real-world risk-taking contexts, such as finance [63] and health decision-making [64].
This PSEM also showed that beliefs have a large effect on policy support, both directly and indirectly through their influence on analytical and affective risk perceptions. This is consistent with research showing that climate change belief is often predictive of policy support (see meta-analysis [65]). This may be partially due to political ideology feeding into beliefs, or beliefs driving political ideology, and these political elements may be driving how beliefs shape the other risk perception processes. On the other hand, since analytical and affective risk perceptions are not directly related to political ideology, this offers some evidence that while beliefs are very polarized in the U.S., emotional concerns about climate change and estimates of the likelihood of climate change affecting humans may not be polarized to the same extent.
Additionally, the clustering of survey items revealed that affective risk perception includes both emotional worry and the perception that climate change is already affecting the U.S. In contrast, the latent variable we label “analytical risk perception” includes perceptions of how much the respondent will be impacted by climate change, without a timeline for when these impacts will occur. This suggests that a sense of urgency about immediate risk aligns with emotional responses, whereas ratings of the likelihood of more distant impacts form their own, separate set of risk perceptions. Past work on climate change risk perceptions has similarly theorized that worry represents a more emotional state that links more directly to adaptive behavioral responses such as policy support, while ratings of how likely climate-related impacts are to occur reflect a more cognitive component with less direct motivational influence [51].
Other work on psychological distance highlights how perceptions about the temporal and spatial distance of climate change can alter people’s risk perceptions and support for climate policies [66,67]. For example, personally experiencing climate change effects through extreme weather events leads to greater concern and support for climate policies (for review, see [51]), perhaps due to the psychological distance from climate change’s effects being reduced. Although we cannot determine whether those who scored higher in affective risk perception necessarily experienced climate change impacts directly (nor if those with lower affective risk perception had not), this may be a factor that differentiates individuals’ experiences and opinions.
Reviewing the response classes for each of the latent variables also revealed some new ways of conceptualizing these factors. For one, it demonstrates the value of moving beyond a linear or two-category partitioning of support for climate policy. Some work has suggested that a majority of people support climate policy (e.g., see [30,68]), but this analysis indicates that the distinction is not simply between support and opposition: our unsupervised machine learning model revealed a third category of “lukewarm supporters”. The strong supporters and opposers have clearer profiles in terms of their beliefs, risk perceptions, and political ideologies, while the lukewarm supporters vary considerably on these factors. They tend towards the middle range in beliefs and risk perceptions, but this group also contains a fair number of respondents at both extremes.
Indeed, while they are more likely to “somewhat support” these policies, they are also more likely to “somewhat oppose” climate policies than the other two opinion groups (see Figure S10, panel c). They are likely inconsistent in their policy support, which may vary depending on the details of the policy and how it is implemented. This may be the group of people most influenced by policy factors beyond concern about climate change itself, such as trust in those implementing the policy or perceptions of fairness that have been found to alter policy support in previous research [10,42].
Climate change is a global threat, but there are vast differences in the ways Americans perceive and respond to this risk. Much of the initial theorizing on risk perception and subsequent behavior choices, including voting for pro-environmental policies, assumed that these were calculated responses. Decision-making about supporting public policies addressing environmental risk was the result of cost-benefit analyses, estimating the relative probability and severity of a risk’s negative and positive outcomes, in which knowledge and logical calculations led to conclusions (see review [44]). However, research in the last few decades has demonstrated the importance of emotions and other affective processes in influencing risk perceptions [46,47]. These findings support the broader dual processing models of thinking and decision-making, in which evaluations from the emotionally-driven and intuitive “experiential” route and the analytically driven and deliberate “rational” route work together to lead to outcomes [59,60].
There are emotion-specific effects on risk perceptions. Anger leads to more optimistic appraisals and fear to more pessimistic appraisals [69], and emotions associated with certainty (anger, happiness, disgust) lead to the use of heuristics in decision-making, while emotions associated with uncertainty (fear, hope, surprise) lead to more systematic decision-making [62]. Anticipation of emotions also alters risk perceptions: anticipated regret increases the perception of risk and reduces risky behavior [61], while anticipation of positive feelings leads to less risk aversion [70]. There is also a growing body of evidence that analytical reasoning is often less effective in people who have emotional dysfunction [71], and that intuitive processes can help people make decisions about complex problems that reflect their best interests [72], although this point is debated in the field [73]. Research on emotional responses to risk has likewise demonstrated the power of the “affect heuristic” in helping people make balanced decisions [74].
This research contributes to ongoing efforts to accelerate the transition to a sustainable planet by reducing the threat of climate change through effective, yet publicly supported, climate mitigation policies. This is especially timely given heightened societal awareness of and concern about the connections between climate change, global security, and sustainable development goals. This work provides specific insights into which psychological factors differ between those who perceive greater or lesser risk from climate change and between those who do and do not support climate policy, and it indicates where public outreach efforts may make the most difference.
This study is limited by the cross-sectional nature of the survey data used to train and test the machine-learned PSEM. While measured over time, the CCAM survey data are not a panel dataset in which the same respondents are repeatedly measured. Future research efforts should focus on collecting panel data. More advanced, dynamic machine learning models, such as dynamic Bayesian network models, latent transition analysis, Long Short-Term Memory (LSTM) networks, and transformer models, could then be applied to panel datasets to model the impact of climate risk perceptions on support for climate policy. Another limitation of the dataset is its focus on the USA; similar studies need to be conducted for the 200+ other countries that are also implicated in producing GHG emissions. A third limitation is the design of the survey instruments. Alternate social psychological, behavioral, economic, political economy, and policy design theories need to be systematically incorporated in designing data collection protocols, which in turn will likely generate more robust and even foundational machine learning models. A fourth limitation is the measurement error that comes with survey instruments. Quantification of theoretical constructs on ordinal or cardinal scales may induce biases, which in turn decrease both the internal and external validity of conclusions derived from survey-based models. Recent advancements in generative Artificial Intelligence, e.g., Large Language Models (LLMs) built on transformer architectures, may provide more sophisticated and nuanced approaches to theorizing about and testing the measurement of climate risk perceptions and public support for climate policies.
5. Conclusions
In conclusion, the PSEM approach allows the construction of a model to explain support for climate policy without a priori assumptions about the relationships between survey items representing social and psychological factors. Our results show the model is generally consistent with some aspects of theory while creating novel insights. We find that the survey items frequently cluster in ways that align with the targeted social psychological factors, but in other cases, certain risk perception items align more with beliefs than they do with other affective and analytical risk perceptions. Additionally, we find that belief and affective risk perception each have a larger and more direct influence on climate policy support than analytical risk perception, highlighting the importance of both scientific understanding and emotional responses to the climate crisis. We also find that quite a large portion of the population includes lukewarm supporters of climate policy, who differ meaningfully from strong supporters. This can indicate hope for the future because this group may be particularly persuadable and open to new climate policies if they are proposed in ways that resonate with them. The machine-learned PSEM presented in this study can be tested as a feedback loop in next-generation GCMs and IAMs to represent the impact of global climate change-induced variability in climate risk perceptions and their dynamic, coevolutionary impacts on the emergence of public support for climate mitigation policies.
Conceptualization, A.Z., K.L., N.H.F., L.J.G. and B.B.; methodology, A.Z., K.L. and N.H.F.; software, A.Z.; validation, A.Z., K.L., N.H.F., L.J.G. and B.B.; formal analysis, A.Z., K.L. and N.H.F.; investigation, A.Z., K.L., N.H.F., L.J.G. and B.B.; data curation, A.Z.; writing—original draft preparation, A.Z. and K.L.; writing—review and editing, A.Z., K.L., N.H.F., L.J.G. and B.B.; visualization, A.Z.; supervision, A.Z., K.L., N.H.F., L.J.G. and B.B.; project administration, A.Z., K.L., N.H.F., L.J.G. and B.B.; funding acquisition, A.Z., K.L., N.H.F., L.J.G. and B.B. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
Publicly available dataset used in this study [
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. The violin plot shows ideology on the x-axis and the level of support for regulating CO2 on the y-axis. The white dot marks the median, and the thick line shows the interquartile range, with whiskers extending to the upper and lower adjacent values, overlaid on a density plot of the data.
Figure 2. The violin plot shows political party affiliation on the x-axis and respondents’ level of worry about global warming on the y-axis. The white dot marks the median, and the thick line shows the interquartile range, with whiskers extending to the upper and lower adjacent values, overlaid on a density plot of the data.
Figure 3. The violin plot shows observation year on the x-axis and respondents’ level of support for regulating CO2 on the y-axis. The white dot marks the median, and the thick line shows the interquartile range, with whiskers extending to the upper and lower adjacent values, overlaid on a density plot of the data.
Figure 4. Machine-learned structure of the PSEM (R2 = 92.2%). Nodes are scaled to represent the standardized total effect of all latent variables (red outlined nodes) and measured variables (no outline nodes) on policy support (the target node). The total effect is estimated as the derivative of the target node with respect to the driver node. The standardized total effect represents the total effect multiplied by the ratio of the standard deviation of the driver node and the standard deviation of the target node (see [27,28]). The width of line links between nodes shows the strength of Symmetric Relative Mutual Information (SRMI) among each variable in the PSEM. Node names for survey items are explained in Table 1.
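The standardized total effect defined in the Figure 4 caption (derivative of the target with respect to the driver, rescaled by the ratio of standard deviations) can be illustrated numerically. This is a hedged sketch with a toy linear driver-target model, not the actual PSEM; the derivative is approximated by a central finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model standing in for a driver -> target relationship.
driver = rng.normal(0.0, 2.0, size=10_000)
target = 0.8 * driver + rng.normal(0.0, 1.0, size=10_000)

def total_effect(f, x, eps=1e-4):
    """Central finite-difference derivative of the target w.r.t. the driver."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: 0.8 * x                       # E[target | driver] in this toy model
raw_effect = total_effect(f, driver).mean()  # recovers the slope, ~0.8

# Standardized total effect = raw effect * sd(driver) / sd(target).
std_effect = raw_effect * driver.std() / target.std()
print(round(std_effect, 2))  # ~0.85 in this toy example
```

The standardization puts drivers with different measurement scales on a common footing, which is what allows the node sizes in Figure 4 to be compared directly.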
Figure 5. Standardized direct effect coefficients estimated for the SEM #4 structure learned from the PSEM are shown for all measured and latent variables. *** indicates significance at p < 0.001. Results of the SEM estimated by applying the Maximum Likelihood with Missing Values (MLMV) algorithm in STATA are presented in Table S10. Model fitness scores and the decomposition of direct, indirect, and total effect sizes and their relative statistical significance are also shown.
Figure 6. PSEM posterior mean analysis: normalized mean values conditional on policy support. The figure displays how strong supporters, lukewarm supporters, and strong opposers of climate policy differ in their responses for each measured survey item and derived latent variable. The “prior” (red line) represents the normalized means for the whole sample. Means for strong opposers are shown in green, lukewarm supporters in blue, and strong supporters in pink. Node names for survey items are explained in Table 1.
Variable name, survey question, response options, and descriptive statistics of the survey sample. Survey methods, codebook, and data tables are available in a public repository [
| Variable Name | Survey Question | Response Options | Obs | Mean | S.D. |
|---|---|---|---|---|---|
| Public Opinion Statements | |||||
| 1. happening | Recently, you may have noticed that global warming has been getting some attention in the news. Global warming refers to the idea that the world’s average temperature has been increasing over the past 150 years, may be increasing more in the future, and that the world’s climate may change as a result. What do you think: Do you think that global warming is happening? | −1. Refused | 22,416 | 2.49 | 0.78 |
| 2. cause_recoded | Assuming global warming is happening, do you think it is... (Recoded to include open ends) | −1. Refused | 22,416 | 4.97 | 1.21 |
| 3. sci_consensus | Which comes closest to your own view? | −1. Refused | 21,086 | 2.74 | 1.21 |
| 4. worry | How worried are you about global warming? | −1. Refused | 22,416 | 2.53 | 0.97 |
| 5. harm_personally | How much do you think global warming will harm: You personally | −1. Refused | 22,416 | 1.99 | 1.20 |
| 6. harm_US | How much do you think global warming will harm: People in the United States | −1. Refused | 22,416 | 2.35 | 1.32 |
| 7. harm_dev_countries | How much do you think global warming will harm: People in developing countries | −1. Refused | 22,416 | 2.47 | 1.43 |
| 8. harm_future_gen | How much do you think global warming will harm: Future generations of people | −1. Refused | 22,416 | 2.75 | 1.46 |
| 9. harm_plants_animals | How much do you think global warming will harm: Plant and animal species | −1. Refused | 21,086 | 2.75 | 1.43 |
| 10. when_harm_US | When do you think global warming will start to harm people in the United States? | −1. Refused | 22,416 | 3.87 | 1.96 |
| 11. reg_CO2_pollutant | How much do you support or oppose the following policies? | −1. Refused | 21,406 | 2.84 | 1.09 |
| 12. reg_utilities | How much do you support or oppose the following policies? | −1. Refused | 17,390 | 2.61 | 1.16 |
| 13. fund_research | How much do you support or oppose the following policies? | −1. Refused | 22,416 | 3.09 | 1.06 |
| 14. discuss_GW | How often do you discuss global warming with your family and friends? | −1. Refused | 22,416 | 2.11 | 0.89 |
| Sociodemographic variables | |||||
| 1. gender | Are you…? | 1. Male | 22,416 | 1.51 | 0.49 |
| 2. age_category | How old are you? [recoded] | 1. 18–34 years | 22,416 | 2.23 | 0.78 |
| 3. educ_category | What is the highest level of school you have completed? [recoded] | 1. Less than high school | 22,416 | 2.90 | 0.96 |
| 4. income_category | Responses to “income” were categorized into | 1. Less than $50,000 | 22,416 | 1.87 | 0.80 |
| 5. race | Responses to “race” were categorized into the following four groups. | 1. White, non-Hispanic | 22,416 | 1.51 | 0.98 |
| 6. ideology | In general, do you think of yourself as... | −1. Refused | 22,416 | 3.04 | 1.20 |
| 7. party | Generally speaking, do you think of yourself as a... | −1. Refused | 22,416 | 2.32 | 1.26 |
| 8. registered_voter | Are you currently registered to vote, or not | −1. Refused | 22,416 | 1.24 | 0.82 |
| 9. region9 | Computed based on state of residence | 1. New England | 22,416 | 6.06 | 5.25 |
| 10. religion | What is your religion? | −1. Refused | 22,416 | 6.06 | 5.25 |
| 11. evangelical | Would you describe yourself as “born-again” or evangelical? | −1. Refused | 22,416 | 1.79 | 0.64 |
| 12. service_attendance | How often do you attend religious services? | −1. Refused | 22,416 | 3.08 | 1.80 |
| 13. marit_status | Are you now…? | 1. Married | 22,416 | 2.36 | 1.80 |
| 14. employment | Do any of the following currently describe you? | 1. Working—as a paid employee | 22,416 | 2.93 | 2.18 |
| 15. house_head | Respondents were asked “Is your residence in…” with response options “Your name only”, “Your name with someone else’s name (jointly owned or rented)”, or “Someone else’s name only”. Respondents who said “Someone else’s name only” were coded as 0 = “Not head of household;” the other two responses were coded as 1 = “Head of household” | 1. Not head of household | 22,416 | 1.83 | 0.38 |
| 16. house_size | How many people live in your household [recoded] | Open ended | 22,416 | 2.67 | 1.47 |
| 17. house_type | Which best describes the building where you live? | 1. One-family house detached from any other house | 22,416 | 1.53 | 0.92 |
| 18. house_own | Are your living quarters… | 1. Owned by you or someone in your household | 22,416 | 1.27 | 0.49 |
| 19. year | Year of survey data collection | 1. 2008, | 22,416 | 5.72 | 2.88 |
| weight_wave | Sampling weight specific to each wave | -- | 22,416 | 0.99 | 0.66 |
| weight_aggregate | Sampling weight if aggregating multiple waves | -- | 22,416 | 0.99 | 0.71 |
Four sequential estimation tasks implemented for estimating the PSEM.
| Step Number | Description |
|---|---|
| Step 1 | Estimation of latent variables through unsupervised hierarchical Bayesian network clustering of respondent beliefs. |
| Step 2 | Estimation of Bayesian network of latent variables that minimizes the description length. |
| Step 3 | Linking latent variable PSEM with sociodemographic measured variables. |
| Step 4 | Calibration and k-fold validation of PSEM with target variable as policy support. |
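Step 1 above groups respondents into latent classes directly from their item responses. The paper does this with hierarchical Bayesian network clustering in the PSEM; as an illustrative stand-in only, the same idea can be sketched with an expectation-maximization (EM) fit of a simple latent class model on simulated binary survey items (all data and class structure here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for Step 1: EM for a two-class latent class model on
# binary items (the actual PSEM uses hierarchical Bayesian network clustering).
n, k, j = 2000, 2, 4                        # respondents, latent classes, items

# Simulate two response classes with different endorsement probabilities.
true_theta = np.array([[0.9, 0.8, 0.85, 0.7],   # e.g., a "worried" class
                       [0.2, 0.1, 0.15, 0.3]])  # e.g., a "not worried" class
z = rng.integers(0, k, size=n)
x = (rng.random((n, j)) < true_theta[z]).astype(float)

# EM for mixture weights pi and per-class item endorsement probabilities theta.
pi = np.full(k, 1.0 / k)
theta = rng.uniform(0.3, 0.7, size=(k, j))
for _ in range(200):
    # E-step: responsibilities P(class | responses) from Bernoulli likelihoods.
    log_lik = x @ np.log(theta.T) + (1 - x) @ np.log(1 - theta.T) + np.log(pi)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixture weights and item endorsement probabilities.
    pi = resp.mean(axis=0)
    theta = (resp.T @ x) / resp.sum(axis=0)[:, None]
    theta = theta.clip(1e-6, 1 - 1e-6)

print(np.round(np.sort(pi), 2))  # recovered class sizes, near the true 50/50 split
```

The recovered class sizes play the role of the marginal (a priori) class probabilities, and the responsibilities play the role of the posterior class memberships reported in Table 5.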
Goodness-of-fit (GoF) statistics estimated for four alternate specifications of the standard SEM.
| | Recommended Goodness-of-Fit (GoF) Value [ | SEM#1 | SEM#2 | SEM#3 | SEM#4 |
|---|---|---|---|---|---|
| Sample Size | -- | 16,380 | 16,380 | 22,416 | 22,416 |
| (1) Population error [Root Mean Squared Error of Approximation, RMSEA] | Less than 0.1 | 0.08 | RMSEA not reported due to model fit with vce (robust) | 0.08 | RMSEA not reported due to model fit with vce (robust) |
| (2A) Baseline comparison [Comparative Fit Index, CFI] | Closer to 1 | 0.92 | CFI not reported due to model adding sampling weights | 0.92 | CFI not reported due to model adding sampling weights |
| (2B) Baseline comparison [Tucker Lewis Index, TLI] | Closer to 1 | 0.90 | TLI not reported due to model adding sampling weights | 0.90 | TLI not reported due to model adding sampling weights |
| (3) Size of residuals [Standardized Root Mean Squared Residual, SRMR] | Less than 0.08 | 0.06 | 0.06 | SRMR is not reported due to missing value treatment. | SRMR is not reported due to missing value treatment. |
| (4) Size of residuals [Coefficient of determination, CD] | Less than 0.08 | 0.10 | 0.07 | 0.10 | 0.08 |
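The indices in the table have standard closed forms. A hedged sketch with purely illustrative chi-square values (not the paper's), showing how RMSEA, CFI, and TLI are computed from the model and baseline chi-square statistics:

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root Mean Squared Error of Approximation (population error)."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index: improvement of the model over the baseline."""
    return 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_b - df_b, chi2_m - df_m, 0.0)

def tli(chi2_m, df_m, chi2_b, df_b):
    """Tucker-Lewis Index, penalizing model complexity via chi2/df ratios."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

# Illustrative values only: a model chi2 of 980 on 90 df with n = 16,380,
# against a baseline chi2 of 12,000 on 105 df.
print(round(rmsea(980, 90, 16_380), 3))    # ~0.025, under the "less than 0.1" guideline
print(round(cfi(980, 90, 12_000, 105), 3))  # ~0.925, "closer to 1" indicates better fit
```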
Estimated standardized total effect sizes predicting policy support. In parentheses, G-test scores are reported for PSEM and z-test scores for SEM. All estimated effects are significant at p < 0.001. This is SEM#4 from
| | PSEM | SEM |
|---|---|---|
| Affective Risk Perception | 0.41 | 0.53 |
| Analytical Risk Perception | 0.36 | 0.11 |
| Beliefs | 0.42 | 0.35 |
| Ideology | −0.19 | −0.05 |
| Party | 0.06 | 0.02 |
| Race | 0.05 | 0.02 |
PSEM predicted marginal (a priori) and conditional (posterior) probabilities of strong opposers, lukewarm supporters, and strong supporters of climate policy conditional upon beliefs, affective risk perception, analytical risk perception, ideology, political party, and race.
| Variables and Their Categorical Class Values | Marginal (a Priori) Probability (%) | Conditional (Posterior) Probability for Strong Opposers (%) | Conditional (Posterior) Probability for Lukewarm Supporters (%) | Conditional (Posterior) Probability for Strong Supporters (%) |
|---|---|---|---|---|
| Policy Support | | | | |
| Strong opposers | 13.15 | 100.00 | | |
| Lukewarm supporters | 59.46 | | 100.00 | |
| Strong supporters | 27.38 | | | 100.00 |
| Beliefs | | | | |
| Dismissive | 10.48 | 40.78 | 7.92 | 1.49 |
| Doubtful | 5.21 | 8.22 | 4.88 | 4.46 |
| Disengaged | 7.04 | 12.79 | 8.01 | 2.19 |
| Cautious | 11.37 | 15.99 | 14.25 | 2.89 |
| Concerned | 19.41 | 11.03 | 25.12 | 11.02 |
| Alarmed | 46.50 | 11.19 | 39.82 | 77.95 |
| Affective Risk Perception | | | | |
| Not worried | 41.09 | 81.04 | 44.79 | 13.86 |
| Worried | 58.91 | 18.96 | 55.21 | 86.14 |
| Analytical Risk Perception | | | | |
| Do not know | 12.77 | 19.70 | 13.54 | 7.77 |
| No risk | 15.94 | 47.13 | 14.71 | 3.62 |
| Little risk | 18.73 | 15.95 | 22.06 | 12.82 |
| Moderate risk | 30.46 | 11.63 | 30.92 | 38.51 |
| High risk | 22.11 | 5.59 | 18.77 | 37.28 |
| Ideology | | | | |
| | 2.54 | 9.94 | 1.66 | 0.92 |
| | 7.28 | 3.36 | 4.28 | 15.67 |
| Somewhat liberal | 17.78 | 5.95 | 16.02 | 27.30 |
| Moderate | 41.14 | 25.03 | 45.37 | 39.69 |
| Somewhat conservative | 21.03 | 27.87 | 23.79 | 11.76 |
| Very conservative | 10.22 | 27.86 | 8.89 | 4.66 |
| Party | | | | |
| | 1.34 | 4.63 | 0.96 | 0.57 |
| Republican | 24.56 | 38.23 | 25.61 | 15.72 |
| Democrat | 34.13 | 18.22 | 32.15 | 46.07 |
| Independent | 23.35 | 20.41 | 24.13 | 23.05 |
| | 2.48 | 3.86 | 2.42 | 1.94 |
| | 14.15 | 14.66 | 14.73 | 12.65 |
| Race | | | | |
| | 66.05 | 71.81 | 66.48 | 62.36 |
| | 11.72 | 9.73 | 11.65 | 12.83 |
| | 7.43 | 6.19 | 7.32 | 8.25 |
| | 14.80 | 12.26 | 14.56 | 16.55 |
Supplementary Materials
The following supporting information can be downloaded at:
References
1. Zia, A. Post-Kyoto Climate Governance: Confronting the Politics of Scale, Ideology and Knowledge; Routledge: London, UK, 2013.
2. Beckage, B.; Gross, L.J.; Lacasse, K.; Carr, E.; Metcalf, S.S.; Winter, J.M.; Howe, P.D.; Fefferman, N.; Franck, T.; Zia, A. Linking models of human behaviour and climate alters projected climate change. Nat. Clim. Chang.; 2018; 8, pp. 79-84. [DOI: https://dx.doi.org/10.1038/s41558-017-0031-7]
3. Beckage, B.; Lacasse, K.; Winter, J.M.; Gross, L.J.; Fefferman, N.; Hoffman, F.M.; Metcalf, S.S.; Franck, T.; Carr, E.; Zia, A. The Earth has humans, so why don’t our climate models?. Clim. Chang.; 2020; 163, pp. 181-188. [DOI: https://dx.doi.org/10.1007/s10584-020-02897-x]
4. Zia, A. Synergies and Trade-Offs between Climate Change Adaptation and Mitigation across Multiple Scales of Governance. Adaptiveness: Changing Earth System Governance; Siebenhuener, B.; Djalante, R. Cambridge University Press: Cambridge, UK, 2021.
5. Rising, J.A.; Taylor, C.; Ives, M.C.; Ward, R.E. Challenges and innovations in the economic evaluation of the risks of climate change. Ecol. Econ.; 2022; 197, 107437. [DOI: https://dx.doi.org/10.1016/j.ecolecon.2022.107437]
6. Wilson, C.; Guivarch, C.; Kriegler, E.; Van Ruijven, B.; Van Vuuren, D.P.; Krey, V.; Schwanitz, V.J.; Thompson, E.L. Evaluating process-based integrated assessment models of climate change mitigation. Clim. Chang.; 2021; 166, pp. 1-22. [DOI: https://dx.doi.org/10.1007/s10584-021-03099-9]
7. Van Beek, L.; Oomen, J.; Hajer, M.; Pelzer, P.; van Vuuren, D. Navigating the political: An analysis of political calibration of integrated assessment modelling in light of the 1.5 C goal. Environ. Sci. Policy; 2022; 133, pp. 193-202. [DOI: https://dx.doi.org/10.1016/j.envsci.2022.03.024]
8. Burstein, P. The impact of public opinion on public policy: A review and an agenda. Political Res. Q.; 2003; 56, pp. 29-40. [DOI: https://dx.doi.org/10.1177/106591290305600103]
9. Shapiro, R.Y. Public opinion and American democracy. Public Opin. Q.; 2011; 75, pp. 982-1017. [DOI: https://dx.doi.org/10.1093/poq/nfr053]
10. Drews, S.; Van den Bergh, J.C. What explains public support for climate policies? A review of empirical and experimental studies. Clim. Policy; 2016; 16, pp. 855-876. [DOI: https://dx.doi.org/10.1080/14693062.2015.1058240]
11. Attari, S.Z.; Schoen, M.; Davidson, C.I.; DeKay, M.L.; de Bruin, W.B.; Dawes, R.; Small, M.J. Preferences for change: Do individuals prefer voluntary actions, soft regulations, or hard regulations to decrease fossil fuel consumption?. Ecol. Econ.; 2009; 68, pp. 1701-1710. [DOI: https://dx.doi.org/10.1016/j.ecolecon.2008.10.007]
12. Dietz, T.; Dan, A.; Shwom, R. Support for climate change policy: Social psychological and social structural influences. Rural Sociol.; 2007; 72, pp. 185-214. [DOI: https://dx.doi.org/10.1526/003601107781170026]
13. Leiserowitz, A. Climate change risk perception and policy preferences: The role of affect, imagery, and values. Clim. Chang.; 2006; 77, pp. 45-72. [DOI: https://dx.doi.org/10.1007/s10584-006-9059-9]
14. McCright, A.M.; Dunlap, R.E.; Xiao, C. Perceived scientific agreement and support for government action on climate change in the USA. Clim. Chang.; 2013; 119, pp. 511-518. [DOI: https://dx.doi.org/10.1007/s10584-013-0704-9]
15. Steg, L.; Dreijerink, L.; Abrahamse, W. Factors influencing the acceptability of energy policies: A test of VBN theory. J. Environ. Psychol.; 2005; 25, pp. 415-425. [DOI: https://dx.doi.org/10.1016/j.jenvp.2005.08.003]
16. Zia, A.; Todd, A.M. Evaluating the effects of ideology on public understanding of climate change science: How to improve communication across ideological divides?. Public Underst. Sci.; 2010; 19, pp. 743-761. [DOI: https://dx.doi.org/10.1177/0963662509357871]
17. Bostrom, A.; O’Connor, R.E.; Böhm, G.; Hanss, D.; Bodi, O.; Ekström, F.; Halder, P.; Jeschke, S.; Mack, B.; Qu, M. Causal thinking and support for climate change policies: International survey findings. Glob. Environ. Chang.; 2012; 22, pp. 210-222. [DOI: https://dx.doi.org/10.1016/j.gloenvcha.2011.09.012]
18. Patt, A.G.; Weber, E.U. Perceptions and communication strategies for the many uncertainties relevant for climate policy. Wiley Interdiscip. Rev. Clim. Chang.; 2014; 5, pp. 219-232. [DOI: https://dx.doi.org/10.1002/wcc.259]
19. Steg, L.; Dreijerink, L.; Abrahamse, W. Why are energy policies acceptable and effective?. Environ. Behav.; 2006; 38, pp. 92-111. [DOI: https://dx.doi.org/10.1177/0013916505278519]
20. Adaman, F.; Karalı, N.; Kumbaroğlu, G.; Or, İ.; Özkaynak, B.; Zenginobuz, Ü. What determines urban households’ willingness to pay for CO2 emission reductions in Turkey: A contingent valuation survey. Energy Policy; 2011; 39, pp. 689-698. [DOI: https://dx.doi.org/10.1016/j.enpol.2010.10.042]
21. Franzen, A.; Vogl, D. Two decades of measuring environmental attitudes: A comparative analysis of 33 countries. Glob. Environ. Chang.; 2013; 23, pp. 1001-1008. [DOI: https://dx.doi.org/10.1016/j.gloenvcha.2013.03.009]
22. O’Connor, R.E.; Bard, R.J.; Fisher, A. Risk perceptions, general environmental beliefs, and willingness to address climate change. Risk Anal.; 1999; 19, pp. 461-471. [DOI: https://dx.doi.org/10.1111/j.1539-6924.1999.tb00421.x]
23. Owen, A.L.; Conover, E.; Videras, J.; Wu, S. Heat waves, droughts, and preferences for environmental policy. J. Policy Anal. Manag.; 2012; 31, pp. 556-577. [DOI: https://dx.doi.org/10.1002/pam.21599]
24. Petrovic, N.; Madrigano, J.; Zaval, L. Motivating mitigation: When health matters more than climate change. Clim. Chang.; 2014; 126, pp. 245-254. [DOI: https://dx.doi.org/10.1007/s10584-014-1192-2]
25. Marcot, B.G.; Penman, T.D. Advances in Bayesian network modelling: Integration of modelling technologies. Environ. Model. Softw.; 2019; 111, pp. 386-393. [DOI: https://dx.doi.org/10.1016/j.envsoft.2018.09.016]
26. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Elsevier: Amsterdam, The Netherlands, 2014.
27. Kullback, S. Information Theory and Statistics; Courier Corporation: North Chelmsford, MA, USA, 1997.
28. Conrady, S.; Jouffe, L. Bayesian Networks and BayesiaLab: A Practical Introduction for Researchers; Bayesia USA: Nashville, TN, USA, 2015.
29. Bouman, T.; Verschoor, M.; Albers, C.J.; Böhm, G.; Fisher, S.D.; Poortinga, W.; Whitmarsh, L.; Steg, L. When worry about climate change leads to climate action: How values, worry and personal responsibility relate to various climate actions. Glob. Environ. Chang.; 2020; 62, 102061. [DOI: https://dx.doi.org/10.1016/j.gloenvcha.2020.102061]
30. Ding, D.; Maibach, E.W.; Zhao, X.; Roser-Renouf, C.; Leiserowitz, A. Support for climate policy and societal action are linked to perceptions about scientific agreement. Nat. Clim. Chang.; 2011; 1, pp. 462-466. [DOI: https://dx.doi.org/10.1038/nclimate1295]
31. Goldberg, M.H.; Gustafson, A.; Ballew, M.T.; Rosenthal, S.A.; Leiserowitz, A. Identifying the most important predictors of support for climate policy in the United States. Behav. Public Policy; 2021; 5, pp. 480-502. [DOI: https://dx.doi.org/10.1017/bpp.2020.39]
32. Smith, N.; Leiserowitz, A. The role of emotion in global warming policy support and opposition. Risk Anal.; 2014; 34, pp. 937-948. [DOI: https://dx.doi.org/10.1111/risa.12140]
33. Marquart-Pyatt, S.T.; Qian, H.; Houser, M.K.; McCright, A.M. Climate change views, energy policy preferences, and intended actions across welfare state regimes: Evidence from the European Social Survey. Int. J. Sociol.; 2019; 49, pp. 1-26. [DOI: https://dx.doi.org/10.1080/00207659.2018.1560979]
34. Bamberg, S.; Möser, G. Twenty years after Hines, Hungerford, and Tomera: A new meta-analysis of psycho-social determinants of pro-environmental behaviour. J. Environ. Psychol.; 2007; 27, pp. 14-25. [DOI: https://dx.doi.org/10.1016/j.jenvp.2006.12.002]
35. Klöckner, C.A. A comprehensive model of the psychology of environmental behaviour—A meta-analysis. Glob. Environ. Chang.; 2013; 23, pp. 1028-1038. [DOI: https://dx.doi.org/10.1016/j.gloenvcha.2013.05.014]
36. Barber, D. Bayesian Reasoning and Machine Learning; Cambridge University Press: Cambridge, UK, 2012.
37. Binder, J.; Koller, D.; Russell, S.; Kanazawa, K. Adaptive probabilistic networks with hidden variables. Mach. Learn.; 1997; 29, pp. 213-244. [DOI: https://dx.doi.org/10.1023/A:1007421730016]
38. Cui, G.; Wong, M.L.; Lui, H.-K. Machine learning for direct marketing response models: Bayesian networks with evolutionary programming. Manag. Sci.; 2006; 52, pp. 597-612. [DOI: https://dx.doi.org/10.1287/mnsc.1060.0514]
39. Frey, B.J. Graphical Models for Machine Learning and Digital Communication; MIT Press: Cambridge, MA, USA, 1998.
40. Kallbekken, S. Research on public support for climate policy instruments must broaden its scope. Nat. Clim. Chang.; 2023; 13, pp. 206-208. [DOI: https://dx.doi.org/10.1038/s41558-022-01593-1]
41. Hasanaj, V.; Stadelmann-Steffen, I. Is the problem or the solution riskier? Predictors of carbon tax policy support. Environ. Res. Commun.; 2022; 4, 105001. [DOI: https://dx.doi.org/10.1088/2515-7620/ac9516]
42. Levi, S. Why hate carbon taxes? Machine learning evidence on the roles of personal responsibility, trust, revenue recycling, and other factors across 23 European countries. Energy Res. Soc. Sci.; 2021; 73, 101883. [DOI: https://dx.doi.org/10.1016/j.erss.2020.101883]
43. Povitkina, M.; Jagers, S.C.; Matti, S.; Martinsson, J. Why are carbon taxes unfair? Disentangling public perceptions of fairness. Glob. Environ. Chang.; 2021; 70, 102356. [DOI: https://dx.doi.org/10.1016/j.gloenvcha.2021.102356]
44. Yates, J. Risk-Taking Behavior; John Wiley & Sons: Hoboken, NJ, USA, 1992.
45. Visschers, V.; Wiedemann, P.M.; Gutscher, H.; Kurzenhäuser, S.; Seidl, R.; Jardine, C.; Timmermans, D. Affect-inducing risk communication: Current knowledge and future directions. J. Risk Res.; 2012; 15, pp. 257-271. [DOI: https://dx.doi.org/10.1080/13669877.2011.634521]
46. Slovic, P.; Finucane, M.L.; Peters, E.; MacGregor, D.G. The affect heuristic. Eur. J. Oper. Res.; 2007; 177, pp. 1333-1352. [DOI: https://dx.doi.org/10.1016/j.ejor.2005.04.006]
47. Loewenstein, G.F.; Weber, E.U.; Hsee, C.K.; Welch, N. Risk as feelings. Psychol. Bull.; 2001; 127, 267. [DOI: https://dx.doi.org/10.1037/0033-2909.127.2.267]
48. Weber, E.U. “Risk as feelings” and “perception matters”: Psychological contributions on risk, risk taking and risk management. The Future of Risk Management; University of Pennsylvania Press: Philadelphia, PA, USA, 2018; pp. 30-47.
49. Kasperson, R.E.; Renn, O.; Slovic, P.; Brown, H.S.; Emel, J.; Goble, R.; Kasperson, J.X.; Ratick, S. The social amplification of risk: A conceptual framework. Risk Anal.; 1988; 8, pp. 177-187. [DOI: https://dx.doi.org/10.1111/j.1539-6924.1988.tb01168.x]
50. Van der Linden, S. The social-psychological determinants of climate change risk perceptions: Towards a comprehensive model. J. Environ. Psychol.; 2015; 41, pp. 112-124. [DOI: https://dx.doi.org/10.1016/j.jenvp.2014.11.012]
51. Van der Linden, S. Determinants and measurement of climate change risk perception, worry, and concern. The Oxford Encyclopedia of Climate Change Communication; Oxford University Press: Oxford, UK, 2017.
52. Ballew, M.T.; Leiserowitz, A.; Roser-Renouf, C.; Rosenthal, S.A.; Kotcher, J.E.; Marlon, J.R.; Lyon, E.; Goldberg, M.H.; Maibach, E.W. Climate change in the American mind: Data, tools, and trends. Environ. Sci. Policy Sustain. Dev.; 2019; 61, pp. 4-18. [DOI: https://dx.doi.org/10.1080/00139157.2019.1589300]
53. Acock, A. Discovering Structural Equation Modeling Using Stata; Stata Press: College Station, TX, USA, 2013; Volume 1.
54. Ullman, J.B.; Bentler, P.M. Structural equation modeling. Handbook of Psychology; 2nd ed. John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2012.
55. Wright, S. Correlation and causation. J. Agric. Res.; 1921; 20, pp. 557-585.
56. Haavelmo, T. The statistical implications of a system of simultaneous equations. Econometrica; 1943; 11, pp. 1-12. [DOI: https://dx.doi.org/10.2307/1905714]
57. Simon, H.A. Notes on the observation and measurement of political power. J. Politics; 1953; 15, pp. 500-516. [DOI: https://dx.doi.org/10.2307/2126538]
58. Pearl, J. Causality; Cambridge University Press: Cambridge, UK, 2009.
59. Epstein, S. Integration of the cognitive and the psychodynamic unconscious. Am. Psychol.; 1994; 49, 709. [DOI: https://dx.doi.org/10.1037/0003-066X.49.8.709]
60. Kahneman, D. Thinking, Fast and Slow; Macmillan: New York, NY, USA, 2011.
61. Lagerkvist, C.J.; Okello, J.; Karanja, N. Consumers’ evaluation of volition, control, anticipated regret, and perceived food health risk. Evidence from a field experiment in a traditional vegetable market in Kenya. Food Control; 2015; 47, pp. 359-368. [DOI: https://dx.doi.org/10.1016/j.foodcont.2014.07.026]
62. Tiedens, L.Z.; Linton, S. Judgment under emotional certainty and uncertainty: The effects of specific emotions on information processing. J. Personal. Soc. Psychol.; 2001; 81, 973. [DOI: https://dx.doi.org/10.1037/0022-3514.81.6.973]
63. Weber, M.; Weber, E.U.; Nosić, A. Who takes risks when and why: Determinants of changes in investor risk taking. Rev. Financ.; 2013; 17, pp. 847-883. [DOI: https://dx.doi.org/10.1093/rof/rfs024]
64. Ferrer, R.A.; Klein, W.M. Risk perceptions and health behavior. Curr. Opin. Psychol.; 2015; 5, pp. 85-89. [DOI: https://dx.doi.org/10.1016/j.copsyc.2015.03.012]
65. Hornsey, M.J.; Harris, E.A.; Bain, P.G.; Fielding, K.S. Meta-analyses of the determinants and outcomes of belief in climate change. Nat. Clim. Chang.; 2016; 6, pp. 622-626. [DOI: https://dx.doi.org/10.1038/nclimate2943]
66. Rickard, L.N.; Yang, Z.J.; Schuldt, J.P. Here and now, there and then: How “departure dates” influence climate change engagement. Glob. Environ. Chang.; 2016; 38, pp. 97-107. [DOI: https://dx.doi.org/10.1016/j.gloenvcha.2016.03.003]
67. Spence, A.; Poortinga, W.; Pidgeon, N. The psychological distance of climate change. Risk Anal. Int. J.; 2012; 32, pp. 957-972. [DOI: https://dx.doi.org/10.1111/j.1539-6924.2011.01695.x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21992607]
68. Stokes, B.; Wike, R.; Carle, J. Global Concern about Climate Change, Broad Support for Limiting Emissions. Pew Research Center’s Global Attitudes Project. 2015; Available online: https://www.pewresearch.org/global/2015/11/05/global-concern-about-climate-change-broad-support-for-limiting-emissions/ (accessed on 17 August 2024).
69. Lerner, J.S.; Keltner, D. Beyond valence: Toward a model of emotion-specific influences on judgement and choice. Cogn. Emot.; 2000; 14, pp. 473-493. [DOI: https://dx.doi.org/10.1080/026999300402763]
70. Mellers, B.A.; McGraw, A.P. Anticipated emotions as guides to choice. Curr. Dir. Psychol. Sci.; 2001; 10, pp. 210-214. [DOI: https://dx.doi.org/10.1111/1467-8721.00151]
71. Bechara, A.; Damasio, H.; Tranel, D.; Damasio, A.R. Deciding advantageously before knowing the advantageous strategy. Science; 1997; 275, pp. 1293-1295. [DOI: https://dx.doi.org/10.1126/science.275.5304.1293]
72. Strick, M.; Dijksterhuis, A.; Bos, M.W.; Sjoerdsma, A.; Van Baaren, R.B.; Nordgren, L.F. A meta-analysis on unconscious thought effects. Soc. Cogn.; 2011; 29, pp. 738-762. [DOI: https://dx.doi.org/10.1521/soco.2011.29.6.738]
73. Acker, F. New findings on unconscious versus conscious thought in decision making: Additional empirical data and meta-analysis. Judgm. Decis. Mak.; 2008; 3, pp. 292-303. [DOI: https://dx.doi.org/10.1017/S1930297500000863]
74. Slovic, P. What’s fear got to do with it? It’s affect we need to worry about. Mo. L. Rev.; 2004; 69, 971.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
While a flurry of studies and Integrated Assessment Models (IAMs) have independently investigated the impacts of switching mitigation policies in response to different climate scenarios, little is understood about the feedback effect of how human risk perceptions of climate change could contribute to switching climate mitigation policies. This study presents a novel machine learning approach, utilizing a probabilistic structural equation model (PSEM), for understanding complex interactions among climate risk perceptions, beliefs about climate science, political ideology, demographic factors, and their combined effects on support for mitigation policies. We use the machine learning-based PSEM to identify latent variables and quantify their complex interaction effects on support for climate policy. Whereas traditional SEMs rely on an a priori assignment of manifest variables to latent variables, the PSEM presented in this study uses unsupervised algorithms to derive data-driven clusterings of manifest variables into latent variables. Further, information theoretic metrics are used to estimate both the structural relationships among latent variables and the optimal number of classes within each latent variable. The PSEM yields an R2 of 92.2% on the “Climate Change in the American Mind” (CCAM) dataset (2008–2018 [N = 22,416]), a substantial improvement over a traditional regression-based study of the same dataset, in which five manifest variables accounted for 51% of the variance in policy support. The PSEM uncovers a previously unidentified class of “lukewarm supporters” (~59% of the US population), distinct from strong supporters (27%) and opposers (13%). These lukewarm supporters represent a wide swath of the US population, but their support may be capricious and sensitive to the details of the policy and how it is implemented.
The clustering of individual survey items into latent variables reveals that the public does not treat “climate risk perceptions” as a single mental construct. Instead, PSEM path analysis supports dual processing theory: analytical and affective (emotional) risk perceptions emerge as separate, unique factors, which, along with climate beliefs, political ideology, and race, explain much of the variability in the American public’s support for climate policy. The machine learning approach demonstrates that the complex interaction effects of belief states combined with analytical and affective risk perceptions, as well as political ideology, party, and race, will need to be considered when designing feedback loops in IAMs that endogenously feed back the impacts of global climate change on the evolution of climate mitigation policies.
Details
Lacasse, Katherine 2; Fefferman, Nina H 3; Gross, Louis J 4; Beckage, Brian 5
1 Department of Community Development and Applied Economics, University of Vermont, Burlington, VT 05405, USA; Department of Computer Science, University of Vermont, Burlington, VT 05405, USA
2 Department of Psychology, Rhode Island College, Providence, RI 02908, USA
3 Department of Ecology and Evolutionary Biology, University of Tennessee, Knoxville, TN 37996, USA
4 Department of Ecology and Evolutionary Biology, University of Tennessee, Knoxville, TN 37996, USA
5 Department of Computer Science, University of Vermont, Burlington, VT 05405, USA