What are the cognitive, social, and behavioral processes underlying climate awareness and action?

CliCoLab

The Climate Cognition Lab is a psychology research lab at the Stanford Doerr School of Sustainability. We are interested in the mechanisms by which individual-level social cognition gives rise to emergent cognitive phenomena such as collective beliefs, collective decision-making, and collective action. Our work focuses on designing, testing, and implementing interventions aimed at stimulating social change and improving societal welfare. Guided by theory and striving for ecological validity, we employ a wide array of methods, including behavioral laboratory experiments, field studies, randomized controlled trials, international many-lab collaborations, agent-based modeling, and social network analysis. Our lab sits at the intersection of basic and applied science, incorporates an interdisciplinary perspective, and directly informs policy on pressing societal issues such as the climate crisis.

Research

  • What is the most effective behavioral intervention for stimulating climate action?

    Effectively reducing climate change requires dramatic, global behavior change. Yet it is unclear which strategies are most likely to motivate people to change their climate beliefs and behaviors. Here, we tested 11 expert-crowdsourced interventions on four climate mitigation outcomes: beliefs, policy support, information sharing intention, and an effortful tree-planting behavioral task. Across 59,440 participants from 63 countries, the interventions' effects were small, largely limited to non-climate-skeptics, and differed across outcomes: beliefs were strengthened most by decreasing psychological distance (by 2.3%), policy support by writing a letter to a member of a future generation (2.6%), and information sharing by negative emotion induction (12.1%). No intervention increased the more effortful behavior; several even reduced tree planting. Finally, the effects of each intervention differed depending on people's initial climate beliefs. These findings suggest that the impact of behavioral climate interventions varies across audiences and target behaviors.

  • Are climate interventions more effective when aligned with the cultural values of a target population?

    As the climate crisis demands global engagement, it is crucial to understand how interventions influence individuals across cultural backgrounds. Are interventions more effective when aligned with the cultural values of a target population? To investigate, we evaluated eleven behavioral interventions aimed at stimulating climate change mitigation across cultural individualism and collectivism orientations, in a large sample (N = 59,440) spanning 63 countries. At baseline, we found that the more individualistic a nation, the less its residents believed in climate change, supported mitigation policy, and intended to share information, though they did not plant fewer trees in an online task. Critically, some interventions were more effective in individualistic countries (decreasing psychological distance), some in collectivistic countries (emphasizing social norms), and others in both (writing a letter to a future generation). These results reveal that individualism is a significant barrier to climate mitigation, with interventions' efficacy hinging on cultural context.

  • Does the language used to refer to the changing climate make a difference in attempts to mobilize the population into action?

    Despite widespread concern about climate change, a majority of people are not engaging in climate actions necessary to help decrease the risks posed by the climate emergency. Could the language used to refer to the changing climate make a difference in attempts to mobilize the population into action? Across two experiments (Ntotal=6,132, recruited globally in 63 countries in Study 1, and a replication in the US in Study 2), we explored whether climate terminology influenced the extent to which individuals were willing to engage in preventative action. We tested the differential effect of 10 frequently used terms (i.e., “climate change”, “climate crisis”, “global warming”, “global heating”, “climate emergency”, “carbon pollution”, “carbon emissions”, “greenhouse gasses”, “greenhouse effect”, “global boiling”). Despite high willingness to engage in climate action (74% in Study 1 and 57% in Study 2), the terms had no impact. Our results suggest that subtle differences in climate change language are not a barrier to climate action.

  • How do Internet search engines portray climate change, and does that influence policy support?

    Despite the ubiquitous role Internet search engines play in information acquisition, little is known about how such algorithms portray climate change to different communities, and how these portrayals impact climate sentiment. In a sample of 47 countries, we found that preexisting nationwide climate change concern predicts the emotionality and action elicitation of climate change Google Image Search outputs. In a follow-up experiment, we found that showing users images displayed in high-preexisting-concern nations (e.g., Venezuela) increases their climate policy support more than showing them images displayed in low-preexisting-concern nations (e.g., Estonia), suggesting a cycle of climate sentiment propagation facilitated by Internet search algorithms.

  • How is gender inequality propagated by AI algorithms?

    Artificial intelligence (AI) algorithms are relied on to increase efficiency and objectivity, yet systemic social biases have been detected in these algorithms' outputs. Here, we demonstrate that gender bias in a widely used Internet search algorithm reflects the degree of gender inequality existing within a society. We then find that exposure to the gender bias patterns in algorithmic outputs can lead people to think and act in ways that reinforce societal inequality, suggesting a cycle of bias propagation between society, AI, and users. These findings call for an integrative model of ethical AI that combines human psychology with computational and sociological approaches to illuminate the formation, operation, and mitigation of algorithmic bias.

  • How can false beliefs be changed, even across ideological divides?

    Misinformation spread is among the top threats facing the world today. The current unprecedented level of exposure to false information leads people to confidently hold false beliefs. As a result, policymakers face the challenge of designing campaigns guided by empirical research to combat and prevent misinformation. One of the first steps in compiling empirically grounded recommendations is large-scale testing of interventions aimed at changing ideologically charged false beliefs. In this study, we reveal such an intervention: making large prediction errors about surprising belief-related evidence, followed by immediate feedback, led to successful updating of the corresponding beliefs. This effect held across ideological boundaries, making it a viable strategy for reducing ideologically charged misinformation.