
    Climate communication for biologists: when a picture can tell a thousand words

    Pictures often tell a story better than the proverbial 1,000 words. However, in connection with climate change, many pictures can be highly misleading, for example, when a snowball is used to ridicule the notion of global warming or when a picture of a dead crop is supposed to alert people to climate change. We differentiate between such inappropriate pictures and those that can be used legitimately because they capture long-term trends. For example, photos of a glacier’s retreat are legitimate indicators of the long-term mass balance loss that is observed for the vast majority of glaciers around the world.

    Modeling working memory: a computational implementation of the Time-Based Resource-Sharing theory

    Working memory is a core concept in cognition, predicting about 50% of the variance in IQ and reasoning tasks. A popular test of working memory is the complex span task, in which encoding of memoranda alternates with processing of distractors. A recent model of complex span performance, the Time-Based Resource-Sharing (TBRS) model of Barrouillet and colleagues, has seemingly accounted for several crucial findings, in particular the intricate trade-off between deterioration and restoration of memory in the complex span task. According to the TBRS, memory traces decay during processing of the distractors, and they are restored by attentional refreshing during brief pauses in between processing steps. However, to date, the theory has been formulated only at a verbal level, which renders it difficult to test and to be certain of its intuited predictions. We present a computational instantiation of the TBRS and show that it can handle most of the findings on which the verbal model was based. We also show that there are potential challenges to the model that await future resolution. This instantiated model, TBRS*, is the first comprehensive computational model of performance in the complex span paradigm. The Matlab model code is available as supplementary material to this article.
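    The decay-and-refresh trade-off described in this abstract can be sketched in a few lines. This is a toy illustration under assumed exponential dynamics, not the published TBRS* Matlab code; the rate parameters and schedules below are hypothetical.

```python
import math

def tbrs_trace(decay_rate, refresh_rate, schedule):
    """Toy decay/refresh dynamics in the spirit of TBRS*.

    schedule is a list of ("busy", dt) / ("free", dt) segments.
    While attention is captured by distractor processing ("busy"),
    the trace decays exponentially; during free pauses, attentional
    refreshing restores it toward its ceiling of 1.0.
    """
    strength = 1.0
    for phase, dt in schedule:
        if phase == "busy":
            strength *= math.exp(-decay_rate * dt)
        else:
            strength = 1.0 - (1.0 - strength) * math.exp(-refresh_rate * dt)
    return strength

# Cognitive load = proportion of each cycle occupied by processing;
# the higher the load, the less free time remains for refreshing.
low_load = [("busy", 0.3), ("free", 0.7)] * 4
high_load = [("busy", 0.7), ("free", 0.3)] * 4
```

    Running the same trace through both schedules reproduces the qualitative TBRS prediction: the final trace strength is lower under the high-load schedule.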

    The pause in global warming: Turning a routine fluctuation into a problem for science

    There has been much recent published research about a putative “pause” or “hiatus” in global warming. We show that there are frequent fluctuations in the rate of warming around a longer-term warming trend, and that there is no evidence that identifies the recent period as unique or particularly unusual. In confirmation, we show that the notion of a pause in warming is considered to be misleading in a blind expert test. Nonetheless, the most recent fluctuation about the longer-term trend has been regarded by many as an explanatory challenge that climate science must resolve. This departs from long-standing practice, insofar as scientists have long recognized that the climate fluctuates, that linear increases in CO2 do not produce linear trends in global warming, and that 15-yr (or shorter) periods are not diagnostic of long-term trends. We suggest that the repetition of the “warming has paused” message by contrarians was adopted by the scientific community in its problem-solving and answer-seeking role and has led to undue focus on, and mislabeling of, a recent fluctuation. We present an alternative framing that could have avoided inadvertently reinforcing a misleading claim.

    The ‘Alice in Wonderland’ mechanics of the rejection of (climate) science: simulating coherence by conspiracism

    Science strives for coherence. For example, the findings from climate science form a highly coherent body of knowledge that is supported by many independent lines of evidence: greenhouse gas (GHG) emissions from human economic activities are causing the global climate to warm, and unless GHG emissions are drastically reduced in the near future, the risks from climate change will continue to grow and major adverse consequences will become unavoidable. People who oppose this scientific body of knowledge because the implications of cutting GHG emissions—such as regulation or increased taxation—threaten their worldview or livelihood cannot provide an alternative view that is coherent by the standards of conventional scientific thinking. Instead, we suggest that people who reject the fact that the Earth’s climate is changing due to greenhouse gas emissions (or any other body of well-established scientific knowledge) oppose whatever inconvenient finding they are confronting in piecemeal fashion, rather than systematically, and without considering the implications of this rejection for the rest of the relevant scientific theory and findings. Hence, claims that the globe “is cooling” can coexist with claims that the “observed warming is natural” and that “the human influence does not matter because warming is good for us.” Coherence between these mutually contradictory opinions can only be achieved at a highly abstract level, namely that “something must be wrong” with the scientific evidence in order to justify a political position against climate change mitigation. This high-level coherence accompanied by contradictory subordinate propositions is a known attribute of conspiracist ideation, and conspiracism may be implicated when people reject well-established scientific propositions.

    Why does higher working memory capacity help you learn?

    Algorithms for approximate Bayesian inference, such as Monte Carlo methods, provide one source of models of how people may deal with uncertainty in spite of limited cognitive resources. Here, we model learning as a process of sequential sampling, or ‘particle filtering’, and suggest that an individual’s working memory capacity (WMC) may be usefully modelled in terms of the number of samples, or ‘particles’, that are available for inference. The model qualitatively captures two distinct effects reported recently, namely that individuals with higher WMC are better able to (i) learn novel categories, and (ii) flexibly switch between different categorization strategies.
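    The ‘WMC as number of particles’ idea can be illustrated with a toy particle filter for category learning. The task setup (two binary features, one of which deterministically predicts the label), the likelihood values, and all other parameters are hypothetical choices for illustration, not taken from the paper.

```python
import random

def particle_filter_learn(n_particles, trials, rng):
    """Toy sequential-sampling learner: each particle is a hypothesis
    about which of two binary features predicts the category label.
    'WMC = number of particles' follows the abstract's proposal;
    the rest of the setup is assumed for illustration.
    """
    particles = [rng.randrange(2) for _ in range(n_particles)]
    n_correct = 0
    for f0, f1, label in trials:
        stim = (f0, f1)
        # Predict the label by majority vote over particle hypotheses.
        votes = sum(stim[h] for h in particles)
        pred = 1 if 2 * votes >= len(particles) else 0
        n_correct += pred == label
        # Reweight: particles whose feature matched the label fit better.
        weights = [0.9 if stim[h] == label else 0.1 for h in particles]
        # Resample in proportion to fit; with few particles the correct
        # hypothesis can die out entirely (a capacity-like failure).
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return n_correct / len(trials)

def mean_accuracy(n_particles, n_runs=200, n_trials=20, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        # Feature 0 deterministically gives the label; feature 1 is noise.
        trials = [(f0, rng.randrange(2), f0)
                  for f0 in (rng.randrange(2) for _ in range(n_trials))]
        total += particle_filter_learn(n_particles, trials, rng)
    return total / n_runs
```

    Averaged over many runs, a learner with more particles (higher WMC) ends up more accurate, because small particle sets are more likely to lose the correct hypothesis to resampling noise and never recover it.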

    Processing political misinformation: comprehending the Trump phenomenon

    This study investigated the cognitive processing of true and false political information. Specifically, it examined the impact of source credibility on the assessment of veracity when information comes from a polarizing source (Experiment 1), and the effectiveness of explanations when they come from one's own political party or an opposition party (Experiment 2). These experiments were conducted prior to the 2016 Presidential election. Participants rated their belief in factual and incorrect statements that President Trump made on the campaign trail; facts were subsequently affirmed and misinformation retracted. Participants then re-rated their belief immediately or after a delay. Experiment 1 found that (i) if information was attributed to Trump, Republican supporters of Trump believed it more than if it was presented without attribution, whereas the opposite was true for Democrats, and (ii) although Trump supporters reduced their belief in misinformation items following a correction, they did not change their voting preferences. Experiment 2 revealed that the explanation's source had relatively little impact, and belief updating was more influenced by perceived credibility of the individual initially purporting the information. These findings suggest that people use political figures as a heuristic to guide evaluation of what is true or false, yet do not necessarily insist on veracity as a prerequisite for supporting political candidates.

    Modeling working memory: An interference model of complex span

    This article introduces a new computational model for the complex-span task, the most popular task for studying working memory. SOB-CS is a two-layer neural network that associates distributed item representations with distributed, overlapping position markers. Memory capacity limits are explained by interference from a superposition of associations. Concurrent processing interferes with memory through involuntary encoding of distractors. Free time in-between distractors is used to remove irrelevant representations, thereby reducing interference. The model accounts for benchmark findings in four areas: (1) effects of processing pace, processing difficulty, and number of processing steps; (2) effects of serial position and error patterns; (3) effects of different kinds of item-distractor similarity; and (4) correlations between span tasks. The model makes several new predictions in these areas, which were confirmed experimentally.
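    The superposition-and-interference mechanism in this abstract can be sketched as a minimal distributed memory: random item vectors bound to overlapping position markers by Hebbian outer products, with distractor removal as subtraction. This is an illustrative sketch, not the published SOB-CS implementation; the dimensionality, overlap, and encoding strengths are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 128, 4  # vector dimensionality and list length (arbitrary)

def unit(v):
    return v / np.linalg.norm(v)

# Distributed item representations and *overlapping* position markers:
# neighbouring positions share a component, so their cues are similar.
items = [unit(rng.standard_normal(d)) for _ in range(n)]
base = [rng.standard_normal(d) for _ in range(n + 1)]
positions = [unit(base[i] + 0.5 * base[i + 1]) for i in range(n)]

# One weight matrix holds the superposition of all item-position
# associations (Hebbian outer products); crosstalk between the
# overlapping markers is what limits capacity.
W = sum(np.outer(it, p) for it, p in zip(items, positions))

def recall(W, pos, candidates):
    """Cue with a position marker, decode by best-matching candidate."""
    retrieved = W @ pos
    return int(np.argmax([retrieved @ c for c in candidates]))

# Concurrent processing: a distractor is involuntarily encoded at
# position 0; free time is then used to remove (subtract) it again,
# restoring access to the original item.
distractor = unit(rng.standard_normal(d))
W_noisy = W + 3.0 * np.outer(distractor, positions[0])
W_clean = W_noisy - 3.0 * np.outer(distractor, positions[0])
```

    Cueing the clean matrix recovers each list item in order, the distractor dominates recall at its position after involuntary encoding, and subtracting its association restores the original item, mirroring the removal-during-free-time mechanism.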

    Climate Change, Disinformation, and How to Combat It
