
Thursday
Jun 28, 2018

Is the perverse effect of AOT on political polarization confounded by a missing variable? Nah.

Interesting paper on "actively open-minded thinking" (AOT) and polarization of climate change beliefs:

Stenhouse, N., Myers, T.A., Vraga, E.K., Kotcher, J.E., Beall, L. & Maibach, E.W. The potential role of actively open-minded thinking in preventing motivated reasoning about controversial science. Journal of Environmental Psychology 57, 17-24 (2018).

As Jon Corbin & I found (A note on the perverse effects of actively open-minded thinking on climate-change polarization. Research & Politics 3 (2016), available at https://doi.org/10.1177/205316801667670), and notwithstanding a representation in the research "highlights," the study finds no evidence that AOT reduces political polarization over human-caused climate change. Also consistent with our findings, the study found (according to the lead author in correspondence; the paper is ambiguous on this point) that AOT interacts with ideology, the relationship that generates the "perverse effect" that Jon & I reported.

Nevertheless, the authors of this paper purport to identify “significant problems with” Jon & my paper:

Specifically, we focus on the lack of a measure of scientific knowledge, or the interaction of scientific knowledge with political ideology, in their regression model. This is a problem because Kahan's own research (Kahan et al., 2012) has suggested that the interaction between scientific knowledge and ideology is an important influence on views on climate change, with higher scientific knowledge being associated with greater perceived risk for liberals, but lower perceived risk for conservatives.

“Controlling” for scientific literacy, the authors contend, vitiates the interaction between AOT and political outlooks.

Well, I decided to redo the analysis from Jon & my paper after plugging in the predictor for the Ordinary Science Intelligence scale (“scicomp_i”) and a cross-product interaction between AOT and OSI.  Nothing changed in relation to our finding that AOT interacts with ideology (“crxsc”), generating the “perverse effect” of increased polarization as AOT scores go up (the dataset for the study is posted here and makes checking this out very easy).  So that “significant problem” with our analysis turns out not to be one.
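For anyone who wants to replicate this check, here is a minimal sketch of the kind of model involved. It is not the published specification: aside from “scicomp_i,” the column names and the file name are hypothetical placeholders for whatever the posted dataset uses, and the interactions are built by the formula rather than by a precomputed cross-product like “crxsc.”

# Hedged sketch only: regress climate-change risk perception on AOT, political
# outlook, and their interaction, then add OSI ("scicomp_i") and an OSI x outlook
# cross-product as the supposedly "missing" controls.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aot_climate.csv")  # hypothetical file name for the posted data

model = smf.ols(
    "gwrisk ~ aot * conservrepub + scicomp_i * conservrepub",  # placeholder column names
    data=df,
).fit()
print(model.summary())

# If the aot:conservrepub coefficient stays positive and significant with the
# scicomp_i terms in the model, "controlling" for science comprehension does not
# make the perverse AOT x ideology interaction go away.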

No idea why the observed interaction disappeared for Stenhouse et al.  We are in the process of examining each other’s datasets to try to figure out why.

Stay tuned.

Tuesday
Jun 26, 2018

Reflections on "System 2 bias"--part 2 of 2


Part 2 of 2 of reflections on Miller & Sanjurjo. Part 1 is here.

So “yesterday”™ I presented some reflections on what I proposed calling “System 2 bias” (S2b).

As I explained, good System 2 reasoning in fact depends on intuitions calibrated to perceive a likely System 1 error and to summon the species of conscious, effortful information processing necessary to avoid such a mistake.

S2b occurs when one of those well-trained intuitions misfires.  Under its influence, a normally strong reasoner will too quickly identify and “correct” a judgment that he or she mistakenly attributes to over-reliance on System 1, heuristic reasoning.

As such, S2b will have two distinctive features.  One is that, paradoxically, the error will be made much more readily by proficient reasoners, who possess a well-stocked inventory of System 2-enabling intuitions, than by nonproficient ones, who don’t.

The other is that reasoners who display this distinctive form of biased information processing will strongly resist the correction of it. The source of their mistake is a normally reliable intuition essential to seeing that a particular species of judgment is wrong or fallacious.  It is in the nature of all reasoning intuitions that they provoke a high degree of confidence that one’s perception of a problem and one’s solution to it are correct. It is the absence or presence of that feeling that tells a reasoner when to turn on his or her capacity for conscious, effortful information processing, and when to turn it off and move on.

I suggested that S2b was at the heart of the Miller-Sanjurjo affair.  Under the influence of S2b, GVT and others too quickly endorsed—and too stubbornly continue to defend—an intuitively pleasing but flawed analytical method for remedying patterns of thought that they believe reflect the misidentification of independent events (successes in basketball shots) as interdependent ones.

But this account is a product of informed conjecture only.  We should try to test it, if we can, by experiments that attempt to lure strong reasoners into the signature errors of S2b.

This is where the “Margolis” problem (1996, pp. 53f), helpfully identified by Josh Miller as an adaptation of “Bertrand’s Box paradox,” comes in.

The right answers to “a,” “b,” and “c” are in fact “67%-67%-67%.” (If you are scratching your head over this, realize that there are twice as many ways to get red if one selects the red-red chip than if one selects the blue-red one; accordingly, if one is picking from a vessel with red-red and red-blue, “red side up” will come up twice as often for the red-red chip as for the red-blue one. Or realize that if you answered “67%” for “c,” then logically it must be 67% for “a” and “b” as well, for it surely doesn’t matter for purposes of “c” which color the selected chip displays.)
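If intuition still balks, a quick simulation settles the arithmetic. Here is a minimal sketch assuming the standard three-chip setup (one red-red, one blue-blue, one red-blue chip, drawn and placed with a random side up); I am not reproducing the exact wording of Margolis’s questions “a,” “b,” and “c,” but the core point is the same: conditional on the color you see, the hidden side matches it about two-thirds of the time.

# Toy Monte Carlo of the chip / "Bertrand's Box" logic (assumed three-chip setup).
import random

CHIPS = [("red", "red"), ("blue", "blue"), ("red", "blue")]

def trial():
    chip = random.choice(CHIPS)        # draw one chip at random
    side = random.randrange(2)         # put a random side face up
    return chip[side], chip[1 - side]  # (shown color, hidden color)

matches = shown_red = 0
for _ in range(100_000):
    shown, hidden = trial()
    if shown == "red":                 # condition on seeing a red face
        shown_red += 1
        matches += (hidden == "red")

print(matches / shown_red)             # ~0.67, not 0.5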

But “50%-50%-67%” is an extremely seductive “lure.”  We might predict then, that as reasoning proficiency increases, study subjects will become progressively more and more likely to pick “67%-67%-67%” rather than “50%-50%-67%.”

But that’s not what we see!

In fact, the likelihood of choosing “50%-50%-67%” increases steadily as one’s Cognitive Reflection Test score increases.  In other words, one has to be pretty smart even to take the bait in the “Margolis”/“Bertrand’s Box paradox” problem.  Those who score low on the CRT are in fact all over the map: “33%-33%-33%,” “50%-50%-50%,” etc. are all more common guesses among subjects with low CRT scores than is “67%-67%-67%.”

Hence, we have an experimental model here of how “System 2 bias” works, one that demonstrates that certain types of error become more likely, not less, as cognitive proficiency increases.  For more of the same, see Peters et al. (2006, 2018).

This is a finding, btw, that has important implications for using the Margolis/Bertrand question as part of a standardized cognitive-proficiency assessment.  In short, either one shouldn’t use the item, b/c it correlates negatively with performance on the remaining assessment items, or one should treat the “wrong” answer as the right one for measuring the target reasoning disposition, since getting the wrong answer is in fact a better indicator of that disposition than getting the right one.

As I said, the other signature attribute of this bias is how stubbornly those who display System 2 bias cling to the wrong answers it begets. There is anecdotal evidence for this in Margolis (1996, pp. 53-56), which corresponds nicely to my own experience in trying to help those high in cognitive proficiency see the “right” answer to this problem. Also, consider how many smart people tried to dismiss M&S when Gelman first featured their paper on his blog.

But it would be pretty cool to have an experimental proof of this aspect to the problem, too.  Any ideas anyone?

In any event, here you go: an example of an “S2b” problem where being smart correlates negatively with the right answer.

It’s not a knock-down proof that S2b explains the opposition to the Miller-Sanjurjo proof.  But it’s at least a “brick’s worth” of evidence to that effect.

References

Margolis, H. Dealing with risk: Why the public and the experts disagree on environmental issues (University of Chicago Press, Chicago, IL, 1996).

Miller, J.B. & Sanjurjo, A. Surprised by the gambler's and hot hand fallacies? A truth in the law of small numbers. Econometrica (2018). Available at SSRN: https://ssrn.com/abstract=2627354 or http://dx.doi.org/10.2139/ssrn.2627354.

Peters et al., The loss‐bet paradox: Actuaries, accountants, and other numerate people rate numerically inferior gambles as superior. Journal of Behavioral Decision Making (2018), available at https://onlinelibrary.wiley.com/doi/abs/10.1002/bdm.2085.

Peters, E., et al. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).

Friday
Jun 22, 2018

Fake news vs. "counterfeit social proof"--lecture summary & slides

Basic organization of talk I gave at the Lucerne conference (slides here).

I.  The public’s engagement with fake news is not credulous; it is motivated.

II.   “Fact checking” and like means of correcting false belief are unlikely to be effective and could in fact backfire.

III.  “Fake news” of the Macedonian variety is not particularly consequential: the identity-protective attitudes that motivate consumption of fake news will impel the same fact-distorting position-taking whether people are exposed to fake news or not.

IV.  What is potentially consequential are the forms of “counterfeit social proof” that the Russian government disseminated in the 2016 election.  These materials predictably trigger the identity-protective stance that makes citizens of diverse outlooks impervious to the truth.

V.  The form of information that is most likely to preempt or reverse identity-protective cognition features vivid and believable examples of diverse groups evincing belief in action-guiding facts and forms of information.

Wednesday
Jun 20, 2018

Reflections on "System 2 bias," part 1 of 2 (I think)

Some thoughts about Miller & Sanjurjo, Part 1 of 2:

Most of the controversy stirred up by M&S centers on whether they are right about the methodological defect they detected in Gilovich, Vallone, and Tversky (1985) (GVT) and other studies of the “hot hand fallacy.”

I’m fully persuaded by M&S’s proof. That is, I get (I think!) what the problem is with GVT’s specification of the null hypothesis in this setting.

Whether in fact GVT’s conclusions about basketball shooting hold up once one corrects this defect (i.e., substitutes the appropriate null)  is something I feel less certain of, mainly because I haven’t invested as much time in understanding that part of M&S’s critique.

But what interests me even more is what the response to M&S tells us about cognition.

The question, essentially, is how could so many extremely smart people (GVT & other empirical investigators; the legions of teachers who used GVT to instruct 1,000’s of students, et al.) have been so wrong for so long?! Why, too, does it remain so difficult to get those intelligent people to see the problem M&S have identified?

The answer that makes the most sense to me is that GVT and others were, ironically, betrayed by intuitions they had formed for sniffing out the general public’s intuitive mistakes about randomness.

The argument goes something like this:

I. The quality of cognitive reflection depends on well calibrated non-conscious intuitions.

There is no system 2 ex nihilo. Anything that makes it onto the screen of conscious reflection (System 2) was moments earlier residing in the realm of unconscious thought (System 1).  Whatever yanked that thought out and projected it onto the screen, moreover, was, necessarily, an unconscious mental operation of some sort, too.

It follows that reasoners who are adept at System 2 (conscious, deliberate, analytical) thinking necessarily possess well-behaved System 1 (unconscious, rapid, affect-laden) intuitions. These intuitions recognize when a decisionmaking task (say, the detection of covariance) merits the contribution that System 2 thinking can make, and they activate the appropriate form of conscious, effortful information processing.

In anyone lucky enough to have reliable intuitions of this sort, what trained them was, most likely, the persistent exercise of reliable and valid System 2 information processing, as brought to bear over & over in the process of learning how to be a good thinker.

In sum, System 1 and System 2 are best thought of not as discrete and hierarchical modes of cognition but rather as integrated and reciprocal ones.

II.  Reflective thinkers possess intuitions calibrated to recognize and avoid the signature lapses in System 1 information processing.

The fallibility of intuition is at the core of all the cognitive miscues (the availability effect; hindsight bias; denominator neglect; the conjunction fallacy, etc.) cataloged by Kahneman and Tversky and their scholarly descendants (K&T et al.).  Indeed, good thinking, for K&T et al., consists in the use of conscious, effortful, System 2 reflection to “override” System 1 intuitions when reliance on the latter would generate mistaken inferences.

As discussed, however, System 2 thinking cannot plausibly be viewed as operating independently of its own stable of intuitions, ones finely calibrated to recognize System 1 mistakes and to activate the sort of conscious, effortful thinking necessary to override them.

III. But like all intuitions, the ones reflective people rely on will be subject to characteristic forms of failure—ones that cause them to overestimate instances of overreliance on error-prone heuristic reasoning.

It doesn’t follow, though, that good thinkers will never be misled by their intuitions.  Like all forms of pattern recognition, the intuitions that good thinkers use will be vulnerable to recurring illusions and blind spots.

The sorts of failures in information processing that proficient thinkers experience will be predictably different from the ones that poor and mediocre thinkers must endure.  Whereas the latter’s heuristic errors expose them to one or another form of overreliance on System 1 information processing, the former’s put them at risk of too readily perceiving that exactly that form of cognitive misadventure accounts for some pattern of public decisionmaking.

The occasions on which this form of “System 2 bias” affects thinking are likely to be rare.  But when they arise, the intuitions that are their source will cling to individuals’ perceptions with the same dogged determination as the ones responsible for heuristic System 1 biases.

Something like this, I believe, explains how the “ ‘hot hand fallacy’ fallacy” took such firm root. 

It’s a common, heuristic error to believe that independent events—like the outcome of two coin flips—are interdependent. Good reasoners are trained to detect this mistake and to fix it before making a judgment.

GVT spotted what they surmised was likely an instance of this mistake: the tendency of fans, players, and coaches to believe that positive performance, revealed by a short-term string of successful shots, indicated that a player was “hot.”

They tested for this mistake by comparing whether the conditional probability of a successful basketball shot following a string of successes differed significantly from a player’s unconditional probability of making a successful shot.

It didn’t. Case closed.

What didn’t occur to them, though, was that where one uses the sampling method they used—drawing from a finite series without replacement—Pr(basket | success, success, success) – Pr(basket) should be < 0. How much below zero it should be has to be determined analytically or (better) by computer simulation.
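Here is a minimal sketch of what such a simulation looks like (a toy version of my own, not M&S’s analysis): generate many finite sequences of fair coin flips, compute within each sequence the proportion of heads that immediately follow three heads in a row, and then average those proportions across sequences. Under independence the average comes out noticeably below 0.5, and that, not 0.5 itself, is the benchmark a GVT-style comparison should use.

# Toy illustration of the finite-sample streak-selection effect under independence.
import random

def streak_follower_rate(n_flips=100, streak=3):
    flips = [random.random() < 0.5 for _ in range(n_flips)]
    hits = opportunities = 0
    for i in range(streak, n_flips):
        if all(flips[i - streak:i]):   # the previous `streak` flips were all heads
            opportunities += 1
            hits += flips[i]           # did the next flip come up heads?
    return hits / opportunities if opportunities else None

rates = [r for r in (streak_follower_rate() for _ in range(20_000)) if r is not None]
print(sum(rates) / len(rates))         # noticeably less than 0.5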

So if in fact Pr(basket | success, success, success) – Pr(basket) = 0, the player in question was on an improbable hot streak.

Sounds wrong, doesn’t it? Those are your finely tuned intuitions talking to you; yet they’re wrong. . . .

I’ll finish off this series “tomorrow.™”  In the meantime, read this problem & answer the three questions that pertain to it.

Reference

Gilovich, T., Vallone, R. & Tversky, A. The hot hand in basketball: On the misperception of random sequences. Cognitive Psychology 17, 295-314 (1985).


 

 

Monday
Jun 18, 2018

Where am I? . . . Lucerne, Switzerland

Should be interesting.  Will send postcard when I get a chance.

Thursday
Jun 7, 2018

Hey, everybody--come to this cool " 'Hot hand fallacy' fallacy" workshop!

If you're in, or can make it to, New Haven next Wed:

Paper here:

Wednesday
Jun 6, 2018

Shut up & update! . . . a snippet

Also something I've been working on . . . .

1. “Evidence” vs. “truth”—the law’s position. The distinction between “evidence for” a proposition and the “truth of” it is inscribed in the legal mind through professional training and experience.

Rule 401 of the Federal Rules of Evidence defines “relevance” as “any tendency” of an item of proof “to make a fact … of consequence” to the litigation either “more or less probable” in the estimation of the factfinder. In Bayesian terms, this position is equivalent to saying that an item of proof is “relevant” (and hence presumptively admissible; see Fed. R. Evid. 402) if, in relation to competing factual allegations, the likelihood ratio associated with that evidence is either less than or greater than 1 (Lempert 1977).  

Folksy idioms—e.g., “a brick is not a wall” (Rule 401, advisory committee notes)—are used to teach prospective lawyers that this “liberal” standard of admissibility does not depend on the power of a piece of evidence to establish a particular fact by the requisite standard of proof (“more probable than not” in civil cases; “beyond a reasonable doubt” in criminal ones).

Or in Bayesian terms, we would say that a properly trained legal reasoner does not determine “relevance” (and hence admissibility) by asking whether an item of proof will on its own generate a posterior estimate either for or against the “truth” of that fact. Again, because the process of proof is cumulative, the only thing that matters is that a particular piece of evidence have a likelihood ratio different from 1 in relation to competing litigation hypotheses.
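To make the “brick is not a wall” point concrete in Bayesian terms, here is a toy illustration (the numbers are mine, chosen purely for exposition): relevance turns on the likelihood ratio attached to a single item of proof, while the standard of proof applies to the posterior that results from accumulating all of the items.

# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    return odds / (1 + odds)

prior_odds = 1.0              # start at 50/50 on the disputed fact
single_brick = 2.0            # one item of proof with LR > 1: relevant ...
print(odds_to_prob(update_odds(prior_odds, single_brick)))  # ~0.67, far short of "beyond a reasonable doubt"

# ... but a wall is built from many bricks: modest LRs multiply.
odds = prior_odds
for lr in [2.0, 1.5, 3.0, 2.5]:       # hypothetical LRs for four items of proof
    odds = update_odds(odds, lr)
print(odds_to_prob(odds))             # ~0.96 once the evidence is accumulated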

2. “I don’t believe it . . . .” This popular response, among both pre- and post-publication peer reviewers, doesn’t get the distinction between “evidence for” and the “truth of” an empirical claim.

In Bayesian terms, the reviewer who treats his or her “belief” in the study result as informative is unhelpfully substituting his or her posterior estimate for an assessment of the likelihood ratio associated with the data. Who cares what the reviewer “believes”? Disagreement about the relative strength of competing hypotheses is, after all, the occasion for data collection! If a judge or lawyer can “get” that a “brick is not a wall,” then surely a consumer of empirical research can, too: the latter should be asking whether an empirical study has “any tendency … to make a fact … of consequence” to empirical inquiry either “more or less probable” in the estimation of interested scholars (this is primarily a question of the validity of the methods used and the probative weight of the study finding).

That is, the reviewer should keep his or her eyes glued to the likelihood ratio, and not be distracted by any particular researcher’s posterior.

3.  “Extraordinary claims require extraordinary proof . . . .” No, they really don’t.

This maxim treats the strength with which a fact is held to be true as a basis for discounting the likelihood ratio associated with contrary evidence. The scholar who takes this position is saying, in effect, “Your result should see the light of day only if it is so strong that it flips scholars from a state of disbelief to one of belief, or vice versa.” 

But in empirical scholarship as in law, “A brick is not a wall.”  We can recognize the tendency of a (valid) study result to make some provisional apprehension of truth less probable than it would otherwise be while still believing—strongly, even—that the contrary hypothesis so supported is unlikely to be true.

* * *

Or to paraphrase a maxim Feynman is sometimes (mis)credited with saying, “Shut up & update!”

References

Federal Rules of Evidence (2018) & Advisory Committee Notes.

Lempert, R.O. Modeling relevance. Michigan Law Review, 75, 1021-1057 (1977).

 


Tuesday
Jun 5, 2018

Fortifying #scicomm craft norms with empirical inquiry-- a snippet

From something I'm working on . . . .

This proposal is about the merger of two sources of insight into public science communication. 

The first comprises the professional judgment of popular-science communicators who typically disseminate knowledge through documentaries and related media. The currency of decisionmaking for these communicators consists in experience-forged hunches about the interests and behavior of target audiences.

Like those of other professionals (Margolis 1987, 1993, 1996), these intuitive judgments are by no means devoid of purchasing power. Indeed, the characteristic problem with craft-based judgment is not that it yields too little practical guidance but that it at least sometimes yields too much: where professional disagreements persist over time, it is typical for both sides to appeal to shared experience and understandings to support plausible but opposing conjectures.

The second source of insight consists of empirical studies aimed at dissolving this constraint on professional judgment. The new “science of science communication” proposes that science’s own distinctive methods of disciplined observation and causal inference be made a part of the practice of professional science communication (Jamieson, Kahan & Scheufele 2017). Such methods can, in particular, be used to generate evidence for evaluating the conflicting positions that figure in persistent professional disagreements.

What is persistently holding this research program back, however, is its principal location: the social science lab. 

Lab studies (including both observational studies and experiments) aspire to silence the cacophony of real-world influences that confound inference on how particular psychological mechanisms fortify barriers to public science comprehension.

But precisely because they test such hypotheses in experimentally pristine conditions, lab studies don’t on their own tell professional science communicators what to do.  Additional empirical research is necessary—in the field—to adjudicate between competing conjectures about how results observed in the lab can be reproduced in the real world (Kahan and Carpenter 2017; Kahan 2014).

The need for practitioner-scholar collaborations in such a process was one of the central messages of the recent National Academies of Sciences (2017) report on the science of science communication.  “Through partnerships entailing sustained interaction with members of the . . . practitioner community, researchers come to understand local needs and circumstances, while . . . practitioners gain a better understanding of the process of research and their role in it” (ibid. p. 42). The current proposal responds to the NAS’s important prescription.

References

 Kahan, D.M. Making Climate-Science Communication Evidence-Based—All the Way Down. in Culture, Politics and Climate Change (ed. M. Boykoff & D. Crow) 203-220 (Routledge Press, New York, 2014).

 Kahan, D.M. & Carpenter, K. Out of the lab and into the field. Nature Climate Change 7, 309-10 (2017).

Jamieson, K.H., Kahan, D.M. & Scheufele, D.A. The Oxford Handbook of the Science of Science Communication (Oxford University Press, 2017).

Margolis, H. Dealing with risk : why the public and the experts disagree on environmental issues (University of Chicago Press, Chicago, 1996).

Margolis, H. Paradigms and Barriers (University of Chicago Press, Chicago, 1993).

Margolis, H. Patterns, Thinking, and Cognition (University of Chicago Press, Chicago, 1987).

Monday
Jun 4, 2018

Still here . . .

 

Thursday
May 3, 2018

Guest post: early interest in science predicts long-term trust of scientists

Once again, we bring you the cutting edge of #scicomm science from someone who can actually do it! Our competitors can only watch in envy.

The Enduring Effects of Scientific Interest on Trust in Climate
Scientists in the U.S.

Matt Motta (@matt_motta)

Americans’ attitudes toward scientists are generally positive. While trust in the scientific community has been on the decline in recent years on the ideological right, Americans are usually willing to defer to scientific expertise on a wide range of issues.

Americans’ attitudes toward climate scientists, however, are a notable exception. Climate scientists are amongst the least trusted scientific authorities in the U.S., in part due to low levels of support from Republicans and Independents.

A recent Pew study found that less than a third (32%) of Americans believe that climate scientists’ research is based on the “best available evidence” most of the time. Similar numbers believe that climate scientists are mostly influenced by their political leanings (27%) and the desire to advance their careers (36%).

Why do (some) Americans distrust climate scientists? This is an important question, because (as I have shown in previous research) negativity toward scientists is associated with the rejection of scientific consensus on issues like climate change. It is also associated with support for political candidates (like George Wallace and Donald Trump) who are skeptical of the role experts play in the policymaking process.

Figuring out why Americans distrust climate scientists may be useful for devising new strategies to rekindle that trust. Previous research has done an excellent job documenting the effects of political ideology on trust in climate scientists. Few, however, have considered the effect of Americans’ interest in science and knowledge of basic scientific principles – both of which have been linked to positivity toward science and scientists.

In a study recently published in Nature Climate Change, I demonstrate that interest in scientific topics at young ages (12-14) is associated with increased trust in climate scientists decades later in adulthood, across the ideological spectrum.

In contrast, I find little evidence that young adults’ levels of science comprehension (i.e., science knowledge and quantitative skills) increase trust later in life. To the extent that they do, the effects of science knowledge and quantitative ability tend to be strongly conditioned by ideology.

In addition to considering the effects of science interest and comprehension on trust in climate scientists, my work offers two additional points of departure from previous research. First, few have investigated these potential determinants of attitudes toward climate scientists in young adulthood. This is surprising, because previous research has found that this is a critical stage in the development of attitudes toward science.

Second, fewer still have studied how these factors might interact with political ideology to shape opinion toward climate scientists. As readers of this blog might expect, Americans who are highly interested in science should exhibit higher levels of trust across the ideological divide. This is consistent with research suggesting that science curiosity encourages open-minded engagement with scientific issues – thereby increasing acceptance of science and scientific consensus.

In contrast, science comprehension should polarize opinions about climate scientists along ideological lines. If science knowledge and quantitative skills increase trust in climate scientists, we might expect this effect to be greater for liberals – who tend to be more accepting of climate science than conservatives. Again familiar to readers of this blog, this point is consistent with research showing that people who “think like scientists” tend to use their skills to reinforce existing social, political, and cultural group allegiances.

Using panel data from the Longitudinal Study of American Youth (LSAY), I model American adults’ trust in climate scientists (in 2011) as a function of their science interest and comprehension measured at ages 12-14 (in 1987). I structure these models hierarchically because respondents were cluster sampled at the school level, and I control for several potentially relevant demographic factors (e.g., race, sex). For a more technical discussion of how I do this, please consult the study’s methods section (just after the discussion).
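For readers who want the gist of that setup in code, here is a minimal sketch of a random-intercept (hierarchical) model of the kind described, with clustering at the school level. It is not the paper’s actual specification; every column name and the file name below are hypothetical placeholders.

# Hedged sketch of a hierarchical model with random intercepts for school clusters.
import pandas as pd
import statsmodels.formula.api as smf

lsay = pd.read_csv("lsay_panel.csv")   # hypothetical file name

model = smf.mixedlm(
    "trust_climate_sci ~ sci_interest_1987 * ideology"
    " + sci_knowledge_1987 * ideology + quant_skill_1987 * ideology + race + sex",
    data=lsay,
    groups=lsay["school_id"],          # respondents cluster-sampled by school
).fit()
print(model.summary())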

I measure Americans’ trust in climate scientists using self-reported measures of trust in information from four different groups: science professors, state environmental departments, NASA/NOAA, and the Intergovernmental Panel on Climate Change (IPCC). I also look at a combined index of all four.

I then measure science interest using a self-reported measure of respondents’ interest in “science issues.” I also operationalize science comprehension using respondents’ scores on standardized science knowledge and quantitative ability tests.

The results suggest that self-reported science interest at young ages is associated with trust in climate scientists about two decades later (see the figure below). On average, science interest in young adulthood is associated with about a 6% increase in trust in climate scientists. Young adults’ science knowledge and quantitative skills, on the other hand, bear little association with trust in climate scientists measured years later. 

The effects of science interest in young adulthood hold when levels of science interest measured in adulthood are factored into the model. I find that science interest measured in young adulthood explains more than a third (36%) of the variable’s cumulative effect on trust in climate scientists.

Critically, and perhaps of most interest to readers of this blog, I find that the effects of interest are not conditioned by political ideology. Interacting science interest with political ideology, I find that young adults who are highly interested in science are more trusting of climate scientists – irrespective of their ideological allegiances.

In contrast, the effect of science comprehension in young adulthood on trust in climate scientists is significantly stronger for ideological liberals. This was true in nearly every case, for both science knowledge and quantitative skills. The lone exception is that the interaction between quantitative skills and ideology fell just short of one-tailed significance in the NASA/NOAA model (p = 0.13), and two-tailed significance in the IPCC model (p = 0.06).

As I discuss in the paper, these results suggest an exciting path forward for rekindling public trust in climate scientists. Efforts to boost scientific interest in young adulthood may have lasting effects on trust, decades later.

What these efforts might look like, of course, is an open question. Board and video games aimed at engaging young audiences could potentially be effective. A key challenge, however, will be to figure out how to use these tools to engage young adult audiences that are not already highly interested in scientific topics. 

I also think that this research underscores the usefulness of longitudinal approaches to studying Americans’ attitudes toward science. Whether these dynamics hold for Millennials and Generation Z (who tend to be more accepting of scientific consensus on climate change than older generations) is an interesting question, and one that future longitudinal research should attempt to answer.

 

Sunday
Apr 29, 2018

Weekend update: Precis for "are smart people ruining democracy? What about curious ones?"

This is a follow-up to this:

Whence political polarization over seemingly complex empirical issues essential to enlightened self-government? 

The answer is not what many smart people surmise.  Lots of public opinion analysts, including a large number who hold university appointments, assume the phenomenon of polarization originates in the public's over-reliance on heuristic reasoning (the fast, intuitive, emotional sort that Kahneman calls “System 1”).

As plausible as this conjecture might be, though, it turns out to be wrong.  Flat out, indisputably, beyond-a-reasonable-doubt wrong. 

An already immense and still growing body of research in the decision sciences demonstrates that the citizens most disposed to engage in conscious, effortful information processing (Kahneman’s “slow,” “System 2” thinkers) are in fact the most polarized ones on the facts of climate change, gun control, fracking, nuclear power, etc. 

It would be silly to interpret these data to mean that “smart” citizens are “ruining democracy.” But what isn’t silly at all is the conclusion that our “science communication environment” has become polluted by the entanglement of positions on policy-relevant facts, on the one hand, and individuals’ cultural identities, on the other.

If one tries to make people choose between knowing what science knows and being who they are, they will predictably choose the latter.  It’s that simple.  When that happens, moreover, democracy loses the contribution that its most cognitively proficient members normally make to guiding their peers into stances consistent with the best available evidence on real threats to their wellbeing and how to counteract them.

But the news is not relentlessly bad:  New work shows that culturally diverse citizens who are curious about science display signs of immunity to the “identity-protective cognition” dynamic that I have just described.

Understanding why their interest in science protects citizens from the baleful consequences of a polluted science communication environment—and how that dynamic might be self-consciously  harvested and deployed within democratic societies—is now one of the most urgent objectives of the new “science of science communication.”

Friday
Apr 27, 2018

What's more disgusting--fecal transplants or semi-automatic guns? (Data collected far in advance of Las Vegas and other mass shootings)

Hmmmm... Makes you wonder, doesn't it? 

More "tomorrow."

Thursday
Apr 26, 2018

Still on pace for 61-plus lecture/workshop presentations in 2018 (& I'm not even using testosterone or HGH; or at least not a whole lot)

Monday
Apr 23, 2018

WSMD? JA! Who perceives risk in Artificial Intelligence & why?.. Well, here's a start

shit ...

This is approximately the 470,331st episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

How would you feel if I handed over the production of this Blog (including the drafting of each entry) to an artificially intelligent agent? (Come to think of it, how do you know I didn’t do this months or even years ago?)

I can’t answer with nearly the confidence that I’d like, but having looked more closely at some data, I think I know about 1500% more, and even better 1500% less,  about who fears artificial intelligence, who doesn’t, & why.

The data analysis was performed in response to a WSMD? JA! query by @RossHartshorn, who asked:

 

In a follow-up email, @Ross offered up his own set of hypotheses, thereby furnishing me with a working conjecture to try to test with CCP data.

In all of the models that follow, I use the “Industrial Strength Risk Perception Measure” (ISRPM)—because that’s all I’ve got & because having that definitely gives me a pretty damn good divining rod should I care to go out hunting for even more relevant data in future studies.