Monday, July 28, 2014

Undertheorized and unvalidated: Stocklmayer & Bryant vs. NSF Indicators “Science literacy” scale part I

The paper isn't exactly hot off the press, but someone recently lowered my entropy by sending me a copy of Stocklmayer, S. M., & Bryant, C., Science and the Public—What should people know?, International Journal of Science Education, Part B, 2(1), 81-101 (2012).

Cool article!

The piece critiques the NSF’s Science Indicators “factual knowledge” questions.

As is well known to the 9.8 billion readers of this blog (we’re down another couple billion this month; the usual summer-holiday lull, I’m sure), the Indicators battery is pretty much the standard measure for public “science literacy.”

The NSF items figure prominently in the scholarly risk perception/science communication literature. 

With modest additions and variations, they also furnish a benchmark for various governmental and other official and semi-official assessments of “science literacy” across nations and within particular ones over time.

I myself don’t think the Indicators battery is invalid or worthless or anything like that.

But like pretty much everyone I know who uses empirical methods to study public science comprehension, I do find the scale unsatisfying.

What exactly a public science comprehension scale should measure is itself a difficult and interesting question. But whatever answer one chooses, there is little reason to think the Indicators' battery could be getting at that.

The Indicators battery seems to reduce "science literacy" to a sort of catechistic assimilation of propositions and principles: "The earth goes around the sun, not the other way 'round" [check]; "electrons are smaller than atoms" [check]; "antibiotics don't kill viruses—they kill bacteria!" [check!].

We might expect that an individual equipped to reliably engage scientific knowledge in making personal life decisions, in carrying out responsibilities inside of a business or as part of a profession, in participating in democratic deliberations, or in enjoying contemplation of the astonishing discoveries human beings have made about the workings of nature will have become familiar with all or most of these propositions.

[NSF Indicators "factual knowledge" battery & int'l results]

But simply being familiar with all of them doesn't in itself furnish assurance that she'll be able to do any of these things.

What does furnish such assurance is a capacity—one consisting of the combination of knowledge, analytical skills, and intellectual dispositions necessary to acquire, recognize, and use pertinent scientific or empirical information in specified contexts. It's hardly obvious that a high score on the NSF's "science literacy" test (the mean number of correct responses in a general population sample is about 6 of 9) reliably measures any such capacity—and indeed no one to my knowledge has ever compiled evidence suggesting that it does.

This—with a lot more texture, nuance, and reflection blended in—is the basic thrust of the S&B paper.

The first part of S&B consists of a very detailed and engaging account of the pedigree and career of the Indicators' factual-knowledge items (along with various closely related ones used to supplement them in large-scale recurring public data collections like the Eurobarometer).

What's evident is how painfully innocent of psychometrics and basic test theory this process has been.

The items, at least on S&B’s telling, seem to have been selected casually, more or less on the basis of the gut feelings and discussions of small groups of scientists and science authorities.

Aside from anodyne pronouncements on the importance of “public understanding of science” to “national prosperity,” “the quality of public and private decision-making,” and “enriching the life of the individual,” they made no real effort to articulate the ends served by public “science literacy.” As a result, they offered no cogent account of the sorts of knowledge, skills, dispositions, and the like that securing the same would entail.

Necessarily, too, they failed to identify the constructs—conceptual representations of particular skills and dispositions—an appropriately designed public science comprehension scale should measure. 

Early developers of the scale reported Cronbach’s alpha and like descriptive statistics, and even performed factor analysis that lent support to the inference that the NSF “science literacy” scale was indeed measuring something.

[Eurobarometer variant]

But without any theoretical referent for what the scale was supposed to measure and why, there was necessarily no assurance that what was being measured was connected to even the thinly specified objectives its proponents had in mind.
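To see why those statistics alone can't settle the question, here is a minimal simulated sketch (hypothetical data and variable names -- not the Indicators' actual responses): nine binary items driven by any single latent trait will produce a respectable Cronbach's alpha and a dominant first eigenvalue, whatever that trait happens to be.

```python
# Purely illustrative: simulated binary "factual knowledge" responses driven
# by one latent trait. Good internal-consistency statistics follow automatically,
# but they say nothing about *what* the trait is.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 2000, 9
theta = rng.normal(size=n_respondents)          # the latent trait, whatever it is
difficulty = np.linspace(-1.5, 1.5, n_items)    # hypothetical item difficulties

# Probability of a correct answer rises with the latent trait (logistic link)
p_correct = 1 / (1 + np.exp(-(theta[:, None] - difficulty[None, :])))
responses = (rng.uniform(size=p_correct.shape) < p_correct).astype(float)

def cronbach_alpha(items):
    """Cronbach's alpha for an items-in-columns 0/1 response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))

# Crude dimensionality check: eigenvalues of the inter-item correlation matrix.
# One dominant eigenvalue means the items "hang together" -- it cannot tell you
# whether the common factor is science comprehension or something else entirely.
eigs = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))[::-1]
print("Leading eigenvalues:", np.round(eigs[:3], 2))
```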

So that’s the basic story of the first part of the S&B article; the last part consists in some related prescriptions.

Sensibly, S&B call for putting first things first: before developing a measure, one must thoughtfully (not breezily, superficially) address what the public needs to know and why: what elements of science comprehension are genuinely important in one or another of the contexts, to one or another of the roles and capacities, in which ordinary (nonexpert) members of the public make use of scientific information?

S&B suggest, again sensibly, that defensible answers to these questions will likely support what the Programme for International Student Assessment characterizes as an “assets-based model of knowledge” that emphasizes “the skills people bring to bear on scientific issues that they deal with in their daily lives.”  (Actually, the disconnect between the study of public science comprehension and the vast research that informs standardized testing, which reflects an awe-inspiring level of psychometric sophistication, is really odd!) 

Because no simple inventory of “factual knowledge” questions is likely to vouch for test takers’ possession of such a capacity, S&B propose simply throwing out the NSF Indicators battery rather than simply supplementing it (as has been proposed) with additional "factual knowledge" items on “topics of flight, pH, fish gills, lightning and thunder and so on.”

Frankly, I doubt that the Indicators battery will ever be scrapped. By virtue of sheer path dependence, the Indicators battery confers value as a common standard that could not easily, and certainly not quickly, be replaced. 

In addition, there is a collective action problem: the cost of generating a superior, “assets-based” science comprehension measure—including not only the toil involved in the unglamorous work of item development, but also the need to forgo participating instead in exchanges more central to the interest and attention of most scholars—would be borne entirely by those who create such a scale, while the benefits of a better measure would be enjoyed disproportionately by other scholars who’d then be able to use it.

I think it is very possible, though, that the NSF Indicators battery can be made to evolve toward a scale that would have the theoretical and practical qualities that S&B call for.

As they investigate particular issues (e.g., the relationship between science comprehension and climate change polarization), scholars will likely find it useful to enrich the NSF Indicators battery through progressive additions and supplementations, particularly with items that are known to reliably measure the reasoning skills and dispositions necessary to recognize and make use of valid empirical information in everyday decisionmaking contexts.
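By way of illustration only (simulated data; `skill`, `existing_items`, and `candidate_item` are placeholders, not any real instrument), the sort of quick screening a scholar might run before folding a candidate reasoning item into an Indicators-style battery looks something like this:

```python
# Hypothetical item-screening sketch: does a candidate item correlate with the
# rest-score of the existing battery? Simulated data throughout.
import numpy as np

rng = np.random.default_rng(1)
n = 1500
skill = rng.normal(size=n)  # assumed underlying reasoning proficiency (hypothetical)

# Nine existing binary "factual knowledge" items plus one candidate reasoning item
existing_items = (rng.normal(size=(n, 9)) + skill[:, None] > 0).astype(float)
candidate_item = (rng.normal(size=n) + 1.2 * skill > 0).astype(float)

# Item-rest correlation: does the candidate hang together with the existing scale?
rest_score = existing_items.sum(axis=1)
item_rest_r = np.corrcoef(candidate_item, rest_score)[0, 1]
print(f"candidate item-rest correlation: {item_rest_r:.2f}")

# One would also re-check reliability and dimensionality (e.g., Cronbach's alpha,
# factor structure) of the enlarged battery before treating the item as an upgrade.
```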

That, anyway, is the sort of process I see myself as trying to contribute to by tooling around with and sharing information on an “Ordinary science intelligence” instrument for use in risk perception and science communication studies.

Even that process, though, won't happen unless scholars and others interested in public science comprehension candidly acknowledge the sorts of criticisms S&B are making of the Indicators battery; unless they have the sort of meaningful discussion S&B propose about who needs to know what about science and why; and unless scholars who use the Indicators battery in public science comprehension research explicitly address whether the battery can reasonably be understood to be measuring the forms of knowledge and types of reasoning dispositions on which their own analyses depend.

So I am really glad S&B wrote this article!

Nevertheless, “tomorrow,” I’ll talk about another part of the S&B piece—a survey they conducted of 500 scientists to whom they administered the Indicators’ “factual knowledge” items—that I think is very very cool but actually out of keeping with the central message of their paper! 


Reader Comments (14)

Actually, the disconnect between the study of public science comprehension and the vast research that informs standardized testing, which reflects an awe-inspiring level of psychometric sophistication, is really odd!

Odd, but common. I've seen this sort of thing all too often. Good research scientists in some of the more traditional disciplines sometimes act as though they're thinking, "We're smart. We can do this. We don't need to know the stuff done in these grubby other fields."

Well, that's actually my interpretation of what's going on in their heads. It could just be that I don't do a good job of conveying that there are people out there who are really good at X, and we should take advantage of it and learn from them before trying to re-create their world anew.

July 28, 2014 | Unregistered CommenterJohn Gorentz

Hypothesis:

If all your present data on science literacy were to have an unfortunate encounter with a black hole, but in utter contempt for the laws of physics that black hole were to spit back out spreadsheets replacing the science literacy column with a general intelligence column, the correlations you find with intensity of political belief would not change one bit.

Supposing the experimental results aligned with my expectations, we might conclude a couple of things. First, many of your observed correlations are the result of "smarter people are better at arguing." Second, that old conventional wisdom from Bertrand Russell, "The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt," is really more conventional arrogance, likely the opposite of the truth.

July 30, 2014 | Unregistered CommenterRyan

Dan -

Off topic.

Not a very scholarly article - but it does provide a good overview of why I think that a focus on "world view" causality underestimates other components related to motivated reasoning. It certainly nails a lot of what I see in the blogospheric discussions of climate change.

http://www.cracked.com/article_19468_5-logical-fallacies-that-make-you-wrong-more-than-you-think.html

August 1, 2014 | Unregistered CommenterJoshua

@Ryan--

There are 2 ways to understand your point, both of which are interesting.

1. Like the more specific reasoning dispositions that "ordinary science intelligence" (OSI) comprises, "general intelligence" (G) magnifies those aspects of cultural cognition associated with political conflict over risk & like facts. I strongly suspect this is true; it's actually hard to see how it would not be so -- how dispositions like critical reflection and numeracy could interact w/ cultural cognition if they weren't just particular instances of a more general recruitment of reasoning proficiency to protect identity.

2. The dispositions associated with OSI reduce to -- are not meaningfully distinct from -- G. This could be; and if it were, then one would face the question whether (a) the measures for the constructs associated with OSI (numeracy, critical reflection, science literacy & the like) aren't valid (b/c they are just g in disguise) or (b) the constructs themselves are not meaningfully distinct from G. In the latter case, the project of trying to separate out critical reasoning or rationality from intelligence, as a cognitive matter & as an object of special focus in education, would be misconceived. I think neither of these things is likely true, but admit that one could marshal evidence for either of those claims. Stanovich's work is best here. Take a look, e.g., at Stanovich, Keith E., "Distinguishing the reflective, algorithmic, and autonomous minds: Is it time for a tri-process theory?" in In Two Minds: Dual Processes and Beyond (2009): 55-88.
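Here's a minimal simulated sketch -- purely hypothetical numbers and variable names, not CCP or Stanovich data -- of how one could probe possibility (2): if OSI really were just g in disguise, adding an OSI score to a model that already contains G should add essentially no explanatory power for the sorts of outcomes OSI is supposed to predict.

```python
# Illustrative simulation only: "OSI", "G", and "reflection" are hypothetical
# stand-ins, not measures from any actual study.
import numpy as np

rng = np.random.default_rng(2)
n = 3000
G = rng.normal(size=n)                    # general intelligence (standardized)
reflection = rng.normal(size=n)           # a disposition not captured by G
OSI = 0.7 * G + 0.7 * reflection + 0.2 * rng.normal(size=n)
outcome = 0.3 * G + 0.5 * reflection + rng.normal(size=n)  # e.g., recognizing valid evidence

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("R^2, G alone: ", round(r_squared(G[:, None], outcome), 3))
print("R^2, G + OSI: ", round(r_squared(np.column_stack([G, OSI]), outcome), 3))
# If OSI were nothing but g in disguise, the second number would barely move.
```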

August 1, 2014 | Registered CommenterDan Kahan

@Joshua--

I don't know what is at stake, really, in parceling out the contribution of different influences on motivated reasoning, but certainly it would be wrong to say "cultural cognition" or "identity protection" is the only thing that matters.

But the grab-bag approach toward mechanisms reflected in the linked article is geared more toward enabling confabulation & story-telling proficiency than genuinely explaining, predicting & influencing.

August 1, 2014 | Registered CommenterDan Kahan

Dan -

The article explains complicated heuristics in ways that make them seem overly simple - but I think that the overall effect of the article might very well be to stimulate people to examine their assumptions in ways that might make them more aware of how their reasoning is biased.

For example:

==> So, the next time you find yourself desperately Googling for some factual example that proves your argument is right, and failing to find even one, stop. See if you can put the brakes on and actually say, out loud, "Wait a second. If the things I'm saying in order to bolster my argument are consistently wrong, then maybe my argument is also wrong."

I mean, it's not quite the same as what they described, but I felt a pang of self-recognition when I read that - remembering times that I've Googled for evidence only to support an argument that I've presented, bookmarking the links that helped and ignoring the links that didn't.

I mean sure, obviously, your 9.800000001 billion readers (one of my friends just started reading your blog - adding to the previous figure of 9.8 billion) know that the article was simplistic, but the tiny % of people on the planet who don't read your blog might be helped even a bit by simplistic descriptions of influences that bias reasoning.

August 1, 2014 | Unregistered CommenterJoshua

More off-topic -

Looks like you've hit the big time - the Heartland Conference!!!!

Look at his first slide.

http://climateconferences.heartland.org/tom-harris-iccc9-panel-3/

Finally what you have been asking for: "skeptics" referring to your analysis of effective communication strategies.

Or is this a case of be careful what you ask for?

August 3, 2014 | Unregistered CommenterJoshua

BTW - I love the logic of Harris, where he talks about how to win over "climate alarmists" to the cause of "climate realism," without, apparently, any awareness of how oblivious he is to the polarizing (identity-protective and identity-defensive) nature of the language he's using.

I mean it really is quite spectacular. A complete failure of basic skeptical due diligence. I wonder if he calls himself a "skeptic."

August 3, 2014 | Unregistered CommenterJoshua

"Looks like you've hit the big time - the Heartland Conference!!!!"

I noticed Dan's name popped up at Climate Audit, too! (The post on Cook's fake ethics approval.)

It's those 9.8 billion readers - word gets around.

"BTW - I love the logic of Harris, where he talks about how to win over "climate alarmists" to the cause of "climate realism," without, apparently, any awareness of how oblivious he is to the polarizing (identity-protective and identity-defensive) nature of the language he's using."

Yeah. He'll fit in very nicely with all the people talking about how to win climate 'deniers' over, or working to 'counter their misinformation', or whatever. That does seem to be how a lot of people see it - a tool for persuasion.

August 3, 2014 | Unregistered CommenterNiV

==> "He'll fit in very nicely with all the people talking about how to win climate 'deniers' over"

I think you are making a (in the end, fairly inconsequential) mistake there.

Most realists that I see talking about "deniers" feel that "deniers" can never be "won over." I see them talk, rather often, about how attempting to do so would be an exercise in futility. They (usually) say that their goal is to win over the uncommitted (while, IMO, illogically employing the rhetoric of "deniers" to do so under the, IMO, mistaken belief that calling people "deniers" will win over the uncommitted).

But Harris actually says that he wants to win over "climate alarmists" to the cause of "climate realism." The obliviousness of trying to win over the very people that you're calling "alarmists," and trying to "win over" people by implying that their views on climate change are something other than "realist," is quite fascinating. What would lead a smart and knowledgeable person to make such an illogical argument?

Of course, there are those "skeptics" who think that any "progressive" or "liberal" is inherently too stupid or immoral to ever be won over by "climate realism" - but that isn't Harris' argument. Not that I'm saying those other "skeptics" should win a logic award for their reasoning - but at least their argument is perhaps a different flavor of obliviousness than Harris'.

August 4, 2014 | Unregistered CommenterJoshua

"Most realists that I see talking about "deniers" feel that "deniers" can never be "won over.""

They end up that way, after everything they've tried has failed. (Obviously, it must be something wrong with the 'deniers' rather than with their own arguments.)

But there are several stages before that, and some "communicators" are still working their way through them.
They start off assuming the simple ad verecundiam and correlation arguments that persuaded them will work, then they go through a stage of researching the issue with one-liner 'rebuttal lists', then some dig a bit deeper, and *then* they give up, and scale back their ambitions to just persuading people who don't know much about it, or care.

I think the 'communication science' thing revived their hopes a bit. The whole 'Republican Brain' thing appealed to their worldview, and it explained what had gone wrong previously and how to fix it. All they had to do was use 'framing' to translate their arguments into language that wouldn't trigger the primitive lizard-brain reflexes of their opponents, and the truth would slip in through the cracks and all would be joyful.

Well, they've been trying it for what, five years? Six? Seven? And as Dan has pointed out several times, the general public perception on climate change, even of the 'uncommitted' (assuming there are any) has not moved at all. That one doesn't work, either. Perhaps there's still something wrong with the way they're doing it?

http://talkingclimate.org/george-marshall-how-to-talk-to-a-climate-change-denier/

"What would lead a smart and knowledgeable person to make such an illogical argument?"

You're missing the point that he's talking to a different audience. He's talking to the Heartland conference, and therefore uses language that fits in with the worldview of that audience in order to persuade them that his ideas have merit. Obviously when they *use* the techniques they won't use that terminology.

The illogic, I think, is not in using divisive language to describe it, but in failing to recognise that the climate believers have just tried out all these techniques on us and it didn't work.

The fundamental point everyone misses when they try to use this as a tool for persuasion is that it is symmetrical. The problems in reasoning it identifies apply to both sides, and the proposed solutions can be used equally easily by both sides. And everybody thinks 'This sort of cognitive bias explains how the other side can think as they do, but obviously it doesn't apply to the position taken by my side.' Everybody thinks 'Oh, those methods obviously wouldn't work on me, but I'm sure they'll work just fine on those less-sophisticated people over there.' It's the Republican Brain thesis all over again.

The truth is, nobody in the 'science communication' business really believes it can be true, because if they did they would have to abandon their own position first. They would have to accept that they cannot know for sure that any of their own beliefs and knowledge are true, because it could just be the result of motivated reasoning. They can't rely on the opinions of scientists, because scientists are as human and as subject to motivated reasoning as anyone. (More so, if you extrapolate Dan's results on science literacy in the obvious way.) It's a universal solvent for certainty.

So of course everyone tries to use it to dissolve their opponent's beliefs, and keeps it well away from their own. They try to use it to persuade, rather than to recognise that we're all in far more trouble than that and if they release this principle uncontrolled there will soon be no truth or knowledge left. They need to realise that they need to figure out some way round it, some set of techniques to achieve some degree of certainty even with imperfect instruments, and to teach them to everyone. They need to rediscover the philosophy of Science.

August 4, 2014 | Unregistered CommenterNiV

==> "You're missing the point that he's talking to a different audience. He's talking to the Heartland conference, and therefore uses language that fits in with the worldview of that audience in order to persuade them that his ideas have merit. Obviously when they *use* the techniques they won't use that terminology."

First, he is speaking publicly and being videotaped. His language is a matter of public record. He is on record being a tribalist. Any future attempts to "win over" "climate alarmists" to "climate realism" will be undermined by the argument he makes on the clip - whether he was talking to a particular audience or not.

Second, unless you think that he's lying to the Heartland audience about his real beliefs, he is stating his belief. He believes that one side is "alarmist" and the other side is "realist." I haven't watched the whole clip, but I'm willing to bet that he doesn't say to his audience something along the lines of: "Now of course, we all know that using the kind of language I am using here would be counterproductive when trying to "win over" "alarmists" to "climate realism," so don't use the language I have used here. When you're communicating with those "climate alarmists" be sure to hide your actual beliefs that they are "alarmists" by using other terms to refer to them."

Unless, in all future attempts to communicate to those "alarmists" about "realism," he lies or hides his real beliefs, he is showing what his strategy is. And if he does lie and hide his real beliefs in his communication efforts when he is talking to the "alarmists" about "realism," he will easily be undermined by something as basic as this videotape.

Once again, NiV, even though I believe that you believe that the biasing influence of motivated reasoning is symmetrical, you will often seek to rationalize motivated reasoning on the part of "skeptics" in a transparent way to explain their actions - which are rooted in motivated reasoning - as logical or "skeptical."

You first compared Harris' advice about communicative strategies for winning over "alarmists" to evidence of motivated reasoning among realists (in my view creating an inaccurate parallel - a point you basically ignored) as a kind of "well they do it too" defense, and then you went on to explain how his reasoning is for some reason excluded from the kind of illogical reasoning that is explainable by motivated reasoning. He thinks that calling people "alarmists" is compatible with winning them over; he is giving illogical advice about the strategy to employ. In that sense, I think it is as illogical as when "realists" say that the way to convince the "unconvinced" is to call out the "deniers." Motivated reasoning distorts the thinking process of smart and knowledgeable people when the subject they're discussing overlaps with cultural, political, social, or psychological identifications. When that happens, they engage in identity-aggressive and identity-defensive behaviors as evidenced in Harris' language. Harris' talk, IMO, is not really about communication strategies; it's about solidifying the group identification among him and his buds at the conference, and using the tactic of demonizing the "other" as one way to do that.

August 4, 2014 | Unregistered CommenterJoshua

==> "...but in failing to recognise that the climate believers have just tried out all these techniques on us and it didn't work."

And here I see evidence of what I described above: 'Us" versus "them" (the "climate believers"). Stark evidence that for you too, this is about identification and labeling.

==> "The fundamental point everyone misses when they try to use this as a tool for persuasion is that it is symmetrical. The problems in reasoning it identifies applies to both sides, and the proposed solutions can be used equally easily by both sides. And everybody thinks 'This sort of cognitive bias explains how the other side can think as they do, but obviously it doesn't apply to the position taken by my side.' Everybody thinks 'Oh, those methods obviously wouldn't work on me, but I'm sure they'll work just fine on those less-sophisticated people over there.' It's the Republican Brain thesis all over again."

Agreed, and it is also the "climate alarmists" and "climate believers" and "deniers" and "anti-science" and "us" and "them" all over again.

==> "The truth is, nobody in the 'science communication' business really believes it can be true, because if they did they would have to abandon their own position first. They would have to accept that they cannot know for sure that any of their own beliefs and knowledge are true, because it could just be the result of motivated reasoning."

You make an inaccurate presumption there - one that flies in the face of evidence - that everyone in the 'science communication business' thinks that they can know for sure what will work.

==> "They can't rely on the opinions of scientists, because scientists are as human and as subject to motivated reasoning as anyone..."

That depends on what you mean by "rely." If you mean "have blind faith in" then you are right. But then you're speaking about only those who have such blind faith - and I think you'd be hard-pressed to find many of those. If you mean see that vastly shared opinions among scientists indicate something about probabilities, then I think that you are wrong.

==> "So of course everyone tries to use it to dissolve their opponent's beliefs, ..."

It? Scientific belief? Putative wisdom about "science communication?" And "everyone?" Really?

==> "... to achieve some degree of certainty"

What does that mean: "some degree of certainty?" Seems to me like certainty doesn't exist in degrees. It's kind of a binary condition. I think that prevalence of scientific opinion can inform probabilities, respectively, with reference to two or more possibilities.

August 4, 2014 | Unregistered CommenterJoshua

"Stark evidence that for you too, this is about identification and labeling."

??

"First, he is speaking publicly and being videotaped. His language is a matter of public record."

Who are you talking about? Harris, or George Marshall?

"You make an inaccurate presumption there - one that flies in the face of evidence - that everyone in the 'science communication business' thinks that they can know for sure what will work."

You're making an inaccurate presumption that I was talking about the methods rather than the message.

"Second,unless you think that he's lying to the Heartland audience about his real beliefs, he is stating his belief."

Ummm, yeah? What else would you expect him to do? Do you think that when people call us 'deniers', that they don't really mean it? What's your point?

"Unless in all future attempts to communicate to those "alarmists" about "realism," he lies or hides his real beliefs he is showing what his strategy is."

Why would anyone need to hide it? His advice consists of 1) don't insult them, 2) use arguments likely to appeal to left-wing ideals such as the damage being done to the poor, and 3) research the philosophy of science to present the argument in terms of widely accepted scientific principles. If in future some random right-wing climate sceptic decides not to insult you and talks about, say, poor Africans being denied coal-powered electricity, are you going to accuse them of having seen Harris's presentation and cynically pretending to be polite as an evil tactic to better persuade you? Are you serious?

There's nothing in Harris's three points that requires him or anyone taking his advice to pretend to agree with the Left, or to pretend that they don't consider the alarm being raised about climate change to be unjustified. The entire point of arguing is to persuade people of exactly that claim, so it's not really going to be hidden anyway.

There's no need to hide the strategy itself - it's all good advice. What I suppose you could argue is being 'hidden' is the fact that a strategy is needed - that while we're being polite, in our heads we're secretly making up rude jokes about Al Gore. And should we be challenged on the point, honesty would compel us to admit that yes we were, and that furthermore we don't think a lot of the Left or their ideas. But you already knew that. The Left have always been pretty clear about what they think of the Right.

However, unless you're arguing that there shouldn't *be* any tribes, I'm not sure what you're proposing we do about it. The idea of SOSC is to avoid using language and ideas that invoke tribal identities. Given that those tribal identities exist, that necessarily means 'hiding' them when talking to people from other tribes - by which I mean not mentioning them or emphasising them, rather than claiming that they're not so. How else could you do it?

"Once again, NiV, even though I believe that you believe that the biasing influence of motivated reasoning is symmetrical, you will often seek to rationalize motivated reasoning on the part of "skeptics" in a transparent way to explain their actions - which are rooted in motivated reasoning - as logical or "skeptical.""

And I believe you'll always try to cast them as illogical or irrational. Your motivated reasoning motivates you to find fault with them.

In this case, I was actually faulting Harris's reasoning, comparing him to "all the people talking about how to win climate 'deniers' over" and explaining how it was irrational to expect these methods to work on the Left when they'd already been tried and failed on the Right. But despite devoting several paragraphs to explaining why Harris was wrong, you still seem to be reading it as support for his argument. What topical psychological phenomenon could possibly explain that...? :-)

"If you mean see that vastly shared opinions among scientists indicates something about probabilities, then I think that you are wrong"

"In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual."

First, we don't actually know what opinions are "vastly shared" amongst scientists - for such a commonly cited statistic there's remarkably little survey work to address that actual question. What surveys do exist suggest that it's relatively contentious outside a very small core. And secondly, what it indicates depends on what evidence those opinions are based on. If the vast majority of scientists have downloaded the data and checked the maths/physics for themselves, and it's clear that any faults found would have been flagged up and corrected, then it might indeed have some value. If the vast majority of scientists are just repeating what they read in the newspapers and government press releases, then what is that consensus worth?

It was Tom Wigley who said "No scientist who wishes to maintain respect in the community should ever endorse any statement unless they have examined the issue fully themselves." Faith is blind when they haven't.

August 4, 2014 | Unregistered CommenterNiV
