
Weekend update: paradox of scientific knowledge dissemination in the liberal state

From The Cognitively Illiberal State, an early formulation of Popper's Revenge:

A popular theme in the history and philosophy of science treats the advancement of human knowledge as conjoined to the adoption of liberal democratic institutions. It is through incessant exposure to challenge that facts establish themselves as worthy of belief under the scientific method. Liberal institutions secure the climate in which such constant challenging is most likely to take place, both by formally protecting the right of persons to espouse views at odds with dominant systems of belief and by informally habituating us to expect, tolerate, and even reward dissent.

But at the same time that liberalism advances science, it also ironically constrains it. The many truths that science has discovered depend on culture for their dissemination: without culture to identify which information purveyors are worthy of trust, we’d be powerless to avail ourselves of the vast stores of empirical knowledge that we did not personally participate in developing. But thanks to liberalism, we don’t all use the same culture to help us figure out what or whom to believe. Our society features a plurality of cultural styles, and hence a plurality of cultural certifiers of credible information.

Again, the belief that science will inevitably pull these cultural authorities into agreement with themselves reflects unwarranted optimism. In accord with its own professional norms and in harmony with the social norms of a liberal regime, the academy tolerates and even encourages competitive dissent. As a result, cultural advocates will always be able to find support from seemingly qualified experts for their perception that what’s ignoble is also dangerous, and what’s noble benign. States of persistent group polarization are thus inevitable—almost mathematically—as beliefs feed on themselves within cultural groups, whose members stubbornly dismiss as unworthy insights originating outside the group.

Because we have the advantage of science, we undoubtedly know more than previous ages about what actions to take to attain our collective wellbeing. But precisely because we tolerate more cultural diversity than they did, we are also confronted with unprecedented societal dissensus on exactly what to do. 


Reader Comments (13)

"without culture to identify which information purveyors are worthy of trust, we’d be powerless to avail ourselves of the vast stores of empirical knowledge that we did not personally participate in developing."

It's not a question of identifying purveyors. That's just retreating to Argumentum ad Verecundiam again - precisely the fallacy that Science and the Enlightenment supposedly got rid of. It's a matter of identifying which claims have been checked and challenged, and how thoroughly.

"But precisely because we tolerate more cultural diversity than they did, we are also confronted with unprecedented societal dissensus on exactly what to do."

Only on a tiny number of shibboleth issues.

November 19, 2017 | Unregistered CommenterNiV


read this...

November 19, 2017 | Unregistered Commenterdmk38

Yes, the fact that we still tend to place much more trust in those with whom we are culturally aligned is a major issue regarding science that, for whatever reason, comes to major social attention (much science doesn't, as you note in the Tragedy of the Risk-Perception Commons text). But disseminating science does not 'depend on culture'. Assuming that it does, and constantly trying, as so many do, to find more / more intimate modes of cultural transmission, considerably exacerbates the problem. The answer is to take our foot off this pedal, not press harder. This means that dissemination will be slower, more subtle, yet also beneath the cultural radar and far surer across all cultural groups. As you note with your HPV / HBV example, where cultural underwriting is much less, penetration of all groups is much more.

It is also the case that the world supports much less cultural diversity than it once did, by virtue of massively more interaction globally between cultures. Groups we perceive as strongly divergent, for instance Lib/Dems and Rep/Cons, have far more in common than each of them does with a whole raft of historic cultures, to take an example, the Spartans. This does not stop people from feeling identity threatened as the world pool shrinks and the fish are pressed closer together, with smaller differences emphasized more. I merely mention this as the background that cultures are converging, not diverging. It does suggest that exposure to historical cultures would be helpful in establishing context and perhaps thereby reducing conflict. But if cultural modes have hold of the education system (maybe different chunks for different cultures), the context may not make it through their filters.

Nor is it necessarily the case that in practice (strongly) liberal regimes support more diversity. A good rule of thumb regarding the central narrative of any culture is that it is untrue (religions provide the most obvious example, but the same mechanisms support all cultures). It is merely a means to maintain a consensus; it does not reflect reality. Hence one should suspect this tenet, and a look at the current battleground of Uni campuses, for instance, reveals relatively shallow flags of diversity (e.g. color, gender), but not the true depth (i.e. the most important diversity by far, that of ideas), and just like for any other culture, a strong policing that everyone should think and act similarly. Of course (strongly) conservative narratives are about forging consensus too and have their own similar flags of convenience. So in thinking about how science may be better employed to humanity's advantage, it is probably helpful to put aside connection with all cultures, including for instance the assumption of a beneficial connection between science and liberalism. That doesn't mean we shouldn't think about cultures of course, but in the objective.

The cultural dilemma goes deeper, because the lesson of evolution is that diversity (biological and cultural) leads to greater success. However, the diversity is maintained largely by competitive modes, which at group level are between units bound by ever increasing altruism (for humans, cultural units). Cultural symbols determine who is in-group or out-group. Similar rules apply in economics, and it seems an optimum balance of managed co-opetition is likely the answer*. However, maintaining this in society by reason rather than by cultural instinct would be an incredibly hard trick to achieve, and could go very badly wrong due to our lack of knowledge, as have attempts at other 'reasoned' schemes to govern society. (*Cons lean a little more to competition than co-operation, Dems vice-versa, so their alternating influence / ascendancy dynamically balances the line of co-opetition, yet not by direct reason, though of course it is the case that democracy is a first stage reasoning system that allows cultural dynamics to proceed with less war or other full conflict, like revolution, coups etc. Older co-opetitive systems include coalitions like the nations of the UK.)

November 19, 2017 | Unregistered CommenterAndy West

link drop:

November 20, 2017 | Unregistered CommenterJonathan

another link drop:

November 20, 2017 | Unregistered CommenterJonathan


Thanks. I like Popper.

November 20, 2017 | Unregistered CommenterNiV

@NiV--you won't stop liking him, I'm sure, but you'll likely have an issue w/ the role that he assigns "tradition" in facilitating recognition of genuine insights from science

November 20, 2017 | Registered CommenterDan Kahan

"@NiV--you won't stop liking him, I'm sure, but you'll likely have an issue w/ the role that he assigns "tradition" in facilitating recognition of genuine insights from science"

I will?!

The question about the sources of our knowledge can be replaced in a similar way. It has always been asked in the spirit of: "What are the best sources of our knowledge--the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?"

I propose to assume, instead, that no such ideal sources exist--no more than ideal rulers--and that all "sources" are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: "How can we hope to detect and eliminate error?" (from Popper's "On the Sources of Knowledge and of Ignorance")

The question of the sources of our knowledge, like so many authoritarian questions, is a genetic one. It asks for the origin of our knowledge, in the belief that knowledge may legitimise itself by its pedigree. The nobility of the racially pure knowledge, the untainted knowledge, the knowledge which derives from the highest authority, if possible from God: these are the (often unconscious) metaphysical ideas behind the question. My modified question, "How can we hope to detect error?" may be said to derive from the view that such pure, untainted and certain sources do not exist, and that questions of origin or of purity should not be confounded with questions of validity, or of truth.

What bit of that do you think I won't like?

"It has always been asked in the spirit of: "What are the best sources of our knowledge--the most reliable ones, those which will not lead us into error, and those to which we can and must turn in case of doubt, as the last court of appeal?" I propose to assume, instead, that no such ideal sources exist" versus "It's not a question of identifying purveyors."

"How can we hope to detect and eliminate error?" versus "It's a matter of identifying which claims have been checked and challenged, and how thoroughly."

I'm guessing you had a different part of the essay in mind?

November 21, 2017 | Unregistered CommenterNiV

Hang on, I've just realised which bit you mean!

You're thinking of this bit:

4. Quantitatively and qualitatively by far the most important source of our knowledge--apart from inborn knowledge--is tradition. Most things we know we have learned by example, by being told, by reading books, by learning how to criticise, how to take and accept criticism, how to respect truth.

5. The fact that most of the sources of our knowledge are traditional condemns anti-traditionalism as futile. But this fact must not be held to support a traditionalist attitude: every bit of our traditionalist knowledge (and even our inborn knowledge) is open to critical examination and may be overthrown. Nevertheless, without tradition, knowledge would be impossible.


In answer to that, I'll refer you to one of my earlier comments here.

Scientists (those that understand how the scientific method works) don't rely on criteria like prestige or character judgements. They use criteria related to how often and how thoroughly a result has been checked - preferably by hostile critics motivated to find flaws in it. Science gains credibility by surviving criticism - it's like evolution by natural selection that way. If you know that a particular theory - like the second law of thermodynamics - has come under determined and ingenious attack thousands of times and survived every one of them, you can rely on it as pretty solid. You can rely on college textbook derivations because you know thousands of students and lecturers have gone through the arguments in detail. You can rely on theories like special relativity, because you know that everyone who has ever come across it has initially spent many hours trying to find reasons why it's got to be wrong - not just for the glory of beating Einstein, but because it just looks so crazy!

Science deals with our cognitive blindspots and human fallibility by means of systematic scepticism. If a possible plausible counter-argument to any theory is found, or even suspected, the theory is considered 'disputed' and has to be checked before proceeding to use it. Likewise if a theory is new and has not been thoroughly tested yet. (Most academic journal papers are in this position.) Only after it has been thoroughly tested and no surviving counterarguments are known can it be 'trusted', and used by scientists without them having to personally verify the chain of reasoning supporting it themselves.

There is a huge amount of science that *has* been so tested. Scientists rely on institutional and social measures to know what parts this applies to. Unfortunately, the teaching of science by authority used in schools has permeated many of the college-educated scientists too, who ought since to have been taught otherwise, and many professional scientists do extend their trust to other sources - investing their belief, as you say, in the prestige and authority of the source. Argumentum ad Verecundiam, as Locke called it. Some even go so far as to subscribe to consensus, as if science were subject to a popularity contest!

November 21, 2017 | Unregistered CommenterNiV


I happened to be reading something recently that mentions Daubert v. Merrell Dow Pharmaceuticals. Do you have an opinion on whether and how Daubert skews the credibility of scientific evidence? I've read both mild and apocalyptic opinions elsewhere.

November 22, 2017 | Unregistered CommenterJonathan

Net neutrality is another example of Popper's "sources of ignorance". The entire alt-right supports the FCC in its stance banning yet another Obama-era totalitarian plan to limit free dissemination of information:

"[...] All of the big Silicon Valley tech companies are 100 percent supportive of net neutrality. They’ve actually been the biggest beneficiaries of this policy because it ensures that Internet service providers can’t charge differently based on the traffic type. This means a provider like Verizon or Comcast can’t tell Facebook, Google or Netflix that they’re going to charge them more because of the traffic they push. [...] these companies aren’t supporting net neutrality because of some noble and just reason. They just don’t want to get charged more for Internet services."

November 24, 2017 | Unregistered CommenterEcoute Sauvage

The link as posted above must be revised since Daily Stormer lost its Hong Kong DNS - will update later if anybody is interested.
Meanwhile, a new excellent article in Nature on the replication crisis:

And sample of abysmal statistical analysis from the left, proving a little learning is a dangerous thing:

November 29, 2017 | Unregistered CommenterEcoute Sauvage

sorry typo, Nature article is ON replication crisis


JEFF LEEK: Adjust for human cognition

To use statistics well, researchers must study how scientists analyse and interpret data and then apply that information to prevent cognitive mistakes.

In the past couple of decades, many fields have shifted from data sets with a dozen measurements to data sets with millions. Methods that were developed for a world with sparse and hard-to-collect information have been jury-rigged to handle bigger, more-diverse and more-complex data sets. No wonder the literature is now full of papers that use outdated statistics, misapply statistical tests and misinterpret results. The application of P values to determine whether an analysis is interesting is just one of the most visible of many shortcomings.

It’s not enough to blame a surfeit of data and a lack of training in analysis [1]. It’s also impractical to say that statistical metrics such as P values should not be used to make decisions. Sometimes a decision (editorial or funding, say) must be made, and clear guidelines are useful.

The root problem is that we know very little about how people analyse and process information. An illustrative exception is graphs. Experiments show that people struggle to compare angles in pie charts yet breeze through comparative lengths and heights in bar charts [2]. The move from pies to bars has brought better understanding.

We need to appreciate that data analysis is not purely computational and algorithmic — it is a human behaviour. In this case, the behaviour is made worse by training that was developed for a data-poor era. This framing will enable us to address practical problems. For instance, how do we reduce the number of choices an analyst has to make without missing key features in a data set? How do we help researchers to explore data without introducing bias?

The first step is to observe: what do people do now, and how do they report it? My colleagues and I are doing this and taking the next step: running controlled experiments on how people handle specific analytical challenges in our massive online open courses [3].

We need more observational studies and randomized trials — more epidemiology on how people collect, manipulate, analyse, communicate and consume data. We can then use this evidence to improve training programmes for researchers and the public. As cheap, abundant and noisy data inundate analyses, this is our only hope for robust information.

BLAKELEY B. MCSHANE & ANDREW GELMAN: Abandon statistical significance

In many fields, decisions about whether to publish an empirical finding, pursue a line of research or enact a policy are considered only when results are ‘statistically significant’, defined as having a P value (or similar metric) that falls below some pre-specified threshold. This approach is called null hypothesis significance testing (NHST). It encourages researchers to investigate so many paths in their analyses that whatever appears in papers is an unrepresentative selection of the data.
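The "unrepresentative selection" McShane & Gelman describe is easy to see in simulation. The sketch below is illustrative only (it is not from the Nature article; the setup and names are my own): it runs 1,000 experiments in which the true effect is exactly zero, applies a two-sided z-test, and keeps only the results with P < 0.05. Around 5% pass the threshold by chance, and those selected "effects" are, on average, far larger than the typical one.

```python
import math
import random
import statistics

random.seed(0)

def p_value_two_sided(sample, sigma=1.0):
    """Two-sided z-test p-value for H0: mean == 0, with known sigma."""
    n = len(sample)
    z = statistics.fmean(sample) / (sigma / math.sqrt(n))
    # Standard-normal tail probability via the complementary error function.
    return math.erfc(abs(z) / math.sqrt(2))

# 1,000 experiments of n=20 observations each; the true effect is zero.
experiments = [[random.gauss(0, 1) for _ in range(20)] for _ in range(1000)]
results = [(statistics.fmean(s), p_value_two_sided(s)) for s in experiments]

all_effects = [abs(m) for m, _ in results]
significant = [abs(m) for m, p in results if p < 0.05]  # what NHST would publish

print(f"'significant' findings: {len(significant)} / 1000")  # ~5% false positives
print(f"mean |effect|, all experiments:   {statistics.fmean(all_effects):.3f}")
print(f"mean |effect|, only p < 0.05:     {statistics.fmean(significant):.3f}")
```

The selection step is the whole story: conditioning on P < 0.05 guarantees every surviving estimate exceeds roughly 1.96/sqrt(20) ≈ 0.44 in magnitude, even though the true effect in every experiment is zero.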

November 30, 2017 | Unregistered CommenterEcoute Sauvage
