Saturday, April 20, 2013

Still more Q & A on "cultural cognition scales" -- and on the theory behind & implications of them

I was starting to formulate a contribution to some of the great points made in the discussion of the post on Q&A on "cultural cognition scales" & figured I might as well post the response. I encourage others to read the comments there--you'll definitely learn more from them than from what I'm saying here, though maybe a marginal bit more still if you read my contribution in addition to those reflections, and almost certainly more still if others are moved by what I have to say here to refine and extend the arguments presented there. Likely, too, it would make sense for the discussion to continue in the comments to this post, if there is interest in continuing.

1. Whence predispositions, and the revision of them

How does this theory then explain the change from one group identity to another? You don't argue that such change doesn't occur, I see, since you say that there's "no reason why individuals can't shift & change w/ respect to them" -- but why isn't there such a reason, since you've given a good phenomenological description of the group pressures brought to bear on individuals to keep them in the herd, so to speak?

I don't really know how people form or why they change the sorts of affinity-group commitments that will result in the sorts of dispositions we can measure w/ the cultural worldview scales.  My guess is that the answer is the same as the one one would give about why people form & change the sorts of orientations that are connected to religious identifications & ideological or political ones: social influences of various sorts, most importantly family & immediate community growing up; some possibility of realignment upon exposure, at an impressionable period of life (more typically college age than adolescence or earlier), to new perspectives & new, compelling sources of affinity; thereafter usually nothing of interest & lots of noise, but maybe some traumatic life experience, etc.

Question I'd put back is: why is this important given what I am trying to do? I want to explain, predict, and formulate constructive prescriptions relating to conflict over science relevant to individual & collective decisionmaking. Knowing that the predispositions in question are important to that means it is important to be able to measure them.  But it doesn't mean, necessarily, that I need a good account of whence the predispositions, or of change -- so long as I can be confident (as I am) that they are relatively stable across the population. 

I suppose someone could say, "you should have a theory of the “whence & reformation of” predispositions b/c you might then be able to identify strategies for shaping them as a means of averting conflict/confusion over science" etc.  But I find that proposition (a) implausible (I think I know enough to know that regulating formation of such affinities is probably not genuinely feasible) & more importantly (to me) (b) a moral/political nonstarter: in a liberal society, it is not appropriate to make formation of people's values & self-defining affinities a conscious object of govt action.  On the contrary, it is one of the major aims of the "political science of democracy" (in Tocqueville's sense) to figure out how to make it possible for a community of diverse citizens to realize their common interest in knowing what's known without interfering with their diversity.

2. On change in how groups with particular predispositions engage or assess risks

And a related question would be: how do the group perceptions of risk themselves change over time? Ruling out mystical or telepathic bonds between group members, how does a change get started, who starts it, and how or where do those starters derive their perception of risk? (Consider, e.g., nuclear power.)

There is an account of this in "the theory." 

The "cultural cognition thesis" says that "culture is prior" -- cognitively speaking --" to facts."  That is, individuals can be expected to engage information in a manner that conforms understanding of facts to conclusions the cultural meanings of which are affirming to their cultural identities. 

So when a putative risk source -- say, climate change or guns or HPV or nuclear power or cigarettes -- becomes infused with antagonistic meanings, “pouring more information” on the conflagration won’t put it out; it will likely only inflame it further.

Instead, one must do something that alters the meanings, so that positions are no longer seen as uniquely tied to cultural identities.  At that point, people will not face the same psychic pressure that can induce them (all the more so when they are disposed toward analytical, reflective engagement with information!) to reject scientific evidence on any position in a closed-minded fashion.

Will groups change their minds, then? Likely some will; or really, there will likely be convergence among persons with diverse views, since, like all members of a liberal market society, they share faculties for reliably recognizing the best available scientific evidence, and at that point those faculties will no longer be distorted or disabled by the sort of noise or pollution created by antagonistic cultural meanings.

Examples? For real-world ones, consider the discussions (of cigarettes, of abortion in France, of air pollution in the US, etc.) in these papers:

The Cognitively Illiberal State, 60 Stan. L. Rev. 115 (2007)

Fear of Democracy: A Cultural Evaluation of Sunstein on Risk, 119 Harv. L. Rev. 1071 (2006) (with Paul Slovic, John Gastil & Donald Braman)

Cultural Cognition and Public Policy, 24 Yale L. & Pol'y Rev. 149 (2006) (with Donald Braman)

For an experimental “model” of this process, see our paper on geoengineering & the “two-channel” science communication strategy:

Geoengineering and the Science Communication Environment: a Cross-Cultural Experiment

And for more still on how knowing why there is cultural conflict can help to fashion strategies that dispel sources of conflict & enable convergence, see

Is cultural cognition a bummer? Part 1

3. What about the “objective reality of risk” as opposed to the cultural cognition of it?

These questions themselves derive from a sense I have that the group-identity theory of risk perception is not wrong but incomplete, and the area in which it's incomplete is of major importance in addressing any theory of communication to do with risk -- that area is the objective reality of risk, as determined not by group adherence, and not by authority (even the authority of a science establishment), but rather by evidence and reason.

To start, of course the theory is “incomplete”; anyone who thinks that any theory ever is “complete” misunderstands science’s way of knowing! Such a person also misunderstands something much more mundane—the limited ambition of the ‘cultural cognition’ framework, which aspires only to a more edifying and empowering understanding of the “science communication problem,” something I think one can have w/o having much to say about many other things of importance.

But the “theory” as it stands does have a position, or at least an attitude, about the “reality” of the knowledge, confusion over which is the focus of the “science communication problem.”  The essence of the attitude comes down to this:

a. Science’s way of knowing—which treats as entitled to assent (and even that only provisionally) conclusions based on valid inference from valid empirical observation—is the only valid way to know the sorts of things that admit of this form of inquiry. (The idea that things that don’t admit of this form of inquiry can’t be addressed in a meaningful way at all is an entirely different claim and certainly not anything that is necessary for treating science’s way of knowing as authoritative within the domain of the empirically observable; personally, I find the claim annoyingly scholastic, and the people who make it simply annoying.)

b. People, individually & collectively, will be better off if they rely on the best available scientific evidence to guide decisions that depend on empirical assumptions or premises relating to how the world (including the social world) works.

c. In the US & other liberal democratic market societies—the imperfect instantiations of the Liberal Republic of Science as a political regime—people of all cultural outlooks in fact accept that science’s way of knowing is authoritative in this sense & also very much want to be guided by it in the way just specified.

d. Those who accept the authority of science & who want to be guided by it will necessarily have to accept as known by science much, much more than they could ever hope to comprehend in a meaningful sense themselves. Thus their prospects for achieving their ends in this regard depend on their forming a reliable ability to recognize what’s known to science.  The citizens of the Liberal Republic of Science have indeed developed this faculty (and it is very much a faculty that consists in the exercise of reason; it is an indispensable element of “rationality” to be able reliably to recognize who knows what about what).

e. The process of cultural cognition, far from being a bias, is part of the recognition faculty that diverse individuals use reliably to recognize what is known by science.

f. The “science communication problem” is a consequence of conditions that disable the reliable exercise of this faculty.  Those conditions involve the entanglement of empirical propositions with antagonistic cultural meanings – a state that interferes with the normal convergence of the culturally diverse citizens of the Liberal Republic of Science on what is known to science.


Reader Comments (5)

"a. Science’s way of knowing—which treats as entitled to assent (and even that only provisionally) conclusions based on valid inference from valid empirical observation—is the only valid way to know the sorts of things that admit of this form of inquiry."

Yes, but it needs more than that.

The problem is how to decide what constitutes valid inference, using an information processor (the human brain) that is itself not guaranteed to proceed according to correct/valid reasoning. One that in fact fairly certainly does not, given that different people come to different conclusions on the same data. They can't all be right. How can you tell whether you are rational, using reasoning processes that might not be rational? A madman thinks they're the only one that's sane.

The answer science came up with is sceptical challenge. One person puts forward what they consider to be valid reasoning, and other people try to pick holes in it. They check the calculations, repeat the experiments, look for gaps in the chain of logic, possible ways it might have gone wrong. They project implications and extreme/boundary cases and check them for consistency with past observations. They make predictions and check them against future observations. They systematically seek out alternative explanations. They document assumptions made and the range of circumstances explored. And then they do it again, and again. No conclusion or theory is beyond challenge. If any apparent discrepancy is noted, it must be chased down until it is known where the error lies - with the one checking it or the one checked. And then the error is eliminated. It is Science's immune system.

It works because people have different flaws, and different motivations, so what one person misses another will spot. Science depends on scepticism.

"The idea that things that don’t admit of this form of inquiry can’t be addressed in a meaningful way at all is an entirely different claim"

I'd guess you're alluding to religion, but the claim can be applied more generally, and is obviously wrong. People made great progress before science, and non-scientists can certainly enquire meaningfully. The difference is that they use not only valid logic but also a wide range of heuristics - which, while not entirely reliable, are in their way far more powerful. You can get an answer that is mostly right, that is probably right, but far faster and with less effort. On most of the simple problems people face in their daily lives, it works fine.

But as the length of the chain of logic required extends, the probability of error accumulates. After two or three steps the conclusions are still reasonable, but after twenty or thirty steps, or two or three hundred, the odds of not having made an error become vanishingly small, unless each step is known with extremely high confidence. This is why science takes obsessive care about precision, accuracy, and details. This is why it insists on the highest quality, the most rigorous logic, the most extensive checks and calibration.
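
(To put rough numbers on that compounding -- my own illustrative arithmetic, not anything from the post -- suppose each step of a chain is independently right with probability p; then the whole chain holds up with probability p^n. A tiny Python sketch:)

# Illustrative only: probability that an n-step chain of reasoning
# contains no error, assuming each step is independently correct
# with probability p.
for p in (0.95, 0.999):
    for n in (3, 30, 300):
        print(f"p={p}, n={n}: chain survives with probability {p**n:.3g}")
# p=0.95:  n=3 -> 0.857, n=30 -> 0.215, n=300 -> 2.08e-07
# p=0.999: n=3 -> 0.997, n=30 -> 0.97,  n=300 -> 0.741

Ninety-five percent reliability per step is fine for three steps and hopeless for three hundred; only something like 99.9% per step keeps a long chain standing, which is why the obsessive care over precision and rigour matters.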

However, for shorter chains sloppier methods work sufficiently well. "Correlation implies causation" is a fallacy - but it's true often enough to be useful. The same for ad hominem, argument from authority, argument from silence, appeal to ignorance, the fallacies of composition and division, and so on. That's why human reasoning evolved to use them. In everyday life everybody uses them all the time.

"b. People, individually & collectively, will be better off if they rely on the best available scientific evidence"

But the problem is: how do they tell what that is?

"c. In the US & other liberal democratic market societies—the imperfect instantiations of the Liberal Republic of Science as a political regime—people of all cultural outlooks in fact accept that science’s way of knowing is authoriative in this sense"

"Authoritative" in what sense? We've had this argument before.

"d. Those who accept the authority of science & who want to be guided by it will necessarily have to accept as known by science much much more than they could ever hope to comprehend in a meaningful sense themselves. Thus their prospects for achieving their ends in these regards depends on their forming a reliable ability to recognize what’s known to science."

Yes, so long as it is understood that that is not science, and does not itself carry its authority. Blindly trusting authority without understanding is an effective heuristic, but it is not valid inference, and no substitute for understanding it oneself.

It is paradoxical, in that it only works so long as a lot of people don't use it. It can be trusted only because it is not trusted, and has therefore been repeatedly checked. The building contains no burglars because it is full of guards on patrol, but it does not follow that because there are no burglars the guards are unnecessary. Take the guards away, and see what happens.

It may be that those who want to be guided by science must accept that this is an impossible aspiration. We are not yet capable of doing science - only some approximation to it. The universe is not required to bend to our wishes.

And yet, although we cannot achieve perfection, that doesn't mean that we shouldn't try to get as close to it as we can. It doesn't mean that we should tolerate unnecessarily low standards, or that we should tolerate error. It's one thing not to check a well-established result against which nothing has been said. It's another thing completely to dismiss known anomalies and inconsistencies and flawed logic that cast doubt on a theory. It is circular to dismiss them on the grounds that if there was anything to the anomalies we would already know about it.

"Meanwhile, some more nullius in verba -- one of your favorite themes, on which I'm slowly, imperceptibly (of course, imperceptibly; that is an essential part of my strategy) moving you over to my position"

:-)

I shall be interested to see if the experiment succeeds!

In the meantime, thank you for another excuse to go on a philosophical digression. :-)

April 20, 2013 | Unregistered Commenter NiV

Great post, Dan, including especially the blizzard of links and papers that I haven't had time to digest but which look not just interesting in themselves, but likely to answer my further questions and arguments. Fool that I am, though, I'll rush in.

First, my point in saying that the theory may be incomplete wasn't to insist in some picky way on a completeness that, as you rightly point out, is both impossible and irrelevant -- rather, it was to suggest that the theory lacks something important to its own purpose and functioning -- i.e., important precisely for the "science communication problem". That missing something is the influence of reason and evidence on cultural cognition groups, an influence that operates across quadrants and, in that sense, acts as a counter to the pressure of group loyalty that the theory embodies.

So that was really the background to my questions regarding how change occurs, either at the individual or cognition group level. I'm with you completely in the respect you accord to citizen diversity, particularly of course the diversity of cultural "predispositions". But -- and I think you'd perhaps agree with this -- I think those predispositions can and do change, on both individual and group levels, under the impact of evidence and reason, and that's the factor I'm interested in (as distinct from individual trauma, random experiences, etc.).

Which brings me to your point 3:

I largely agree with you through a, b, and c.

Point d is a little more controversial in that you have sometimes seemed to reject the Royal Society motto on what seems to me to be an overly literal reading of it. Of course we have to take others' word for it in the vast majority of situations, not just re: science, or we'd be paralyzed or worse. The point of the motto, however, is surely just not to accept an appeal to authority in matters that are controversial -- good advice not just for scientists themselves, but for citizens of what you call the Liberal Republic of Science as well, in matters of controversy involving political or economic policies, and moral or cultural values. The more technical the issue, the more difficult this can become, but, as you say, "it is an indispensable element of “rationality” to be able reliably to recognize who knows what about what." This recognition necessarily involves heuristics, as NiV points out, and as I'd agree -- such heuristics may be "shortcuts" as one of your links claims, but such shortcuts are necessary alternatives to either acquiring a career's expertise in a wide variety of fields, or passively taking the word of someone with a PhD and a labcoat but with questionable policy expertise and divergent values. One of my heuristics, for example, is to watch for emotional appeals that tend to support the expert's own likely predispositions.

Points e and f are what I primarily disagree with, or maybe just don't understand. You say that the problem lies in "the entanglement of empirical propositions with antagonistic cultural meanings", and imply, as I read it, that the solution to the problem lies in disentangling the two. My sense is that it would be nice if it were that simple, but that, alas, the two are often enough inherently entangled, because empirical propositions bear upon cultural meanings. And this "bearing upon" cuts both ways -- propositions are not true merely because they're empirical, and so cultural meanings can be the motivation behind making various sorts of empirical assertions, as we've seen frequently. It would be nice, certainly, if we had some unimpeachable source of truth in the real world, not just the ideal world of ideologically/culturally pure or at least compartmentalized scientists -- but, in my view, though probably not yours, we don't, and scientists too, notwithstanding their "professional norms", will often leak their cultural meanings into their empirical propositions.

This, though, is where I think that introducing some notion of change, on both individual and group levels, might aid the theory and strengthen the model that could then be used to improve communication modes. To the extent that we can disentangle cultural meanings and empirical propositions, leaving cultural predispositions unchanged, we should do so. But when that's not feasible, we should look at ways of promoting change, even on the cultural cognition level, that can leave the predispositions intact, but changed sufficiently to now bring empirical propositions and cultural meanings into better agreement, a change that, as I say, can cut in both directions. I'm not saying I know how to do that, but I think looking at adding notions of reason-and-evidence into the fabric of the model of cultural cognition of risk might be a start.

With apologies for the length.

April 20, 2013 | Unregistered Commenter Larry

@NiV:

For us to make progress in our discussion here, we must clarify some things. Or really, we have to get past caricatures of each other's positions.

I will try to correct your caricature of my understanding in a way that I suspect you will still see as a caricature of yours!

1. What you see as "perfection" -- that an individual should accept as known only what he or she determines for him- or herself using science's distinctive mode of attaining knowledge -- I see as a kind of fable. I don't mean to mock it (it being nullius in verba); but I mean to treat it for what it is -- an uplifting expression of sentimentality for the citizens of the Liberal Republic of Science, a kind of patriotic anthem or whatnot.

2. What you see as a kind of deficiency in reason or misadventure that we should try to overcome -- viz., that we do in fact acquire knowledge by trusting in the authority of others who are in a position to give an accurate report of what has been determined to be known in the way science accepts -- I see as integral to reason, as something w/o which science would be impossible.

Are you able to see how I could say these things? About why I believe my positions should be seen as perfectly obvious & in no way subversive of acceptance of science's way of knowing?

Are you able to help me see, then, why my view of your position must be a caricature?

Or is there another citizen of the Liberal Republic of Science who can help the both of us out?

April 20, 2013 | Registered Commenter Dan Kahan

@Larry:
Don't apologize, to me at least, about length.
Now I must reflect, though, on what you say, since maybe you have (in your cross-comment) arrived to help me & NiV out in the way I hoped someone might.

April 20, 2013 | Registered Commenter Dan Kahan

" that an individual should accept as known only what he or she determines for him- or herself using science's distinctive mode of attaining knowledge"

As I said in my first comment in the first link, this is a sort of caricature of what nullius in verba is intended to mean.

The point of the motto is not that everything must be re-examined, but that everything can be re-examined. There are no statements anywhere that cannot be challenged, and when challenged you cannot simply take someone's word for it that it is true. You have to check.

Because what 'Nullius in Verba' is really about is the insight that established/traditional knowledge can be wrong, and this is a safeguard against the slow corruption of our body of knowledge. It is the immune system of science constantly searching for infections and destroying them. Most of the time, you can accept standard results fairly safely precisely because they have been combed over by this mechanism so many times before. It is the reason and mechanism by which belief in science is rationally justified. But you must never forget that this is a pragmatic compromise with scientific principle, one that is technically illegitimate, and there is no result so standard that it would not have to be re-examined again from scratch if plausibly challenged.

Yes, it is, or should be, an inspiring fable. But it is also the active ingredient, the functional component, the thing that makes science uniquely capable. Using authority in science is like the barman putting water in the whisky. It enables it to go further, and if you only use a little nobody will likely notice, and you can tell people that "everybody does it", but water is not whisky, and authority is not science.

--

"viz., that we do in fact acquire knowledge by trusting in the authority of others who are in a position to give an accurate report of what has been determined to be known in the way science accepts"

OK, I'll give you an example of something said by someone in a position to know what has been determined to be known in the way science accepts. I'd like to know if you trust it, or whether you think it should be checked.

"Here, the expected 1990-2003 period is MISSING - so the correlations aren't so hot! Yet the WMO codes and station names /locations are identical (or close). What the hell is supposed to happen here? Oh yeah - there is no 'supposed', I can make it up. So I have :-)

If an update station matches a 'master' station by WMO code, but the data is unpalatably inconsistent, the operator is given three choices:

<BEGIN QUOTE>
You have failed a match despite the WMO codes matching.
This must be resolved!! Please choose one:

1. Match them after all.
2. Leave the existing station alone, and discard the update.
3. Give existing station a false code, and make the update the new WMO station.

Enter 1,2 or 3:
<END QUOTE>

You can't imagine what this has cost me - to actually allow the operator to assign false WMO codes!! But what else is there in such situations? Especially when dealing with a 'Master' database of dubious provenance (which, er, they all are and always will be).

False codes will be obtained by multiplying the legitimate code (5 digits) by 100, then adding 1 at a time until a number is found with no matches in the database. THIS IS NOT PERFECT but as there is no central repository for WMO codes - especially made-up ones - we'll have to chance duplicating one that's present in one of the other databases. In any case, anyone comparing WMO codes between databases - something I've studiously avoided doing except for tmin/tmax where I had to - will be treating the false codes with suspicion anyway. Hopefully.

Of course, option 3 cannot be offered for CLIMAT bulletins, there being no metadata with which to form a new station.

This still meant an awful lot of encounters with naughty Master stations, when really I suspect nobody else gives a hoot about. So with a somewhat cynical shrug, I added the nuclear option - to match every WMO possible, and turn the rest into new stations (er, CLIMAT excepted). In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don't think people care enough to fix 'em, and it's the main reason the project is nearly a year late."

These are a climate scientist's lab notes describing his attempts to figure out and fix a climate database that has already passed peer review and is used in the IPCC assessments. This is the guy at the publishing institution currently responsible for producing it. There is no person better placed to tell you what has been determined to be known in the way science accepts. The paper hasn't been withdrawn. The database is still published on the internet. The IPCC have issued no correction. It has authority, backed by the credibility of the entire scientific establishment.

So who do you trust? The guy actually doing the work writing in the privacy of his lab notes, or the world-wide scientific community who have said there's nothing wrong, this is still what science accepts, and you can (and should) trust it? Or the world media, and our social networks of experts and bloggers and journalists and campaigners who will also tell us on which side the authority of science is vested?

What is your position here on what we should do? If we don't know about Harry's notes, should we trust the institution and the journal and the scientific community when they say this database is fine, or should we insist that we ought to be able to check it ourselves if we want to? If we don't have a reason to suspect this is going on should we assume it's not? Is this sort of thing merely the acceptable price we pay for progress? Now that we know, should we trust one or other of them, or should we say that the only way to know is to look for ourselves?

This is not about whether climate science is right or wrong. (That we have two versions here is probably not the result of SCE pollution.) I'm asking what protocol a typical man on the street should apply to determine who knows what about what science knows. And, if different, what protocol a scientist working in another field - one with the expertise to understand and who has been asked to back a position with their scientific reputation - should follow. Who do you trust? How should you decide?

Because I don't see how you can. The only way to be sure is to look for yourself.
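
(For the concretely minded: the "false code" recipe in the quoted notes amounts, as far as I can tell, to roughly the following -- a minimal Python sketch of my own, not the actual CRU code, with an in-memory set standing in for the station database.)

# Rough reconstruction of the quoted scheme, for illustration only:
# multiply the legitimate 5-digit WMO code by 100, then add 1 at a
# time until a number is found with no match in the database.
def make_false_wmo_code(legit_code, existing_codes):
    candidate = legit_code * 100
    while True:
        candidate += 1
        if candidate not in existing_codes:
            return candidate

# e.g. station 72494, with the first two synthetic codes already taken
print(make_false_wmo_code(72494, {7249401, 7249402}))  # -> 7249403

Trivial in itself, of course; the question is only how anyone downstream of the published database is supposed to know it is there without looking.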

--

"Are you able to see how I could say these things? About why I believe my positions should be seen as perfectly obvious & in no way subversive of acceptance of science's way of knowing?"

I can certainly see ways you could think that - but I'm not confident that the ways I see are what you're actually thinking. I suspect it is a matter of narratives. I think you have a particular narrative in mind - the careful scientist works night and day for many years to invent transistors, the scientific community nod sagely at his careful methods, the people buy the transistors and benefit hugely. But I think this is a building with no burglars in it. I have a different narrative in mind - one that includes careful scientists inventing transistors, but that also could contain less careful ones.

April 21, 2013 | Unregistered Commenter NiV
