
New paper: "Laws of cognition, cognition of law"

This teeny weeny paper is for a special issue of the journal Cognition. The little diagrams illustrating how one or another cognitive dynamic can be understood in relation to a simple Bayesian information-processing model are the best part, I think; I am almost as obsessed with constructing these as I am with generating the multi-colored "Industrial Strength Risk Perception Measure" scatterplots.


Reader Comments (7)

The thing that assigns likelihood ratios to different hypotheses is what I would call the 'statistical model', and it is indeed a matter of choice - either by a priori assumption or by its own separate hypothesis-evidence update process.

The 'narrative templates' sound more like what I'd call the 'choice of hypotheses', which again can be done in different ways, and the choice can affect the outcome. Sometimes a particular context will suggest adding hypotheses that one wouldn't normally include. Sometimes it will suggest a particular division of cases. And while these can technically be represented using priors, we don't think of it that way. We pick a set of likely hypotheses that we then approximate as being exhaustive, to simplify the process. Near-certainties we round up to certainties, and so on.

While one can argue about whether a choice of hypotheses should be explicitly represented in the Bayesian framework or treated as a set of priors on the union of all possibilities, I do think that the statistical model definitely ought to be represented explicitly.
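To make the 'choice of hypotheses' point concrete, here is a minimal Python sketch (illustrative numbers only): leaving a hypothesis out of the chosen set is equivalent to giving it prior zero, so two observers looking at the same evidence can rank the hypotheses they do consider quite differently.

```python
# Posterior over a chosen hypothesis set: P(H|E) is proportional to
# P(H) * P(E|H), normalized over whatever hypotheses were included.
# A hypothesis left out of the set gets posterior 0 regardless of evidence.

def normalize(weights):
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

def bayes_update(priors, likelihoods):
    """Bayes' rule restricted to the hypotheses the observer chose to consider."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    return normalize(unnorm)

# Same evidence (same likelihoods), but observer 2 never considers H3.
likelihoods = {"H1": 0.1, "H2": 0.3, "H3": 0.9}

priors_1 = normalize({"H1": 1, "H2": 1, "H3": 1})  # uniform over three
priors_2 = normalize({"H1": 1, "H2": 1})           # H3 omitted entirely

post_1 = bayes_update(priors_1, likelihoods)
post_2 = bayes_update(priors_2, {h: likelihoods[h] for h in priors_2})
```

Observer 1 ends up favoring H3; observer 2, who approximated {H1, H2} as exhaustive, ends up confident in H2 - same evidence, different conclusions, with the divergence hidden in the hypothesis set rather than in the priors or the likelihoods.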

October 27, 2014 | Unregistered CommenterNiV


Well, real people rarely "explicitly represent" their "statistical models." Or maybe a better way to put it: real people just are implicit statistical models, and the interesting thing is to try to parameterize them...

I have found, as a descriptive matter, that trying to tease out the influences that, in conceptual terms, alter the "likelihood ratio" associated with information is a profitable way to test empirically how people form perceptions of risk & similar facts.

October 28, 2014 | Registered CommenterDan Kahan

"Well, real people rarely "explicitly represent" their "statistical models.""

Oh, I agree. They rarely explicitly represent their priors, or their likelihood ratios either.

Bayesians, on the other hand, do.

"Or maybe a better way to put it, real people just are implicit statistical models, and the interesting thing is to try to parameterize them..."

I assumed that with your diagrams you were trying to draw an analogy between Bayesian formalism (priors and likelihood ratios) and the way people think (beliefs and evidence). The analogy is not exact - there have been many demonstrations that human expertise is not consistent with Bayesian belief - but to the extent that it works, the element that generates likelihood ratios from hypotheses - corresponding to that mental model of the world by which we predict the likely consequences of the different hypotheses - is called the statistical model in the formal Bayesian analysis. I thought since you were labelling the other modules with their corresponding Bayesian terms, you might like to label that one too.

Its significance is, unfortunately, often ignored. There are usually several choices that can be made, or some uncertainty about which is proper, but people without a deep understanding will often assume one implicitly without realising it, and then wonder why other people come to different conclusions on the same evidence. There have, for instance, been several famous examples in climate science. So I'm always happy to see someone recognise that there is more to Bayes than "posterior equals prior times likelihood ratio", giving what many seem to regard as a mathematically inescapable and unarguable conclusion. Even with the same priors and the same evidence, there is still room for disagreement.
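A toy sketch of that last point, in the odds form of Bayes' rule (posterior odds = prior odds × likelihood ratio), with purely illustrative numbers: two observers can share the same prior and see the same evidence, yet reach opposite conclusions because their statistical models assign different likelihood ratios to that evidence.

```python
# Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio.
# The likelihood ratio LR = P(E|H) / P(E|not-H) comes from the observer's
# statistical model - and that model is a choice, not given by the evidence.

def update(prior, likelihood_ratio):
    """Return posterior P(H|E) given prior P(H) and the LR the model assigns."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

shared_prior = 0.5   # both observers start agnostic
lr_model_a = 4.0     # model A: evidence is 4x more likely if H is true
lr_model_b = 0.25    # model B: the same evidence is 4x more likely if H is false

posterior_a = update(shared_prior, lr_model_a)   # 0.8
posterior_b = update(shared_prior, lr_model_b)   # 0.2
```

Same prior, same evidence; the disagreement lives entirely in the likelihood-generating model.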

October 28, 2014 | Unregistered CommenterNiV


Thanks. I will indeed consider revisions along these lines.

You are right of course that there is myriad evidence that human decision-making is not Bayesian. I find the simple Bayesian framework to be a useful heuristic: start with that & then explicate how any particular mechanism of cognition relates to it, so that we can be clear about its operation & significance. In the course of this, too, I think we find that although many mechanisms featured in cognitive science relate to defects in the capacity to process information in Bayesian terms, many more involve the impact of information & other influences in shaping inputs into Bayesian processing.

Obviously, too, those influences can be normatively appraised. If motivated reasoning is best understood as the impact that some goal or interest external to truth-seeking has on the likelihood ratio assigned to new information, then in many (but probably not all) contexts this will be normatively undesirable.
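That conception of motivated reasoning can be sketched as a goal-dependent distortion of the likelihood ratio (a minimal illustration with hypothetical numbers, not a model of any actual study): the evidentiary weight a truth-seeking processor would assign is the same, but an identity-protective goal inflates or deflates it depending on which conclusion is congenial.

```python
# Motivated reasoning modeled as a distortion of the likelihood ratio:
# the assigned LR depends not just on the evidence but on whether the
# conclusion it supports is congenial to the reasoner's identity.

def update(prior, likelihood_ratio):
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

def motivated_lr(true_lr, congenial, bias=3.0):
    """Inflate the LR when the evidence favors the identity-congruent view;
    deflate it when the same evidence cuts the other way."""
    return true_lr * bias if congenial else true_lr / bias

true_lr = 2.0  # the evidence modestly supports H for a truth-seeking processor

unbiased = update(0.5, true_lr)
believer = update(0.5, motivated_lr(true_lr, congenial=True))
skeptic = update(0.5, motivated_lr(true_lr, congenial=False))
```

The same modestly probative evidence moves the unbiased processor moderately toward H, pushes the identity-motivated "believer" much further, and actually moves the "skeptic" away from H - polarization produced at the likelihood-ratio step rather than in priors or evidence.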

October 30, 2014 | Registered CommenterDan Kahan

@NiV: What source would you suggest that refers to "likelihood ratios" as part of "the statistical model" that is presupposed by or outside of Bayesian inference? I have to admit that I've not seen decision theorists giving much thought to "where do likelihood ratios come from?" *Priors* -- sure, b/c all one needs to say is, who cares about them? So long as we have access to enough information & update properly, we'll converge on the best estimate regardless of where we started.

But of course, that assumes we assign the proper likelihood ratio to new information. (Of course, what likelihood ratio to assign that information might be something that someone else is investigating in a process that is itself appropriately Bayesian -- but somewhere this must end.)

November 4, 2014 | Registered CommenterDan Kahan


I'm not sure what the origin of the terminology is - you're right that it's not ordinarily defined or discussed explicitly, but rather taken for granted.

One reference to the general Bayesian inference approach that discusses models extensively is Burnham, K.P., and Anderson, D.R. (2002). Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd ed. Springer-Verlag. ISBN 0-387-95364-7.

For example:

"The likelihood function L(θ|x,model) makes it clear that for inference about θ, data and the model are taken as given. Before one can compute the likelihood that θ = 5.3, one must have data and a particular statistical model."

I'm told a lot of other people have referenced this book as an authoritative source for statistical models, but I wouldn't want to say there isn't a better one. (It may be worth noting that the book often calls it a "probability model" as well.)

The comparison of likelihoods in the Bayesian updating process is effectively a 'likelihood ratio test' - you might find more if you check some references to that.

November 6, 2014 | Unregistered CommenterNiV

@NiV-- many thanks!

November 7, 2014 | Registered CommenterDan Kahan
