Weekend update: Is critical reasoning domain independent or domain specific?... a fragment of an incomplete rumination
An adaptation of a piece of correspondence--one no longer, really, than this--with a thoughtful person who proposed that people have "corrective mechanisms" for the kind of "likelihood ratio cascade" that I identified with "coherence-based reasoning" and that I asserted makes "rules of evidence" impossible:
What are these corrective mechanisms?
I ask not because I doubt they exist but because I suspect that they do -- & that their operation has evaded full understanding because of a mistaken assumption central to the contemporary study of cognition.
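To make the "likelihood ratio cascade" concrete: in Bayesian terms, each independent item of proof should shift the factfinder's odds by its own likelihood ratio, assessed on its own merits. Coherence-based reasoning, on the account I've given, lets the evaluator's current view of the case inflate the probative weight she assigns to each new item, so independent evidence gets partly double-counted. The toy sketch below is my own illustration only; the `pull` contamination parameter is an assumed stand-in, not an empirical quantity:

```python
def posterior_odds(prior_odds, lrs):
    """Ideal Bayesian updating: multiply prior odds by each item's
    likelihood ratio, each assessed independently of the others."""
    odds = prior_odds
    for lr in lrs:
        odds *= lr
    return odds

def cascaded_odds(prior_odds, lrs, pull=0.5):
    """Hypothetical coherence-based updating: the evaluator's current
    odds distort the likelihood ratio she assigns to each new item,
    so earlier evidence contaminates the weighing of later evidence."""
    odds = prior_odds
    for lr in lrs:
        # distort each item's LR in the direction of the current view
        distorted = lr * odds ** pull
        odds *= distorted
    return odds

# three modestly probative, genuinely independent items (LR = 2 each)
print(posterior_odds(1.0, [2, 2, 2]))   # 8.0
print(cascaded_odds(1.0, [2, 2, 2]))    # larger: same proof, double-counted
```

With `pull` set to zero the two functions agree; any positive contamination makes the same body of proof look far more one-sided than it is, which is the cascade the correspondence refers to.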
That assumption is that reasoning proficiencies--the capacity to recognize covariance, give proper effect to base rates, distinguish systematic relationships from chance co-occurrences, & perform like mental operations essential to making valid inferences--are more or less discrete, stand-alone "modules" within a person's cognitive repertoire.
If the modules are there, and are properly calibrated, a person will reliably summon them for any particular task that she happens to be doing that depends on that sort of mental operation.
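"Giving proper effect to base rates," for instance, is the operation tested by the familiar disease-screening problem. The numbers below are my own illustration, not drawn from any of the instruments cited here:

```python
def prob_condition_given_positive(base_rate, sensitivity, false_positive_rate):
    """Bayes' rule: probability the condition is present given a
    positive screening result."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# 1% base rate, 90% sensitivity, 9% false-positive rate: most
# positives are false positives despite the seemingly accurate test
p = prob_condition_given_positive(0.01, 0.90, 0.09)
print(round(p, 3))  # 0.092
```

The DI conception assumes that a person who can run this computation on an abstract word problem will run it whenever any real task demands it; the DS conception, described next, denies exactly that.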
Call this the "domain independent" conception (DI) of cognitive proficiency. DI is presupposed by standardized assessments like the Cognitive Reflection Test (Frederick 2005) and the Numeracy scale (Peters et al. 2006), which purport to measure the specified latent reasoning capacities "in general," that is, abstracted from anything in particular one might use them for.
Another conception sees cognitive proficiency as intrinsically domain specific. On this view--call it the DS conception--it's not accurate to envision reasoning abilities of the sort I described as existing independently of the activities that people use them for (cf. Hetherington 2011).
Accordingly, a person who performs miserably on a context-free assessment of, say, the kind of logical-reasoning proficiency measured by an abstract version of the Wason Selection Task--one involving cards with vowels and numbers on either side--might in fact always (or nearly always!) perform that sort of mental operation correctly in all the real-world contexts she is used to encountering that require it. In fact, people do very well at the Wason Selection Task when it is styled as something more familiar--like detecting a norm violator (Gigerenzer & Hug 1992).
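For readers who don't know the abstract task: the rule is "if a card has a vowel on one face, it has an even number on the other," and the trick is to turn over only the cards that could falsify it. A minimal sketch (the four-card set is the standard illustration; the function is my own):

```python
def cards_to_flip(visible):
    """Return the visible faces worth turning over to test the rule
    'if vowel on one side, then even number on the other'.
    Only a card that might show vowel + odd number can falsify it."""
    vowels = set("AEIOU")
    flip = []
    for face in visible:
        if face in vowels:                           # might hide an odd number
            flip.append(face)
        elif face.isdigit() and int(face) % 2 == 1:  # might hide a vowel
            flip.append(face)
    return flip

# the classic presentation: most subjects wrongly choose "A" and "4"
print(cards_to_flip(["A", "K", "4", "7"]))  # ['A', '7']
```

The logic is trivially mechanical, which is what makes the abstract-versus-familiar performance gap so striking for the DS view.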
In sum, reasoning proficiencies are not stand-alone modules but integral components of action-enabling mental routines that are reliably summoned to mind by a person's perception of the sorts of recurring problem situations those routines, including their embedded reasoning proficiencies, help her to negotiate.
DS is suspicious of standardized assessments, including the usual stylized word problems that are thought by decision scientists to evince one or another type of "cognitive bias." By (very deliberately) effacing the contextual cues that summon to mind the mental routines and embedded reasoning proficiencies necessary to address recurring problem situations, such tests grossly overstate the "boundedness" of human rationality (Gigerenzer 2000).
Indeed, by abstracting from any particular use to which people might put the reasoning proficiencies they are evaluating, such assessments and problems are actually measuring only how good people are at doing tests. In fact, people can train themselves to become very proficient at a difficult type of reasoning task for purposes of taking an exam on it and then evince complete innocence of that same sort of knowledge in the real-world settings where it actually applies (DiSessa 1982)!
DI and DS have different accounts of "expertise" in fields that involve reasoning tasks that are vulnerable to recurring cognitive biases. DI identifies that expertise with the cultivation of general, context-free habits of mind that evince the disposition to use "conscious, effortful" ("system 2") forms of information processing (Sunstein 2005).
DS, in contrast, asserts that "expertise" consists in the possession of mental routines, and their embedded reasoning proficiencies, specifically suited for specialized tasks. Those mental routines include the calibration of rapid, intuitive, pre-conscious, affective forms of cognition (or better, recognition) that reliably alert the expert to the need to bring certain conscious, effortful mental operations to bear on the problem at hand. The proper integration of these reciprocal intuitive and conscious forms of cognition, tailored to specialized tasks, is the essence of professional judgment.
Nonexperts can be expected to display one or another bias when confronted with those same problems. But the reason isn't that the nonexpert "thinks differently" from the expert; it's that the expert has acquired through training and experience mental routines suited to do things that are different from anything the ordinary person has occasion to do in his or her life (Margolis 1987, 1993, 1996).
Indeed, if one confronts an expert with a problem divorced from all the cues that reliably activate the cognitive proficiencies she uses when she performs professional tasks, one is likely to find that the expert, too, is vulnerable to all manner of cognitive bias.
But if one infers from that that the expert therefore can't be expected to resist those biases in her professional domain, one is making DI's signature mistake of assuming that reasoning proficiencies are stand-alone modules that exist independent of mental routines specifically suited for doing particular things (cf. Kahan, Hoffman, Evans, Devins, Lucci & Cheng in press) ....
Or that at least is what a DS proponent would say. And she would say the same, I suspect, of jurors asked to perform specialized factfinding tasks.
Not because "jurors" or other "nonexperts" are "stupid." But because it is stupid to think that doing what is required to make accurate findings of fact in legal proceedings does not depend on the cultivation of habits of mind specifically suited for that task.
I tend to think the DS proponent comes closer to getting it right. But of course, I'm not really sure.
DiSessa, A.A. Unlearning Aristotelian Physics: A Study of Knowledge-Based Learning. Cognitive Science 6, 37-75 (1982).
Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).
Gigerenzer, G. Adaptive Thinking: Rationality in the Real World (Oxford University Press, New York, 2000).
Gigerenzer, G. & Hug, K. Domain-specific reasoning: Social contracts, cheating, and perspective change. Cognition 43, 127-171 (1992).
Hetherington, S.C. How to Know: A Practicalist Conception of Knowledge (J. Wiley, Chichester, 2011).
Kahan, D.M., Hoffman, D.A., Evans, D., Devins, N., Lucci, E.A. & Cheng, K. 'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment. U. Pa. L. Rev. 164 (in press).
Margolis, H. Dealing with Risk: Why the Public and the Experts Disagree on Environmental Issues (University of Chicago Press, Chicago, 1996).
Margolis, H. Paradigms and Barriers: How Habits of Mind Govern Scientific Beliefs (University of Chicago Press, Chicago, 1993).
Margolis, H. Patterns, Thinking, and Cognition: A Theory of Judgment (University of Chicago Press, Chicago, 1987).
Peters, E., Västfjäll, D., Slovic, P., Mertz, C.K., Mazzocco, K. & Dickert, S. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).
Sunstein, C.R. Laws of Fear: Beyond the Precautionary Principle (Cambridge University Press, Cambridge, 2005).