“System 1” and “System 2” are intuitively appealing but don’t make sense on reflection: Dual process reasoning & science communication part 1

“Dual process” theories of cognition (DPT) have been around a long time but have become dominant in accounts of risk perception and science communication only recently, and in a form that reflects the particular conception of DPT popularized by Daniel Kahneman, the Nobel Prize winning behavioral economist.

In this post, the first in a two-part series, I want to say something about why I find this conception of DPT unsatisfying. In the next, I’ll identify another that I think is better.

Let me say at the outset, though, that I don’t necessarily see my argument as a critique of Kahneman so much as an objection to how his work has been used by scholars who study public risk perceptions and science communication.  Indeed, it’s possible Kahneman would agree with what I’m saying, or qualify it in ways that are broadly consistent with it and that I agree improve it.

So what I describe as “Kahneman’s conception,” while grounded in his own exposition of his views, should be seen as how his position is understood and used by scholars diagnosing and offering prescriptions for the pathologies that afflict public risk perceptions in the U.S. and other liberal democratic societies.

This conception of DPT posits a sharp distinction between two forms of information processing: “System 1,” which is “fast, automatic, effortless, associative and often emotionally charged,” and thus “difficult to control or modify”; and “System 2,” which is “slower, serial, effortful, and deliberately controlled,” and thus “relatively flexible and potentially rule-governed.” (Kahneman did not actually invent the “System 1/System 2” terminology; he adapted it from Keith Stanovich and Richard West, psychologists whose masterful synthesis of dual process theories is subject to even more misunderstanding and oversimplification than Kahneman’s own.)

While Kahneman is clear that both systems are useful, essential, “adaptive,” and so forth, System 2, on his account, is more reliably connected to sound thinking.

In Kahneman’s scheme, System 1 and System 2 operate serially: the assessment of a situation suggested by System 1 always comes first and is then (time, disposition, and capacity permitting) interrogated more systematically by System 2 and consciously revised if in error.

All manner of “bias,” for Kahneman, can in fact be understood as manifestations of people’s tendency to make uncorrected use of intuition-driven System 1 “heuristics” in circumstances in which the assessments that style of reasoning generates are wrong.

Human rationality is “bounded” (an idea that Kahneman and those who elaborate his framework take from the pioneering decision scientist Herbert Simon), but how perfectly individuals manifest rationality in their decisionmaking, on Kahneman’s account, reflects how adroitly they make use of the “monitoring and corrective functions of System 2” to avoid the “mistakes they commit” as a result of over-reliance on System 1 heuristics.

This account has attained something akin to the status of an orthodoxy in writings on public risk perception and science communication (particularly in synthetic works in the nature of normative and prescriptive “commentaries,” as opposed to original empirical studies). Popular writers and even many scholars use the framework as a sort of template for explaining myriad public risk perceptions (from climate change and terrorism to nuclear power and genetically modified foods) that, in these writers’ views, the public is over- or underestimating as a result of its reliance on “rapid, intuitive, and error-prone” System 1 thinking, and that experts are “getting right” by relying on methods (such as cost-benefit analysis) that faithfully embody the “deliberative, calculative, slower, and more likely to be error-free” assessments of System 2.

This is the account I don’t buy.

It has considerable intuitive appeal, I agree.  But when you actually slow down a bit and reflect on it, it just doesn’t make sense.

The very idea that “conscious” thought “monitors” and “corrects” unconscious mental operations is psychologically incoherent.

There is no thought that registers in human consciousness that wasn’t, an instant earlier, residing in some element of a person’s “unconsciousness” (in some form, though likely not one that could usefully be described as a “thought,” or at least not anything with concrete, articulable propositional content).

Moreover, whatever yanked it out of the stream of unconscious “thought” and projected it onto the screen of consciousness also had to be an unconscious mental operation.  Even if we imagine (cartoonishly) that there was a critical moment in which a person consciously “noticed” a useful unconscious “thought” floating along and “chose” to fish it out, some unconscious cognitive operation had to occur prior to that for the person to “notice” that thought, as opposed to the literally infinite variety of alternative stimuli, inside the mind and out, that the person could have been focusing his or her conscious attention on instead.

Accordingly, whenever someone successfully makes use of the “slower, serial, effortful, and deliberately controlled” type of information processing associated with System 2 to “correct” the “fast, automatic, effortless, associative and often emotionally charged” type of information processing associated with System 1, she must be doing so in response to some unconscious process that has reliably identified the perception at hand as one in genuine need of conscious attention.

Whatever power the “deliberative, calculative, slower” modes of conscious thinking have to “override” the mistakes associated with “rapid, intuitive, and error-prone” intuitions about risk, then, necessarily signifies the reliable use of some other form of unconscious or pre-conscious mental operation that in effect “summons” the faculties associated with effortful System 2 information processing to make the contribution they are suited to making.

Thus, System 2 can’t reliably “monitor” and “correct” System 1 (Kahneman’s formulation) unless System 1 (in the form of some pre-conscious, intuitive, affective, automatic, habitual, or otherwise uncontrolled mental operation) is reliably monitoring itself.

The use of System 1 cognitive processes might be integral to the “boundedness” of human rationality.  But how close anyone can come to perfecting rationality necessarily depends on the quality of those very same processes.

The problem with the orthodox picture of deliberate, reliable, conscious “System 2” checking impetuous, impulsive “System 1” can be called the “System 2 ex nihilo fallacy”: the idea that the form of conscious, deliberate thinking one can use to “monitor” and “correct” automatic, intuitive assessments just spontaneously appears (magically, “out of nothing,” and in particular without the prompting of unconscious mental processes) whenever heuristic reasoning is guiding one off the path of sound reasoning.

The “System 2 ex nihilo fallacy” doesn’t, in my view, mean that dual process reasoning theories are “wrong” or “incoherent” per se.

It means only that the truth that such theories contain can’t be captured by a scheme that posits the sort of discrete, sequential operation of “unconscious” and “conscious” thinking that is associated with the view I’ve been describing—a conception of DPT that is, as I’ve said, pretty much an orthodoxy in popular writing on public risk perception and science communication.

In part 2 of this series, I’ll suggest a different conception of DPT that avoids the “System 2 ex nihilo fallacy.”

It is an account that is in fact strongly rooted in focused study of risk perception and science communication in particular.  And it furnishes a much more reliable guide for the systematic refinement and extension of the study of those phenomena than the particular conception of DPT that I have challenged in this post.

Kahneman, D. Maps of Bounded Rationality: Psychology for Behavioral Economics. American Economic Review 93, 1449-1475 (2003).

Simon, H.A. Models of bounded rationality (MIT Press, Cambridge, Mass.; 1982).

Stanovich, K.E. & West, R.F. Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences 23, 645-665 (2000).

Sunstein, C.R. Laws of Fear: Beyond the Precautionary Principle (Cambridge University Press, Cambridge, UK; New York; 2005).
