I attended a great conference on "self-deception" sponsored by the Institute for Advanced Study at L'université Toulouse Capitole (UT Capitole).
The concept of "self-deception" encompasses forms of information-processing that predictably bias individuals' beliefs toward some self-serving end or goal.
The main theoretical/scholarly issues are two: first, whether "self-deception" is at least under some circumstances "rational," or in any case beneficial to those who engage in it; and second, whether there is a cogent psychological mechanism that could explain the feasibility of this sort of rational or "adaptive" self-deception, given that it is presumably self-defeating to pursue such a state consciously (b/c if one knows one is deceiving oneself, one will not be deceived into subscribing to the false belief).
We heard many interesting takes on these questions.
I made two principal points.
First, contrary to the dominant decision-science and political-science accounts, identity-protective cognition -- the species of motivated reasoning that generates political polarization on decision-relevant science -- is not a consequence of over-reliance on heuristic or "System 1" information processing; indeed, it is magnified by proficiency in one or another of the reasoning dispositions associated with the conscious, effortful form of information processing characteristic of "System 2."
Or so I argued on the basis of various CCP study results.
To me this suggests it is not tenable to see identity-protective reasoning as a "cognitive bias."
It is individually rational to process information on societal risks in this manner when one's own exposure to that risk is not materially affected by the correctness of one's views but where one's status in one's cultural group is very much affected by the congruity of one's beliefs with those that predominate in the group.
This is so for climate change, gun control, fracking, etc.
Of course, if everyone engages in this individually rational mode of information processing at the same time, the results can be collectively disastrous. Under these conditions, culturally diverse citizens will fail to converge on the best currently available evidence essential to enactment of democratic laws that protect the welfare of all.
That consequence, though, won't change anyone's individual psychic incentives to process information in the personally beneficial manner associated with identity-protective cognition. This is, as I've described it before, the "tragedy of the science communications commons."
This point aligned me pretty squarely with the economist contingent at the conference, which was mainly intent on demonstrating that "self-deception" is "rational" in the sense of welfare-maximizing at the individual level.
My second point was less in line with the views of the economists but likely more in line with at least some of the members of psychologist contingent at the conference (& I think with Richard Holton, the lone philosopher on the program, who gave a very insightful & helpful talk).
The point was that I didn't really think it was theoretically cogent or psychologically realistic to describe identity-protective reasoning as a form of self-deception.
It's true that this mode of information processing systematically promotes formation of beliefs that aren't aligned with the best currently available evidence. (There was some pushback on this along the predictable "but that's perfectly consistent with Bayesianism..." lines. It never ceases to astonish me how many economists & political scientists have trouble grasping the conceptual distinction between truth-convergent Bayesian updating, in which one's priors are updated on the basis of evidence whose likelihood ratio, or weight, is determined by independent truth-convergent criteria; and confirmation bias, in which one uses one's priors to determine the likelihood ratio assigned to new evidence.)
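To make that parenthetical distinction concrete, here is a toy simulation -- entirely my own illustration, not anything presented at the conference. The truth-convergent Bayesian assigns each piece of evidence a likelihood ratio fixed independently of her priors; the confirmation-biased agent discounts uncongenial evidence as a function of what she currently believes. The function names and the square-root discount rule are arbitrary illustrative choices.

```python
def update(p, lr):
    """Bayesian update: posterior odds = prior odds x likelihood ratio."""
    odds = (p / (1 - p)) * lr
    return odds / (1 + odds)

def run(prior, stream, biased=False):
    """Process a stream of evidence, each item a likelihood ratio."""
    p = prior
    for lr in stream:
        if biased:
            # Confirmation bias (toy version): evidence congenial to the
            # current belief gets full weight; uncongenial evidence is
            # discounted by pulling its likelihood ratio toward 1.
            congenial = (lr > 1) == (p > 0.5)
            lr = lr if congenial else lr ** 0.5
        p = update(p, lr)
    return p

# An evidence stream that is, on net, uninformative: the LRs multiply to 1.
stream = [2.0, 0.5] * 5

believer, skeptic = 0.7, 0.3
# Truth-convergent updating: both agents end back at their starting beliefs.
print(run(believer, stream), run(skeptic, stream))
# Prior-driven weighting: the same neutral evidence polarizes the two agents.
print(run(believer, stream, biased=True), run(skeptic, stream, biased=True))
```

On the neutral stream, the unbiased agents' beliefs are unmoved, while the biased believer drifts toward certainty and the biased skeptic toward disbelief -- polarization from identical evidence, which is the failure of truth-convergence at issue.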
But I don't really see why this makes identity-protective cognition an instance of "self-deception."
People do things with information other than use it to form "accurate beliefs." One of those other things they use information for is to cultivate dispositions that evince their commitment to values that unite them with other members of affinity groups important to their identity.
Sometimes the way to evince such commitments is by holding certain beliefs about risks or other related facts that, by virtue of one or another socially and historically contingent set of events, have come to be understood as a badge of membership in a particular cultural group.
If the belief in question serves no other purpose for the person, then he or she is not deceiving him- or herself at all by forming it through this style of information processing -- any more than the person would be if he or she used this form of information processing, say, to form the disposition to leave a tip at a restaurant (Frank 1988).
Or so it seems to me.
I think the reason so many scholars regard this form of information processing as "self-deception" is rooted in a psychologically implausible view of "beliefs" as isolated states of assent or nonassent to factual propositions.
The mind is not a registry of atomistic propositional stances.
It comprises a wide array of mental routines, which themselves consist of bundles of intentional states--desires, emotions, moral evaluations--each of which is suited for doing something.
As elements of these action-enabling ensembles, beliefs are dispositions to action (Peirce 1877; Braithwaite 1946).
If someone is using a style of information processing to form clusters of intentional states that reliably alert and motivate him or her to display identity-congruent societal risk perceptions in appropriate circumstances, then that person is doing with his or her reason something akin to what someone does when internalizing a disposition to conform to norms that signify being a socially competent actor.
In this sense, "beliefs" in "climate change," "evolution," "the deterrent effect of gun control laws" & the like are more akin to action-promoting attitudes than bare states of assent or non-assent to context-free factual propositions.
If one accepts this view, none of the puzzles that vex "self-deception" need arise.
A person who forms "beliefs" on these issues in the course of cultivating affective states that express his or her identity (Akerlof & Kranton 2000; Anderson 1993) is not "deceiving" him- or herself -- or anyone else -- about anything.
This assumes, of course, that this is what a person is doing with information relevant to forming a "belief" on a risk or like fact.
Sometimes people do other things with such beliefs -- like being good "doctors," or "farmers," or "judges," or other types of professionals.
In that case, we might see "cognitive dualism," the condition in which the actor forms opposing states of beliefs as part of separate and discrete action-enabling ensembles of intentional states.
The Pakistani Dr "disbelieves in" evolution at home to be a good Muslim, but "believes in" it at work to be a good Dr.
The Kentucky Farmer, likewise, "disbelieves in" climate change to be a good Hierarchical Individualist in the settings where that is what he is doing; but "believes in" it when he is atop his tractor, engaged in "zero tillage" or like practices that he knows will help him master the challenges that global warming will create for success in his occupation.
The propositional stances in the disbelief-belief couplings are indeed inconsistent if we abstract them from the action-enabling ensemble of mental states of which they are a part.
But doing that is not faithful to the agent's psychology. The opposing "beliefs" and "disbeliefs" don't exist apart from the action-enabling bundles of intentional states in which they reside. If those actions aren't inconsistent, then there is no "conflict" between any meaningful mental objects residing in the agent's mind.
Introduced with a discussion of the Pakistani Dr & the Kentucky Farmer, this last point -- about cognitive dualism -- predictably dominated discussion.
I'm not sure how I feel about that.
It's interesting and fun to see people struggle with the point (especially when one invokes Kantian dualism & adds a Laplacian cosmologist who is proud of his or her children to the mix).
But if that point isn't really the point of the presentation, it can end up being a bit of a show stealer and, ultimately, a distraction.
That doesn't make me doubt "cognitive dualism," of course. If anything, it strengthens my resolve to investigate it; that it bothers and disorients people so much means something, I suspect.
But "cognitive dualism" is severable from "motivated system 2" reasoning, certainly, and I don't want to leave anyone with any misimpressions about that.
Better to address difficult issues one at a time.
But here is something that can be figured out w/o any great difficulty at all: L'université Toulouse is really cool! I was awed by the number of talented scholars engaged both in high-level investigations of human behavior and in high-level scholarly exchange w/ one another across disciplines.
Akerlof, G.A. & Kranton, R.E. Economics and identity. The Quarterly Journal of Economics 115, 715-753 (2000).
Frank, R.H. Passions Within Reason: The Strategic Role of the Emotions (Norton, New York, 1988).
Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).
Peirce, C.S. The Fixation of Belief. Popular Science Monthly 12, 1-15 (1877).