Tuesday
Jan 13, 2015

Science of Science Communication 2.0, Session 1.1: The HPV Vaccine Disaster

Okay, so here is the first post for the "virtual" Science of Science Communication course 2.0. Actually, I'll be teaching/learning/attending the first "real world" session in about 30 mins.  

Today's readings are in the nature of a "case study" of the introduction of the HPV vaccine in the U.S. & its status today. The material below is in the nature of a "set up" for discussion, which I encourage people to have wherever they want but also in the comments section.  

I'm designating this "Session 1.1" in anticipation that I might myself post something-- in the nature of a "follow up" -- in which case I'll designate "Session 1.2."  Or maybe I'll do something else, who knows.

BTW check out this super cool & generous invitation if you are looking for "virtual" classmates & course materials!!

The introduction of the HPV vaccine in the US-- or the cost of being innocent & ignorant of the science of science communication. . . .

1. Merck’s application for “fast-track” review of female-only shot. It is late 2005. Merck, the manufacturer of the HPV vaccine Gardasil, has applied for “fast-track” FDA approval of a female-only shot.

HPV—the human papilloma virus—is a sexually transmitted pathogen. Exposure is widespread: some 45% of women in their early 20s have been infected. A comparable percentage of men almost certainly have been, too, although there is at this time no effective test for males.

HPV causes genital warts in some but not all infected individuals.

It is also the sole cause of cervical cancer. A disease that can normally be detected at an early stage by a routine pap smear and thereafter successfully treated, cervical cancer nevertheless claims the lives of 3,000 women per year in the U.S. (many more in undeveloped countries that lack effective public health systems).

Clinical tests show that Gardasil confers immunity against 70% of the strains of HPV that can cause cervical cancer. This evidence furnishes logical reason to believe that widespread immunization would reduce cervical cancer rates, although the vaccine is in fact too new, and experience with it in nations where it is already in use too limited, for that proposition to have been empirically tested.

The role of HPV in causing cervical cancer is the basis for Merck’s application for “fast track” review, which is available only for drugs that fill an “unmet medical need” for treatment or prevention of a “serious disease.” The link to cervical cancer is also why the “fast track” application is for a female-only shot: genital warts are not considered a “serious disease,” and while HPV might cause oral or anal cancer in men, there is at this time insufficient evidence to be sure.

If put on the “fast track,” Gardasil will likely be approved for use for women within six months. The FDA review process would otherwise be expected to take three additional years. Within that time frame, in fact, the FDA is likely to approve for males and females both Gardasil and Cervarix, an HPV vaccine manufactured by GlaxoSmithKline and already approved for use in Europe.

2. Health risks? Clinical trials suggest no reason to believe Gardasil poses a risk of dangerous side-effects. Some critics question the quality of this evidence, however, noting the recent withdrawal of Merck’s anti-inflammatory Vioxx based on evidence, known but not initially acknowledged by Merck, that showed the drug increased the risk of heart attacks and strokes.

Other critics suggest that widespread HPV immunization could have perverse behavioral consequences. To be effective, immunization should occur during adolescence, before an individual is likely to have become exposed to the disease through sexual activity. Some groups, including social conservative and religious ones, have voiced concern that immunization will generate a sense of false security in teenage girls, who will therefore be more likely to engage in unprotected sex, exposing themselves to a higher risk of pregnancy or other STDs. There is currently no evidence one way or the other on whether HPV immunization of adolescent girls would have any such effects.

3. The proposed legislative initiative. In addition to seeking fast-track approval of Gardasil, Merck is known to be organizing a nationwide lobbying campaign aimed at securing legislation adding the HPV vaccination to the schedule of universal childhood immunizations treated as a condition of public-school enrollment.

As part of this effort, Merck has reached out to women’s health advocacy groups. These groups strongly support making the HPV vaccine available in the U.S. Merck has proposed that these groups play a lead role in the company’s lobbying campaign, which would be funded by Merck. Merck is also understood to be searching for social conservatives to participate in the campaign.

4. Physicians’ views. There is every reason to believe physicians will view the availability of an HPV vaccination as a very positive development. No major U.S. medical association, however, has taken a position on either Merck’s fast-track proposal or on adding the HPV vaccine to states’ school-enrollment immunization schedules.

At least some physicians, however, have voiced criticism of how the vaccine is being introduced. They assert that Merck’s fast-track application and its planned nationwide legislative campaign are economically motivated: Merck’s goal, they have argued (in various fora, including medical journal commentaries), is to establish a dominant position in the market before the FDA approves GlaxoSmithKline’s rival Cervarix vaccine. Whatever public health benefits might be associated with accelerating the speed with which Gardasil is approved and the HPV vaccine added to universal vaccination schedules, these commentators have warned, will be offset by the increased risk of a political backlash.

5. Political controversy? At this point, there is no meaningful dispute over Gardasil. Indeed, only a minute fraction of the U.S. population has ever heard of the vaccine or even HPV for that matter.

Nevertheless, the prospect of controversy has already been anticipated in the national media. A government-mandated STD shot for adolescent girls, these sources predict, is certain to provoke confrontation between women’s rights groups and religious and social conservatives.

Aside from some women’s health groups, the only other advocacy group to address the HPV vaccine is the Family Research Council. Committed to protecting religious values in American life, FRC has played a major role in opposing public-school instruction on birth control. The FRC has stated that it does not oppose—indeed, “welcomes”—the introduction of the HPV vaccine, but views state-mandated vaccination as interfering with parental control of their children’s health and their sexual behavior.

6. The HBV vaccine. The HPV vaccine would not be the first STD immunization to be placed on states’ school-enrollment vaccination schedules. A decade ago the FDA approved the HBV vaccine for hepatitis B, a sexually transmitted disease that causes a lethal form of liver cancer. The CDC quickly recommended that the vaccine, which had been approved for both males and females, be added to the list of universal childhood immunizations. Within several years, almost every state had added the HBV vaccine to its mandatory-immunization schedule via regulations issued by state public health officials, the conventional—and politically low-profile—process for updating such provisions. The addition of the HBV vaccine to the state schedules generated no particular controversy, and the nationwide vaccination rate for HBV, like other childhood immunizations, has consistently been well over 90%.

7. “Public acceptance” research. Public health researchers have conducted studies specifically aimed at assessing the public acceptability of an adolescent HPV shot. These studies, which consist of surveys of parents with adolescent children, uniformly report that parents say they are unfamiliar with the HPV vaccine but will have their children immunized if their pediatricians recommend doing so.

Issues. Should the FDA grant Merck’s application for fast-track review? Should Merck withdraw it? Should women’s advocacy groups agree to participate in the company’s nationwide legislative campaign? What position, if any, should medical professional associations take? Is the position of social conservative groups like the FRC relevant to these questions?

Thursday
Jan 08, 2015

The "disentanglement research program": a fragment

From something I'm working on -- & closely related to what is described here, of course.

The "disentanglement project": an empirical research program

“Evolution” refers not only to a scientifically grounded account of the natural history of life on earth but also to a symbol in relationship to which people's stances signify membership in one or another cultural group.  The confounding of the former and the latter is at the root of a cluster of related societal problems. One is simply how to measure individual comprehension of evolutionary science and science generally. Another is how to impart collective knowledge on terms that avoid needlessly conditioning its acquisition on an abandonment or denigration of cultural commitments collateral to science.  And a final problem is how to protect the enterprise of acquiring, assessing, and transmitting knowledge from becoming a focal point for cultural status competition corrosive of the reciprocal benefits that science and liberal democratic governance naturally confer on one another.  This paper discusses the “disentanglement project,” an empirical research program aimed at identifying an integrated set of practices for unconfounding the status of evolution as a token of collective knowledge and a symbol of cultural identity within the institutions of the liberal state. 

Wednesday
Jan 07, 2015

So you want to meet the 'Pakistani Dr'? Just pay a visit to the Kentucky Farmer

I now realize that a lot of people think that Hameed’s Pakistani Dr—who without apparent self-contradiction “disbelieves” in evolution “at home” but “believes” in it at work—is a mystery the solution to which must have something to do with his living in Pakistan (or at least having grown up and gone to school there before moving to the US to practice medicine) (Everhart & Hameed 2013).

That’s a big mistake! 

Indeed, in my view it gets things exactly backwards: what makes the Pakistani Dr so intriguing, & important, is that he is the solution to mysteries about the psychology of a lot of people born & bred right here in the U.S. of A!

One place where you can find a lot of Pakistani Drs, e.g., is in the South & Midwest, where their occupation of choice is farming.

Public opinion studies consistently find that farmers are deeply skeptical of climate change (e.g., Prokopy et al. 2014).

Which is to say, when you ask them if they believe human fossil-fuel burning is heating up the planet, they say, “Heck no! Don’t give me that Al Gore bull shit!”

But that’s what happens, you see, if you ask them about what they believe “at home.” 

If you ask them what they believe “at work,” where they must make practical decisions based on the best available evidence, then you are likely to get a completely different answer!

Or so a group of researchers recently reported in an amazingly cool study published in the Journal of Agricultural and Applied Economics (Rejesus, Hensley, Mitchell, Coble & Knight 2013).

Analyzing the results of an N = 1380 USDA-conducted survey of farmers in Mississippi, North Carolina, Texas, and Wisconsin, RHMCK reported that less than 50% in each state agreed with the statement, “I believe human activities are causing changes in the earth’s climate.”

Indeed, only a minority—around a quarter of the respondents in Mississippi, Texas, and Wisconsin; a bit over a third in North Carolina—indicated that they “believe climate change has been scientifically proven” at all.

But when these same respondents answered questions relating to how climate change would affect farmers, only a small minority expressed any doubt whatsoever that the impact would be considerable.

For example,

nearly 60% of producers in Mississippi and Texas, states where scientific proof of climate change is typically not agreed to, believe there will be some change in crop mix resulting from climate change.

Majorities in Mississippi (55%) and North Carolina (56%) also indicated that it was likely that, in response to climate change, farmers in their state would be buying more crop insurance to protect them from the increased variability in yields associated with a higher incidence of extreme weather events.

Of course, you can insure yourself against risks only if the benefits of staying in business exceed the expected costs of enduring them. A lot of farmers think that farming won't be profitable in the future--thanks to climate change.

In North Carolina (57%) and Texas (51%), a majority of the respondents indicated that they thought it was either “likely” or “extremely likely” that climate change would force some farmers out of business.

In none of the states did anything even close to a majority indicate that they thought it was either "unlikely” or “extremely unlikely" that farmers would resort to greater crop rotation, increased insurance coverage, or simply quitting the business altogether in response to climate change.

Obviously, some fraction of the positive responses to these questions came from the minority of farmers in these states who indicated that they do believe climate change is "scientifically proven." 

But it turns out the views of “believers” and “disbelievers” on these matters didn’t vary by much.

  • Likely that farmers will resort to crop diversification as a result of climate change:
    Believers: 51% agree
    Disbelievers: 47% agree

  • Likely that farmers will be driven out of business by climate change:
    Believers: 50% agree
    Disbelievers: 47% agree

  • Likely that farmers will acquire greater crop insurance protection to deal with climate change:
    Believers: 56% agree
    Disbelievers: 45% agree

These self-report data, moreover, match  up quite well with behavioral data, which show that climate-skeptical farmers are already adopting practices (like no-till planting, new patterns of crop rotation, adjustments in growing season projections) in anticipation of climate impacts.

Business actors, moreover, are rushing in to profit from the willingness of farmers to pay for services and technologies that will help them weather climate change. Just ask Monsanto, which is perfectly happy to proclaim its belief in climate change, how excited farmers are about its climate-change resistant GM crops, as well as the company’s new business ventures in supplying climate data and climate-change crop insurance.

How to make sense of this?

The most straightforward answer is the one set forth in the Measurement Problem (Kahan in press): whether people say they “believe” or “disbelieve in” human-caused climate change is not a valid measure of what they know about climate science; rather it is simply an indicator of identity on a par with people’s responses to items that solicit their cultural values, their right-left political outlooks, their religiosity or whathaveyou.

Farmers who express their cultural identity by saying they “disbelieve in” human-caused climate change actually do know a lot about it—much more, probably, than the average person who says he or she does “believe in” climate change but who it turns out is highly likely to think that global warming is caused by sulfur emissions and will stifle photosynthesis in plants.

In the Measurement Problem study, I used a climate-literacy assessment instrument the items of which were carefully calibrated to disentangle or unconfound "identity" and "knowledge."

To me, the RHMCK results suggest that one can unconfound "identity" and "knowledge" in an equivalent way with items that, unlike the cultural-identity-eliciting "do you believe in climate change" item, effectively assess what farmers understand the evidence of climate change to signify for their vocation.

Cool, okay.

But the much more difficult question is—what exactly is going on in the heads of those farmers who clearly comprehend the evidence but who say they “don’t believe in” climate change?

This is exactly what the Pakistani Dr has been trying, so patiently, to help us figure out!

If he hadn't been so persistent in trying to pierce through the dense armor of my incomprehension, I would have had nothing more to say than what I just did—viz., that what a farmer in Mississippi, Texas, North Carolina, or Wisconsin says he “believes” about climate change measures something entirely different from what he “knows” about it.

But now, thanks to what the Dr has taught me, I have a hunch that the “climate change” that that farmer doesn’t “believe in” & the "climate change” he does “believe in” are, as the Dr would say, "entirely different things!"

“Climate change,” certainly, can be defined with reference solely to a state of affairs, or the evidence for it.

But as an object of belief or knowledge, climate change can’t be defined that way.

It’s just plain weird, really, to imagine that if we could somehow take a person, unscrew the lid of his mind, turn him upside down, and shake him a bit, a bunch of discrete “beliefs” would fall onto the ground in front of us. 

What we believe or know—the objects of those intentional states—don’t have any existence independently of what we do with them.  The kinds of things we do, moreover, are multiple and diverse—and correspond to the multiple and diverse roles our integrated identities comprise.

The Pakistani Dr is an oncologist and a proud member of a science-trained profession.  His belief in evolution enables him to be those things.

He is also a devout Muslim.  His disbelief of evolution enables him to be that—when being that is what he is doing.

There’s no conflict!, he keeps insisting. The evolution he “accepts” and the evolution he “rejects” are entirely different things—because the things he is doing with those intentional states are entirely different, and, fortunately for him, perfectly compatible with each other in the life he leads.

Well, for the Kentucky (Mississippi/Texas/North Carolina/Wisconsin/Indiana etc.) farmer, there are two climate changes: the one he rejects to protect his standing in a particular cultural community engaged in an ugly status competition with another whose members’ “belief in” climate change serves the same function; and the one the Kentucky farmer accepts in the course of using his reason to negotiate the challenges of his vocation.

Sadly, the Kentucky farmer lives in a society that makes reconciling the diverse roles that he occupies—the different things he is enabled to do—by “believing in” one “climate change” and “disbelieving in” another much less straightforward, routine--boring even--than what the Pakistani Dr does when he accepts one evolution and rejects another.

This is a big problem.  Not just for the Kentucky farmer but for all those who live in the society so many of whose members find what the Kentucky farmer is doing with his reason not only incomprehensible but simply intolerable. 

References

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).

Kahan, D. Climate Science Communication and the Measurement Problem. Advances in Pol. Psych (in press).

Prokopy, L.S., Morton, L.W., Arbuckle, J.G., Mase, A.S. & Wilke, A. Agricultural stakeholder views on climate change: Implications for conducting research and outreach. Bulletin of the American Meteorological Society  (2014).

Rejesus, R.M., Mutuc-Hensley, M., Mitchell, P.D., Coble, K.H. & Knight, T.O. US Agricultural Producer Perceptions of Climate Change. Journal of Agricultural and Applied Economics 45 (2013).


Tuesday
Jan 06, 2015

"Science of Science Communication" course, version 2.0

This semester I will be teaching my "science of science communication" course for the 2nd time.  

I got my act together this time, too, and had the course, which is a Psychology Dept graduate offering, cross-listed in the School of Public Health, the School of Forestry and Environmental Studies, plus the Law School.  The value of the "science of science communication," it seems to me, depends entirely on the function it can perform in integrating the production of scientific knowledge & science-informed policymaking, on the one hand, with scientific knowledge of the processes by which people come to know what is known by science, on the other. So obviously, offerings like this shouldn't be "in the course catalog" of only decision-science or communication-science disciplines.... 

Anyway, like last time, I'm going to see if I can offer a "virtual" counterpart of the course via this blog.

I'll post course materials, as they become available, here.  Unfortunately, I can't post the readings themselves, since access to portions of them is restricted to users covered by one or another of Yale University's site licenses or subscriptions to various commercial content providers. But I will post the reading lists & various "open access" materials.

After each "real space" session, though, I'll post some sort of synopsis or argument or whatever as a "starter" for discussion.  People can weigh in based on their access to that, plus whatever else they can get their hands on -- including materials other than those assigned to students enrolled in the course at Yale!

This worked pretty well last time, except I wasn't as conscientious as I should have been in posting "starters."

This time I'll do better!

Below I've posted the "course catalog" description of the course, plus the "manifesto" that introduces the course requirements & topics etc.

 

The Science of Science Communication, PSYC 601b, FES 862b, HPM 601, LAW 21141. The simple dissemination of valid scientific knowledge does not guarantee it will be recognized by non-experts to whom it is of consequence. The science of science communication is an emerging, multidisciplinary field that investigates the processes that enable ordinary citizens to form beliefs consistent with the best available scientific evidence, the conditions that impede the formation of such beliefs, and the strategies that can be employed to avoid or ameliorate such conditions. This seminar surveys, and makes a modest attempt to systematize, the growing body of work in this area. Special attention is paid to identifying the distinctive communication dynamics of the diverse contexts in which non-experts engage scientific information, including electoral politics, governmental policy making, and personal health decision making.

* * *

1. Overview. The most effective way to communicate the nature of this course is to identify its motivation.  We live in a place and at a time in which we have ready access to information—scientific information—of unprecedented value to our individual and collective welfare. But the proportion of this information that is effectively used—by individuals and by society—is shockingly small. The evidence for this conclusion is reflected in the manifestly awful decisions people make, and outcomes they suffer as a result, in their personal health and financial planning. It is reflected too not only in the failure of governmental institutions to utilize the best available scientific evidence that bears on the safety, security, and prosperity of their members, but in the inability of citizens and their representatives even to agree on what that evidence is or what it signifies for the policy tradeoffs that acting on it necessarily entails.

This course is about remedying this state of affairs. Its premise is that the effective transmission of consequential scientific knowledge to deliberating individuals and groups is itself a matter that admits of, and indeed demands, scientific study.  The use of empirical methods is necessary to generate an understanding of the social and psychological dynamics that govern how people (members of the public, but experts too) come to know what is known to science. Such methods are also necessary to comprehend the social and political dynamics that determine whether the best evidence we have on how to communicate science becomes integrated into how we do science and how we make decisions, individual and collective, that are or should be informed by science.

Likely you get this already: but this course is not simply about how scientists can avoid speaking in jargony language when addressing the public or how journalists can communicate technical matters in comprehensible ways without mangling the facts.  Those are only two of many “science communication” problems, and as important as they are, they are likely not the ones in most urgent need of study (I myself think science journalists have their craft well in hand, but we’ll get to this in time).  Indeed, in addition to dispelling (assaulting) the fallacy that science communication is not a matter that requires its own science, this course will self-consciously attack the notion that the sort of scientific insight necessary to guide science communication is unitary, or uniform across contexts—as if the same techniques that might help a modestly numerate individual understand the probabilistic elements of a decision to undergo a risky medical procedure were exactly the same ones needed to dispel polarization over climate science! We will try to individuate the separate domains in which a science of science communication is needed, and take stock of what is known, and what isn’t but needs to be, in each.

The primary aim of the course comprises these matters; a secondary aim is to acquire a facility with the empirical methods on which the science of science communication depends.  You will not have to do empirical analyses of any particular sort in this class. But you will have to make sense of many kinds.  No matter what your primary area of study is—even if it is one that doesn’t involve empirical methods—you can do this.  If you don’t yet understand that, then perhaps that is the most important thing you will learn in the course. Accordingly, while we will not approach study of empirical methods in a methodical way, we will always engage critically the sorts of methods that are being used in the studies we examine, and I from time to time will supplement readings with more general ones relating to methods.  Mainly, though, I will try to enable you to see (by seeing yourself and others doing it) that apprehending the significance of empirical work depends on recognizing when and how inferences can be drawn from observation: if you know that, you can learn whatever more is necessary to appreciate how particular empirical methods contribute to insight; if you don’t know that, nothing you understand about methods will furnish you with reliable guidance (just watch how much foolishness empirical methods separated from reflective, grounded inference can involve).

Friday
Jan 02, 2015

Humans using statistical models are embarrassingly bad at predicting Supreme Court decisions....

Demoralizingly (for some people; I don't mind!), computers have defeated us humans in highly discerning contests of intellectual acuity such as chess and Jeopardy.

But what about prediction of Supreme Court decisions?  Can we still at least claim superiority there?

Well, you tell me.

In 2002-03, a group of scholars organized a contest between a computer and a diverse group of human "experts" drawn from private practice and the academy (Ruger, Kim, Martin & Quinn 2004).

Political scientists have actually been toiling for quite a number of years to develop predictive models for the Supreme Court.  The premise of their models is that the Court’s decisionmaking can be explained by “ideological” variables (Edwards & Livermore 2008).

In the contest, the computer competitor, Lexy (let’s call it), was programmed using the field’s state-of-the-art model, which in effect tries to predict the Court's decisions based on a combination of variables relating to the nature of the case and the parties, on the one hand, and the ideological affinity of individual Justices as reflected by covariance in their votes, on the other.

For this reason, the contest could have been seen (and often is described) as one that tested the political scientists’ “ideology thesis” against “formal legal reasoning.” 

But in fact, that's a silly characterization, since the informed professional judgment of genuine Supreme Court experts would certainly reflect the significance of "Justice ideology" along with all the other influences on the Court’s decisionmaking (Margolis 1987, 1996; Llewellyn, 1960).

In any case, Lexy trounced those playing the role of “experts” in this contest.  The political scientists' model correctly predicted the outcome in 75% of the decisions, while the experts collectively managed only 59% correct . . . .

The result was widely heralded as a triumph both for algorithmic decisionmaking procedures over expert judgment & for the political scientists’ “ideology thesis.”

But here’s the problem: while Lexy “did significantly better at predicting outcomes than did the experts” (Ruger et al. 2004, p. 1152), Lexy did not perform significantly better than chance!

The Supreme Court’s docket is discretionary: parties who’ve lost in lower courts petition for review, and the Court decides whether to hear their cases.

It rejects the vast majority of review petitions—96% of the ones on the "paid" docket and 99% of those on the “in forma pauperis” docket, in which the petitioner (usually a self-represented prisoner) has requested waiver of the filing fee on grounds of economic hardship.

Not surprisingly, the Court is much more likely to accept for review a case in which it thinks the lower court has reached the wrong result.

Hence, the Court is far more likely to reverse than to affirm the lower court decision. It is not unusual for the Court to reverse in 70% of the cases it hears in a Term (Hofer 2010).  The average Supreme Court decision, in other words, is no coin toss!

Under these circumstances, the way to test the predictive value of a statistical model is to ask how much better someone using the model would have done than someone uniformly picking the most likely outcome--here, reversal-- in all cases (Long 1997, pp. 107-08; Pampel 2000, p. 51).

In the year in which Lexy squared off against the experts, the Court heard only 68 cases.  It reversed in 72% of them. 

Thus, a non-expert who knew nothing more than that the Supreme Court reverses in a substantial majority of its cases, and who simply picked "reverse" in every case, would have correctly predicted 72% of the outcomes.  The margin between her performance and Lexy's 75% -- a grand total of two fewer correct predictions -- doesn't differ significantly (p = 0.58) or practically from zero. 
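
For the arithmetically inclined, here is a minimal sketch of that comparison in Python. The post doesn't say which test produced p = 0.58; this sketch assumes 51 of 68 correct predictions (75%) for Lexy and runs a simple one-sample z-test against the 72% always-reverse baseline, which lands in the same neighborhood:

```python
from math import sqrt
from scipy.stats import norm

n = 68             # cases the Court decided that Term
k_model = 51       # assumed: 75% of 68 predictions correct
p_baseline = 0.72  # accuracy of uniformly predicting "reverse"

# One-sample z-test (normal approximation) of the model's hit rate
# against the baseline of always picking the most likely outcome
z = (k_model / n - p_baseline) / sqrt(p_baseline * (1 - p_baseline) / n)
p_value = 2 * norm.sf(abs(z))
print(round(p_value, 2))  # ~0.58: no detectable edge over the baseline
```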

A practical person, then, wouldn't bother to use Lexy instead of just uniformly predicting "reverse."

None of the “holy cow!” write-ups on Lexy’s triumph—which continue to this day, over a decade after the contest—mentioned that the algorithm used by Lexy had no genuine predictive value.

But to be fair, the researchers didn't mention that either.

They noted that the Supreme Court had reversed in 72% of its cases only in footnote 82 of their 60-page, 122-footnote paper. And even there they didn't acknowledge that the predictive superiority of their model over the "72% accuracy rate" one would have achieved by simply picking "reverse" in all cases was equivalent—practically and statistically—to chance.  

Instead, they observed that the Court had in some "recent" Terms reversed less than 70% of the granted cases. The previous Terms were in fact the source of the researchers' "training data"--the cases they used to construct their statistical model. They don't say, but one has to believe that their "model" did a lot better than 75% accuracy when it was "fit" retrospectively to those Terms' cases-- or else the researchers would surely have tinkered with its parameters all the more.  But that the resulting model performed no better than chance (i.e., than someone uniformly picking "reverse," the most likely result in the training data) when applied prospectively to a new sample is a resounding verdict of “useless” for the algorithm the researchers derived by those means.

Sure, the “experts” did even worse, and that is embarrassing: it means they'd have done better by not "thinking" and instead just picking "reverse" in all cases, the strategy a non-expert possessed only of knowledge of the Court's lopsided proclivity to overturn lower court decisions would have selected.

But the "experts'" performance—a testament perhaps to their hubristic overconfidence in their abilities but also largely to the inclusion of many law professors who had no general specialty in Supreme Court decisionmaking—doesn’t detract from the conclusion that the statistical model they were up against was a complete failure.

What’s more, I don’t think there’s anything for Lexy or computers generally to feel embarrassed about in this matter. After all, a computer didn’t program Lexy; a group of humans did.

The only thing being tested in that regard was the adequacy of the “ideology thesis”-prediction model developed by political scientists.

That the researchers who published this study either didn’t get or didn’t care that this model was shown to perform no better than chance makes them the ones who have the most to be embarrassed about.

References 

Edwards, H.T. & Livermore, M.A. Pitfalls of empirical studies that attempt to understand the factors affecting appellate decisionmaking. Duke LJ 58, 1895-1989 (2008).

Hofer, R. Supreme Court Reversal Rates: Evaluating the Federal Courts of Appeals. Landslide 2, 10 (2010). 

Llewellyn, K.N. The Common Law Tradition: Deciding Appeals (1960).

Long, J.S. Regression models for categorical and limited dependent variables (Sage Publications, Thousand Oaks, 1997).

Margolis, H. Dealing with risk : why the public and the experts disagree on environmental issues (University of Chicago Press, Chicago, IL, 1996).

Margolis, H. Patterns, Thinking, and Cognition (1987).

Pampel, F.C. Logistic regression : a primer (Sage Publications, Thousand Oaks, Calif., 2000).

Ruger, T.W., Kim, P.T., Martin, A.D. & Quinn, K.M. The Supreme Court Forecasting Project: Legal and Political Science Approaches to Predicting Supreme Court Decisionmaking. Columbia Law Rev 104, 1150-1210 (2004).

Thursday
Jan 01, 2015

"...but that just doesn't happen!..." Or: "Who is the 'Pakistani Dr' now?"--a fragment on the professional judgment of law professors

 From correspondence with a friend & collaborator of preternatural intelligence and critical reflection; in response to her rejection of a "proof," presented in the form of a computer simulation, of the "impossibility" of using "rules of evidence" to conform adversary adjudication to the goal of rational truth seeking:

Extravagance.  "Oh, but this just doesn't happen -- look at the cases!"  Really?  It's in the nature of the phenomenon not to be directly observable. If we are committed to rational truth seeking, we should be trying to figure out how to create observations of influences we wouldn't detect in the normal course but that in fact undermine our conclusions about what we are seeing.  In any case, everything I have ever observed (when I summon the will to observe; like you, like everyone else, I am trained not to) tells me that this is exactly what effective trial advocacy is about.  A trial is not a conveyor belt onto which pieces of evidence are added to be processed down the line by a Bayesian proof aggregator.  It is a violent struggle from the start to impose a narrative template, to which the factfinder can be expected to mold every piece of proof.  The forms of information processing that lawyers anticipate and jockey to grab hold of and point in the desired direction are hostile to accurate factfinding -- deeply hostile to it.  The idea that "trials work just fine, especially with a little fine tuning w/ rules of evidence that anticipate cognitive biases" is a 2nd order form of flawed information processing that occurs in those officially certified to play the role of critical examiners of the system; that they end up saying exactly that, moreover, helps to insulate the flaws even more securely from the truly unbearable realization that we are making people's lives depend on an arbitrary game.  Or in any case, this is what I believe "at home"; "at work" I, too, believe the system is perfectly rational.

 

Monday
Dec 29, 2014

More on Hameed's "Pakistani Dr" -- "explaining contradictory beliefs" begs the question

Just because I haven't been writing about him all the time in this forum doesn't mean I've stopped thinking about Hameed's "Pakistani Dr," the paradigm case of "dualism" or "knowing disbelief" or whathaveyou.  On the contrary, longish periods of inactivity in this blog can be explained by the days at a time I spend in bed (except for a 12-mile run @ about 10:30 or 11:00 pm), unable to overcome the sense of anomie I experience as a result of not having a satisfactory account (just a decent provisional one, of course) of what is going on in his head .... But today I'm up -- in part b/c Ann Richards was biting my nose (she should learn to feed herself; is that too much to ask?) --& engaged in a bit of email correspondence in which I described the state of my thinking about the "knowing disbelief/dualism" issue this way to a colleague: 

I'm pretty obsessed right now w/ trying to comprehend/identify/test the mechanisms that can generate in people's minds coexisting states of belief & nonbelief in evolution or climate change. The paradigm case would be Hameed's Pakistani Dr., who "disbelieves in" evolution "at home" but "believes in" it "at work."

All the explanations that people are inclined to give-- ones involving  "compartmentalization & dissonance avoidance," insincerity,  "misconstrual," "divided selves" etc-- assume that what's in need of explanation is the holding of contradictory beliefs.  I think that's a mistake -- or at least begs the question.

The question is how to individuate  the "factual proposition" (or for simplicity, just "fact") that is the object of the subject's "belief" or knowledge.  

The standard explanations of the Pakistani Dr  all assume that the "fact" is defined exclusively w/ reference to some state of affairs external to or independent of the subject, that is, the individual who "knows" it.  The referent for "human evolution" is "the natural history of human beings as described by evolutionary science."  So if someone "believes" & "disbelieves" in human evolution, they are manifesting opposed or contradictory intentional states toward the fact of human evolution.

My hunch is that the "fact" that is object of knowledge or belief must in addition be defined in relation to the contribution that knowing or believing it makes to some end or goal of the subject.

Individuals have many goals. More than one can be bundled with a fact defined w/ reference to some external state of affairs.

E.g., the Pakistani Dr, an oncologist, can "know" or "believe in" evolution in order to determine the risk his patient will develop breast cancer; he can also "know" or "believe in" it in order to participate in the sense of identity he experiences as a member of a profession that generates knowledge beneficial to humanity ("stem cell research-- brilliant!")

It turns out that the Pakistani Dr also "disbelieves in" human evolution, knows it to be false, in order to be a member of a community that subscribes to an alternative account of the natural history of human beings.  

So is there a "contradiction" in his "beliefs"?

Well, luckily for him, the Pakistani Dr's goal or end of being a member of that community is not incompatible with the goal of doing oncology or being a member of the medical profession (things could surely be otherwise--are, sadly, becoming otherwise in Europe, Hameed shows in his most recent paper).  Thus, for the Pakistani Dr there is no contradiction between his "belief in" & "disbelief in" evolution when the objects of those mental states are defined jointly by reference to "the natural history of human beings as described by evolutionary science" and by the goals that are promoted by believing/disbelieving in that.  

He keeps trying to tell us this: "yes, yes," they both relate to "Darwin's theory," he notes with exasperation, but the "evolution" he "accepts" and the "evolution" he "rejects" are "entirely different things!"  We keep staring back uncomprehendingly...

I think it is important to get straight about this pragmatic-dualist account of the Pakistani Dr's beliefs -- about how it differs from all the ones that assume "contradiction" & try to explain it; about whether it is right; about what to think of the role it plays and can play in societies that are trying to negotiate "Popper's Revenge..."

Likely someone somewhere has worked all this out already! I keep asking people for directions; they do helpfully point me down one path or another -- & I'm grateful. But no doubt as a result of my own imperfect navigation skills, I still feel very much lost ...

Thursday
Dec 25, 2014

What “bodycams” can and can’t be expected to do. . . plus coolest study of the year

I definitely favor police “bodycams” as a means of promoting  greater police accountability to the public and greater public confidence in their police.

But there’s a pretty straightforward reason why bodycams won’t prove to be a silver bullet in the effort to subdue societal conflict over excessive police force: perceptions of who did what to whom in such disputes are among the class of factual beliefs influenced by cultural cognition.

When it comes to the impact of cultural cognition, there’s nothing special about brute sense impressions.

Indeed, the foundational study of motivated reasoning—of which cultural cognition is one form—involved distortion of visual perception.  Described in Hastorf & Cantril's 1954 paper, “They Saw a Game,” the experiment showed that students from rival colleges formed opposing perceptions of disputed officiating calls featured in a film of a football game between their schools.  The students' stake in experiencing solidarity with their classmates, researchers concluded, had unconsciously influenced what they saw when viewing the film.

Whether the police can be trusted to refrain from abusing their authority turns on a host of disputed facts symbolically identified with membership in important cultural groups.  Accordingly, the stake that individuals have in experiencing and expressing solidarity with those groups can likewise be expected to unconsciously shape what they see when they view filmed depictions of violent police-citizen interactions.

People who remember the divided reactions to the Rodney King video probably have a sense of that—although, in fact, when people are experiencing this sort of cognitive dynamic, they tend to notice its impact only on those with whom they disagree and not on themselves (Robinson,  Keltner, Ward,  & Ross 1995).

But there is also experimental evidence corroborating the impact of cultural cognition on visual perceptions of behavior in police-citizen confrontations.

These include two CCP studies:  They Saw a Protest (Kahan, Hoffman,  Braman,  Evans, &  Rachlinski 2012) which involved a film of police and political protestors who were variously characterized as demonstrating against abortion rights or against the military’s “don’t ask, don’t tell policy”; and Whose Eyes Are You Going to Believe (Kahan, Hoffman & Braman 2009), which involved film shot from inside a police cruiser that deliberately rammed that of a fleeing suspect.

In both instances, the studies found that what the subjects reported observing --protestors blocking access to a building or people shamed into avoiding entry; a driver veering wildly into lanes of oncoming traffic or police "taking out" a motorist for defying their will -- depended on the subjects' cultural identities. 

But the coolest study on motivated reasoning and perceptions of police force was featured in an article that just came out,  Justice is not blind: Visual attention exaggerates effects of group identification on legal punishment (Granot, Balcetis,  Schneider, & Tyler 2014).

Indeed, GBST  is for me the run-away winner in the contest for “coolest study of the year.”

Actually, GBST reported the results of two related  studies. In one, the researchers correlated perceptions of a violent citizen-police encounter with subjects’ moral predispositions toward the police generally.

In the other, the researchers correlated the subjects’ group membership with perceptions of the behavior of two brawling private citizens, who were identified variously as belonging either to the subjects’ group or to a rival one.

The super cool part of the study was that the researchers used an eye-tracking instrument to assess the predicted influence of motivated reasoning on the perceptions of the subjects.

Collected without the subjects’ awareness, the eye-tracking data showed that subjects fixed their attention disproportionately on the actor they were motivated to see as the wrongdoer—e.g., the police officer in the case of subjects predisposed to distrust the police in study 1, or the citizen identified as an “out-group” member in study 2.

Wow! 

Before reading this study, I would have assumed the effect of cultural cognition was generated in the process of recollection: that people were fitting bits and pieces of recalled images onto narrative templates featuring police force and the like (cf. Pennington & Hastie 1991, 1992).

But GBST's findings suggest the dynamic that generates opposing perceptions in these cases commences much earlier, before the subjects even take in the visual images.  

The identity-protective impressions people form originate in a kind of biased sampling: by training their attention on the actor who they have the greatest stake in identifying as the wrongdoer, people are--without giving it a conscious thought, of course--prospecting in that portion of the visual landscape most likely to contain veins of data that fit their preconceptions.

Sadly, the benefit of gaining this remarkable insight into the workings of motivated cognition comes at the cost of intensified despair over the prospects for resolving societal conflicts over the appropriateness of the use of violent force by the police.

These disputes look like ones that could be resolved if we only had more information about the facts.  Hence the proposal that the police wear bodycams.

But this understanding has things backwards: the cultural conflict that this policy is meant to dispel will in fact shape what people see when they watch the bodycam videos.

Thus, the full value of the bodycam video policy—which I think can be considerable—will actually depend on our dispelling the antagonistic meanings that make police-citizen encounters a focal point for cultural conflict.

But in fact, that’s part of why I support the bodycam policy. 

The policy involves a significant commitment on the part of society to monitor police, and on the part of the police themselves to make their conduct amenable to monitoring.

Accepting that obligation itself conveys a signal, to the citizens who have the most reason to doubt it, that society and the police themselves are dedicated to assuring that the police will use force appropriately—to protect rather than violate the rights of the members of the community they serve.

More than this gesture will be needed, of course, to create the conditions of reciprocal cooperation and trust necessary to vanquish the distorting influence of cultural cognition on perceptions of violent confrontations between police and individual citizens.

But it’s a good start.

References

Granot, Y., Balcetis, E., Schneider, K. E., & Tyler, T. R. (2014). Justice is not blind: Visual attention exaggerates effects of group identification on legal punishment. Journal of Experimental Psychology: General, 143(6), 2196-2208. doi: 10.1037/a0037893 

Pennington, N., & Hastie, R. (1991). A Cognitive Theory of Juror Decision Making: The Story Model. Cardozo L. Rev., 13, 519-557.

Pennington, N., & Hastie, R. (1992). Explaining the Evidence: Tests of the Story Model for Juror Decision Making. Journal of Personality and Social Psychology, 62(2), 189-206.

Hastorf, A. H., & Cantril, H. (1954). They saw a game: A case study. The Journal of Abnormal and Social Psychology, 49(1), 129-134. doi: 10.1037/h0057880

Wednesday
Dec 24, 2014

"Anyone who doesn't agree must be a Marxist!" Plus "bans," "decibans," & Turing & Good on "evidentiary weight"

Maybe this (like the honeybadger) will turn out to be one of those discoveries on my part that everyone else already knows about, thereby revealing my disturbing remoteness from the zeitgeist, but the underscored sentence struck me as sooooooo hilarious I thought I should take the risk and share it, just in case it really is a hidden gem:

Actually, the paper (Good 1994) is not nearly so esoteric as it looks. Good was a brilliant writer, whose goal was to help curious people understand complicated things--as opposed to the sort of terrible writer whose goal is to be understood as brilliant by people he knows won't be able to comprehend what he is saying (which usually is nothing very interesting). 

I came across this paper while looking for accessible accounts of Turing's usage of "bans" and "decibans," a precursor of the Bayes factor, as a useful heuristic for making the concept of "weight of the evidence" tractable (in my case for a paper on the conceit that rules of evidence can be used to correct for foreseeable cognitive biases on the part of factfinders in legal proceedings).

A "ban," essentially, is a likelihood ratio of 10. That is, we would say that a piece of evidence has a weight of "1 ban" when it made some hypothesis 10x more probable (necessarily in relation to some other hypothesis) than we would have had reason to view it without that evidence.

Turing, in working on decryption at Bletchley Park in WW II, selected the ban as a unit to guide the mechanized search for solutions to codes generated by the German "Enigma" machine. Actually, Turing advocated using "decibans," which are 1/10 of a ban, to assess the probative value of potential matches between sequences of code and plain text that poured out of the "bombe" decoders, electronic proto-computers that rifled through the zillions of combinations formed by the interacting Enigma rotors, the settings of which determined the encryption "key" for Enigma-encrypted messages. 

Turing judged a deciban-- again, 1/10 of a "ban" or a likelihood ratio of 1.25:1 or 5:4 -- as pretty much the smallest difference in relative likelihood that a human being was likely to be able to perceive (Good 1979).
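
A minimal sketch of the unit conversions, in Python (the function name is mine; nothing here goes beyond the definitions above -- a ban is a factor of 10 in the likelihood ratio, and weights add on the log scale):

```python
import math

def weight_in_decibans(likelihood_ratio: float) -> float:
    """Evidentiary weight in decibans: 10 * log10 of the likelihood ratio."""
    return 10 * math.log10(likelihood_ratio)

print(weight_in_decibans(10))    # 10.0 decibans = 1 ban
print(weight_in_decibans(1.25))  # ~0.97: roughly Turing's one-deciban threshold
print(10 ** (1 / 10))            # ~1.259: the likelihood ratio worth exactly 1 deciban

# Weights add on the log scale: two independent 1-ban pieces of
# evidence combine to a likelihood ratio of 100, i.e. 20 decibans
print(weight_in_decibans(10 * 10))  # 20.0
```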

That's an empirical claim about cognition, of course.  What evidence did Turing have for it?  None, except the vast amount of experience that he and his fellow code-breakers were accumulating as they dedicated themselves to the task of productive deciphering of Enigma messages.  That certainly counts for something --but for how much? See the value of having some system of "evidentiary weight" units here?

Good -- a 24-yr old, freshly minted Cambridge mathematician -- was part of Turing's team.

After the war, he wrote prolifically on probability theory, and Bayesian statistics in particular, for decades. He had lots of informative things to say about the concept of "evidentiary weight" (Good 1985).  He died in 2009.

Turns out he was really funny too.

Or at least I'd say that this sentence is at least 10 bans' worth of evidence that he was.

References

Good, I. (1985). Weight of evidence: A brief survey. In J. M. Bernardo, M. H. DeGroot, D. V. Lindley & A. F. M. Smith (Eds.), Bayesian statistics 2: Proceedings of the Second Valencia International Meeting (pp. 249-270). North-Holland: Elsevier.

Good, I. (1994). Causal Tendency, Necessitivity and Sufficientivity: an updated review. In Patrick Suppes: Scientific Philosopher (pp. 293-315). Springer.

Good, I. J. (1979). Studies in the history of probability and statistics. XXXVII AM Turing's statistical work in World War II. Biometrika, 66(2), 393-396. 


Tuesday
Dec 23, 2014

Why expect people to *know* evolution? A question that deserves a good answer

Below is a thoughtful essay from Prajwal Kulkarni, a reflective physicist who is concerned about the societal controversy over teaching evolutionary science.  In it, he asks a question that I think deserves a good answer: why do we oblige citizens to learn evolution?

I am interested in the societal controversy over evolution, too.

As Praj notes, my main concern is with how to teach evolution effectively in a polluted science communication environment.  In particular, I am concerned that certain students—mainly secondary school ones but college ones too—will be deterred from understanding the modern synthesis by their apprehension that engaging the theory and evidence supporting it will betray their cultural identities.

Great research exists showing that it is possible to disentangle identity from knowledge in the pedagogy of evolution (by recognizing, e.g., the utter pointlessness of extracting professions of "belief" in what is being taught).  Good teachers know how to free curious students from the choice between knowing what’s known by science and being who they are as members of communities with diverse understandings of the meaning of life. 

Science educators ought to do that, I’m convinced, because in a liberal pluralistic society all individuals, regardless of their identity, are entitled to the opportunity to acquire the insights of science as a basic or primary good.  They ought to do it, too, because the state in a liberal pluralistic society is obliged not to condition access to primary goods on free citizens’ acceptance of a partisan moral or political orthodoxy.

But this account takes as a given that it is right to teach students the rudiments of evolutionary science.  Indeed, that it is right to expect them to learn it—just as it is right to expect them to learn to read or do math. 

Students who don’t learn to read or do math, or to reason well, will not only be disadvantaged but disadvantaged through the agency of the state, which certifies their low educational attainment.

Praj is asking, I think, why we make learning the rudiments of evolutionary science bear this consequence.  Why, in particular, when we know that understanding evolution, unlike being able to read and being able to do math, is bundled with identity-threatening cultural significance and, he believes, is not as essential for success in life as either of those or myriad other forms of knowledge.

I do in fact disagree—unequivocally—with Praj’s suggestion that we don’t “need” evolution, as he puts it.

That means, necessarily, that I think there is an answer to his question.

But the one I am inclined to give him is, by my own lights, simply not as good as it should be.  The problem with it, in my view, is not that it is “wrong” or missing some quality of analytical coherence or cogency.

It’s that it doesn’t give him, or at least those whom he speaks for, something they morally deserve: a satisfying account of why in fact it is justified to visit this particular obligation on them; an account that is satisfying, in particular, because it recognizes rather than evades the profound moral difficulty and complexity of the issue at hand.  

For it truly is the case, I believe, that when we oblige people to learn—oblige in the sense of making the consequence of failing to do so a stigma that indisputably and by design constrains their prospects in life—we are coercing them.  Coercing them, moreover, to do something that, even if we succeed in the form of disentanglement I favor in the teaching of evolution, will reasonably be understood by some of them (many fewer, I’m sure, if educators and others observe the disentanglement principle, but still some) as incompatible with being who they are.

So I think Praj deserves not only an answer but one of a particular sort.

An aporetic one: a response that, while unequivocal in its conclusion, openly acknowledges the ineradicable complexity of the question and resists effacing the same by resort to bluster and posturing, a style that betrays a regrettable defect of intellectual character.

I am convinced that it is indeed legitimate for the state to oblige citizens to learn evolutionary science. But being able to give an aporetic answer to Praj’s question is, in my view, a condition of the legitimacy of doing so, for only an aporetic response is capable of evincing on our part respect for the freedom and reason of the individual whom we are forcing to bear this restriction on liberty. 

What's the answer, then, to Praj's question? We should all be just as impelled as Praj to know what it is.

 --Dan M. Kahan

Why should everyone learn evolution?

Prajwal Kulkarni

Hello 14 billion readers of Cultural Cognition. I'm honored to be guest-blogging. This site is a big leap from my own blog, which has a paltry 7 billion readers. 

Today I'd like to expand on Dan's post from a few weeks ago: "What I believe about teaching "belief in" evolution and climate change." This passage in particular struck me: 

It makes me sad to think that some curious student might not get the benefit of knowing what is known to science about the natural history of our (and other) species because his or her teacher made the understandable mistake of tying that benefit to a gesture the only meaning of which for that student in that setting would be a renunciation of his or her identity.  

It makes me angry to think that some curious person might be denied the benefit of knowing what's known by science precisely because an "educator" or "science communicator" who does recognize that affirmation of "belief in" evolution signifies identity & not knowledge nevertheless feels that he or she is entitled to extract this gesture of self-denigration as an appropriate fee for assisting someone else to learn. 

Such a stance is itself a form of sectarianism that is both illiberal and inimical to dissemination of scientific knowledge.

 I strongly agree with Dan on these points. But I'm going to take his last sentence one step further. Not only is it illiberal to insist students profess "belief in" evolution, it may be illiberal to force them to learn it in the first place. It's not obvious--to me at least--why learning evolution is mandatory. To see why, it might help to step back and look at science education more broadly. 

Imagine a world where the theory of evolution was not the lightning rod that it is. Even in that world, we could ask some general questions about science education and public science literacy: Who needs science education? What does it mean to be scientifically literate? Are there different definitions for scientists and non-scientists? 

While I’m not an expert, I have read a fair amount of the research on public understanding of science. Much of what I've read divides children into two groups: future scientists and engineers, and everyone else. Obviously these are not hard boundaries, and academics disagree over whether and where to draw the lines. But it’s widely agreed that these groups are distinct and that it’s tricky to balance both of their needs. Science literacy has a different meaning for physicists than for those in sales or marketing. 

So given that the overwhelming majority of students will not pursue careers in science and engineering, why should everyone be forced to learn natural selection if they’ll never use it after high school? Before answering this question, it might be helpful to first reflect on what we want non-scientists to do with their scientific knowledge. What purposes does public science literacy serve?

You can spend a lifetime reading the scholarship on just this one question. My personal favorite is a 1975 article by astrophysicist Benjamin Shen. Shen outlines three categories of science literacy: practical, civic, and cultural. Science in the first category helps people in their daily lives, and includes topics like nutrition, health, and agriculture. The second would help people make informed civic decisions, while the third is in the same spirit as Shakespeare or Greek mythology.

To Shen’s categories I’ll add my own three-legged stool. Science education should leave non-scientists with some content knowledge (i.e. scientific facts), some understanding of scientific methods, and some sort of appreciation for and engagement with science. But I’m not sure specifically what content, how much process, and how to best cultivate appreciation. As far as I know, the experts aren’t sure either.

We’re now ready to return to evolution. Let’s adopt Shen’s framework, and remember that we’re focusing on non-scientists. I’ll repeat my question: why teach the theory of evolution in the first place? It has very little, if any, practical value. (Quick: when’s the last time you used the theory of evolution to help you decide anything?) It has almost no relevance to public policy. (Quick: when’s the last time the press covered the theory of evolution outside of creationism or intelligent design?) We’re left with the cultural value of evolution, admittedly a powerful justification.

Education is important for more than utilitarian reasons like economic growth. It helps promote civic virtues, patriotism, a sense of national identity, and a common culture (see Chapter 8 here). Science education can align with these goals.

But there are limits to how far we can push this argument, and cultural cohesion does not automatically trump individual rights. The landmark West Virginia v. Barnette, for example, declared that children cannot be forced to salute the flag if doing so violates their--or their parents'--conscience. What if learning or believing evolution violates some parents’ conscience? Is there really a compelling state interest that everyone must learn it? If we grant exemptions to the Pledge of Allegiance, then why can't we grant exemptions to certain types of knowledge?

I would think educators and scientists would be open to different ways of teaching biology, especially since cultivating “scientific thinking” is often viewed as much more important than any specific content. It’s almost a truism: facts are less important than understanding the process of science and its ways of thinking. So if it’s scientific thinking we’re really after, why not spend an entire year studying human anatomy? Or maybe swap in a unit on bioengineering, or a more in-depth look at organic chemistry, in place of evolution. Unless the theory of evolution, and nothing else in science, teaches people to “think scientifically,” surely there are many ways to get there. A survey course in biology (what I and most people I know had) is not the only possible approach.

My goal in this post wasn't to convince you that evolution can safely be dropped from the science curriculum. I do hope, however, I've convinced you that there can be legitimate disagreement on whether it should be mandatory for all students. I do hope I've convinced you that there are tradeoffs--between freedom of conscience and public education, between science education for future scientists and non-scientists, and among different educational and pedagogical goals. I do hope I've convinced you that maybe there's much more to biology education than the theory of evolution.

References

Shen, Benjamin. Science literacy and the public understanding of science. Communication of Scientific Information, 44-52 (1975).

Wednesday
Dec172014

We need a CRT 2.0! And IRT should be used to develop it

I really really really like the Cognitive Reflection Test--or "CRT" (Frederick 2005).

The CRT is a compact three-item assessment of the disposition to rely on conscious, effortful, "System 2" reasoning as opposed to rapid, heuristic-driven "System 1" reasoning.  An objective or performance-based measure, CRT has been shown to be vastly superior to self-report measures like "need for cognition" ("agree or disagree-- 'thinking is not my idea of fun'; 'The notion of thinking abstractly is appealing to me' . . .") in predicting vulnerability to the various biases that reflect over-reliance on System 1 information processing (Toplak, West & Stanovich 2011).

As far as I’m concerned, Shane Frederick deserves a Nobel Prize in economics for inventing this measure every bit as much as Daniel Kahneman deserved his for systematizing knowledge of the sorts of reasoning deficits that CRT predicts.

Nevertheless, CRT is just not as useful for the study of cognition as it ought to be. 

The problem is not that the correct answers to its three items are too likely to be known at this point by M Turk workers—whose scores exceed those of MIT undergraduates (Chandler, Mueller & Paolacci 2014).

[Figure: the CRT score distribution when the test is administered to normal people--i.e., not M Turk workers, Ivy League college students, or people who fill out surveys at on-line sites that solicit study subjects who want to learn their CRT scores, etc.]

Rather, the problem is that CRT is just too darn hard when used to study legitimate study subjects.

The mean score when it is administered to a general population sample is about 0.65 correct responses (Kahan 2013; Weller, Dieckmann, Tusler, Mertz, Burns & Peters 2012; Campitelli & Labollita, 2010).

The median score is 0.

Accordingly, if we want to study how individual differences in System 1 vs. System 2 reasoning styles  interact with other dynamics—like motivated reasoning—or respond to interventions designed to improve engagement with technical information, then for half the population CRT necessarily gives us zero information.

Unless one makes the exceedingly implausible assumption that there's no variance to measure among this huge swath of people, this is a severe limitation on the value of the measure.

I've addressed this previously on this blog but I had occasion to underscore and elaborate on this point recently in correspondence with a friend who does outstanding work in the study of cognition and who (with good reason) is a big fan of CRT.

Here are some of the points I made:

I don’t doubt that CRT measures the disposition to use System 2 information processing more faithfully than, say, Numeracy [a scale that assesses proficiency in quantitative reasoning]. 

But the fact remains that Numeracy outperforms CRT in predicting exactly what CRT is supposed to predict--namely, vulnerability to heuristic biases (Weller et al. 2012; Liberali et al. 2011). Numeracy is getting a bigger piece of the latent disposition that CRT measures--and that's strong evidence of the need for a better CRT.
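To make the logic concrete, here is a minimal simulation sketch in Python -- with invented items and an invented criterion, not the data from the studies cited above -- of why a scale whose items span the difficulty range will outpredict a coarser, all-hard scale tapping the very same latent disposition:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
theta = rng.normal(size=n)                     # latent System 2 disposition

# "CRT": three items that are all hard, so scores pile up at 0
crt = (theta[:, None] + rng.normal(size=(n, 3)) > 1.0).sum(axis=1)

# "Numeracy": more items, spread across the difficulty range
difficulties = np.linspace(-1.5, 1.5, 8)
numeracy = (theta[:, None] + rng.normal(size=(n, 8)) > difficulties).sum(axis=1)

# Criterion: resisting some heuristic bias, a noisy function of theta
resist = (rng.random(n) < 1 / (1 + np.exp(-theta))).astype(int)

# AUC: chance a random bias-resister outscores a random non-resister
print("CRT AUC:     ", roc_auc_score(resist, crt).round(3))
print("Numeracy AUC:", roc_auc_score(resist, numeracy).round(3))

On data generated this way, the broader scale reliably posts the higher AUC even though both scales are driven by the identical underlying trait -- which is all "getting a bigger piece of the latent disposition" means.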

Or consider the Ordinary Science Intelligence assessment, “OSI_2.0,” the most recent version of a scale I've been working on to measure a disposition to recognize and give appropriate effect to scientific information relevant to ordinary, everyday decisions (Kahan 2014).  

Cognitive reflection is among the combination of reasoning proficiencies that this (unidimensional) disposition comprises.

But for sure, I didn't construct OSI_2.0 to be "CRT_2.0.”  I created it to help me & others do a better job in assessing the relationship between science comprehension and dynamics that constrain the effectiveness of public science communication.

With Item Response Theory, one can assess scale reliability continuously along the range of the underlying latent disposition (DeMars 2010).  Doing so for OSI_2.0, it can be seen that what CRT contributes to OSI_2.0’s measurement precision is concentrated at the very upper end of the range of the "ordinary science intelligence" aptitude:

[Figure: item information curves for OSI_2.0, showing the CRT items' contribution to measurement precision concentrated at the upper end of the scale]
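For those who want the machinery behind such figures: under the two-parameter logistic ("2PL") IRT model, an item's Fisher information is a^2 * P(theta) * (1 - P(theta)), which peaks where ability equals the item's difficulty. A minimal sketch, with illustrative parameter values rather than the actual OSI_2.0 or CRT estimates:

import numpy as np

def item_information(theta, a, b):
    # 2PL: P(theta) = 1/(1 + exp(-a(theta - b))); I(theta) = a^2 * P * (1 - P)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)
hard_item = item_information(theta, a=2.0, b=1.5)   # CRT-like: very hard
easy_item = item_information(theta, a=2.0, b=0.0)   # same discrimination, easier

# Information peaks at theta = b, so an all-hard battery sharpens
# measurement only at the top of the scale:
print("hard item peaks at theta =", theta[hard_item.argmax()])
print("easy item peaks at theta =", theta[easy_item.argmax()])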

This feature of CRT can be shown to make CRT less effective at what it is supposed to do—viz., predict individual differences in the disposition to resist over-reliance on heuristic processing.

The covariance problem is considered diagnostic of that sort of disposition (Stanovich 2009, 2011). Those vulnerable to over-reliance on heuristic processing tend to make snap judgments based on the relative magnitudes of the numbers in “cell A” and either “cell B” or “cell C” in a 2x2 contingency table or equivalent. Because they don't go to the trouble of comparing the ratio of A to B with the ratio of C to D, people draw faulty inferences about the significance of the information presented (Arkes & Harkness 1983).
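Here's a worked example, with made-up cell counts rather than the actual item, showing how the snap judgment and the correct one come apart:

# Made-up cell counts for a 2x2 contingency problem (illustrative only)
A, B = 200, 100   # e.g., treated & improved, treated & not improved
C, D = 50, 25     # untreated & improved, untreated & not improved

# System 1 snap judgment: "cell A is the biggest, so the treatment works"
# System 2 judgment: compare the ratio of A to B with the ratio of C to D
print(A / B)      # 2.0
print(C / D)      # 2.0 -- identical ratios, so no contingency at all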

As it should, CRT predicts resistance to this bias (Toplak, West & Stanovich 2011).

But not as well as OSI_2.0.

Consider:

These are scatter plots of performance on the covariance problem (N = 400 or so) in relation to OSI_2.0 & CRT, respectively, w/ lowess regression plots superimposed.

The crook in the profile of the OSI_2.0 plot, compared to the flat, boring profile of CRT, shows that the former has superior discrimination (that is, it identifies in a more fine-grained way how differences in reasoning ability affect the probability of getting the right answer).

Relatedly, the interspersing of the color-coded observations on the OSI_2.0 scatter plot shows how CRT is dividing people into groups that are both under- & over-inclusive w/r/t the proficiencies that OSI_2.0 is sorting out more reliably.

Or more concretely still, if I had only CRT, then I'd predict that  there is only a 40% probability that someone who is +1 on OSI_2.0-- just short of "1" on CRT -- would get the covariance problem correct, when in fact the probability such a person will get the right answer is about  60%. 

Similarly, if I used CRT to predict how someone at +1.5 on OSI_2.0 is likely to do on the problem, I'd predict about a 50% probability of him or her selecting the correct response -- when in fact the probability of a correct response for that person is closer to 75%.

Essentially, I'm going to be as satisfied with CRT as I am with OSI_2.0 only if my interest is to predict performance of those who score either 2 or 3 on CRT -- the 90th percentile or above in a general population sample. 

But as can be seen from the OSI_2.0 scatter plot, it’s simply not the case that there’s no variance in people’s vulnerability to this particular heuristic bias in the rest of the population.  A measure that can't enable examination of how so substantial a fraction of the population thinks should really disappoint cognitive psychologists, assuming their goal is to study critical reasoning in human beings generally.

Now, it's absolutely no surprise that OSI_2.0 dominates CRT in this regard: the CRT items are all members of the OSI_2.0 scale, which comprises 18 items the covariance structure of which is consistent with measurement of a unidimensional latent disposition.  So of course it is going to be a more discerning measure of whatever it is CRT is itself measuring -- even if OSI_2.0 isn't faithfully measuring only that, as CRT presumably is. 

But that’s the point: we need a “better” CRT—one that is as tightly focused as the current version on the construct the scale is supposed to measure but that gets at least as big a piece of the underlying disposition as OSI_2.0, Numeracy or other scales that outperform CRT in predicting resistance to heuristic biases.

For that, "CRT 2.0" is going to need not only more items but items that add information to the scale in the middle and lower levels of the disposition that CRT is assessing.  IRT is much more suited for identifying such items than are the methods that those working on CRT scale development now seem to be employing.

I could certainly understand why a researcher might not want a scale with as many as 18 items. 

But again IRT can help here: use it to develop a longer, comprehensive battery of such items, ones that cover a large portion of the range of the relevant disposition.  Then administer an "adaptive testing" battery that uses strategically selected subsets of items to zero in on any individual test-taker’s location on the range of the measured “cognitive reflection” disposition (DeMars 2010).  Presumably, no one would need to answer more than a half-dozen in order to enable a very precise measure of his or her proficiency -- assuming one has a good set of items in the adaptive testing battery.
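A toy sketch of the adaptive logic, with made-up item parameters: after each response, update the ability estimate, then administer whichever unasked item is most informative at that estimate.

import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-3, 3, 241)                 # theta grid
prior = np.exp(-grid ** 2 / 2)                 # N(0,1) prior, unnormalized

a = np.full(12, 1.8)                           # item discriminations (made up)
b = np.linspace(-2.0, 2.5, 12)                 # difficulties spanning the range

def p_correct(theta, j):
    return 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))

true_theta = 0.4                               # simulated test-taker
post, asked = prior.copy(), []

for _ in range(6):                             # a half-dozen items
    eap = (grid * post).sum() / post.sum()     # current ability estimate
    p = p_correct(eap, np.arange(12))
    info = a ** 2 * p * (1 - p)                # 2PL item information at eap
    info[asked] = -np.inf                      # never reuse an item
    j = int(info.argmax())                     # most informative unasked item
    asked.append(j)
    correct = rng.random() < p_correct(true_theta, j)
    post *= p_correct(grid, j) if correct else 1 - p_correct(grid, j)

print("items administered:", asked)
print("final estimate:", round((grid * post).sum() / post.sum(), 2))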

Anyway, I just think it is obvious that researchers here can and should do better--and not just b/c MTurk workers have all learned at this point that the ball costs 5 cents!

References

Arkes, H.R. & Harkness, A.R. Estimates of Contingency Between Two Dichotomous Variables. J. Experimental Psychol. 112, 117-135 (1983).

Campitelli, G. & Gerrans, P. Does the cognitive reflection test measure cognitive reflection? A mathematical modeling approach. Memory & Cognition, 1-14 (2013).

Chandler, J., Mueller, P. & Paolacci, G. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior research methods 46, 112-130 (2014).

DeMars, C. Item response theory (Oxford University Press, Oxford ; New York, 2010).

Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013). 

Kahan, D.M. Ordinary Science Intelligence: A Science Comprehension Measure for Use in the Study of Science Communication, with Notes on "Belief in" Evolution and Climate Change. CCP Working Paper No. 112 (2014).

Liberali, J.M., Reyna, V.F., Furlan, S., Stein, L.M. & Pardo, S.T. Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment. Journal of Behavioral Decision Making (2011).

Stanovich, K.E. Rationality and the reflective mind (Oxford University Press, New York, 2011).

Stanovich, K.E. What intelligence tests miss: the psychology of rational thought (Yale University Press, New Haven, 2009).

Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition 39, 1275-1289 (2011).
 

Weller, J.A., Dieckmann, N.F., Tusler, M., Mertz, C., Burns, W.J. & Peters, E. Development and testing of an abbreviated numeracy scale: A Rasch analysis approach. Journal of Behavioral Decision Making (2012).

Saturday
Dec132014

Weekend update: More on the wisdom of SE Fla's *political science* of climate change

From correspondence with friends . . .

As you know, I think the science-communication brilliance of the SE Fla Regional Climate Compact is its recognition that constructive public engagement w/ climate change doesn't depend on identifying "magic words" or "frames," or on finding charismatic "conservative messengers" (Hank Paulson? seriously?).

Rather it depends on creating and protecting a conversation that enables diverse citizens to apply their reason to protecting their shared way of life as opposed to a conversation that forces them to use their reason to protect the status of their particular cultural group & their own personal standing within it....

Subject of course as always to revising my understanding as I learn more, I'm convinced that the mission for serious empirical researchers is now to help all those who don't yet get the "right conversation" principle to understand the importance of it; & to help those who already do get it to have all the information they need to create & protect that conversation as effectively as they can....

Here is something that strikes me as produced by some smart folks in the latter category. I've had some occasion to observe, w/ both casual & structured empirics, what is going on in Australia. I think Australia is as close to the US as any other country (I love watching their tv shows)! And not surprisingly, it has its own SE Floridas.


Wednesday
Dec102014

Project disentanglement ... a fragment

from something I'm working on . . . 

Project Disentanglement

I. CCP is currently involved in a series of interlocking initiatives. Spanning a variety of  settings, these initiatives are animated by a common objective: the extrication of science from cultural conflict by use of the science-communication disentanglement principle.

II. Cultural conflict over what is known by science is not the norm. It occurs only when risks and like facts become entangled in antagonistic cultural meanings, which effectively transform positions on them into badges of membership in opposing groups. In such circumstances, the interest individuals have in protecting their connections to others with whom they share important social ties can exceed the personal stake they have in forming beliefs consistent with the best available evidence. It can thus become individually rational—albeit collectively disastrous—for people to use their reason to maintain beliefs consistent with the ones predominant in their cultural groups (Kahan 2012). Indeed, polarization rooted in this dynamic—known as identity-protective cognition—is most intense among individuals  highest in science literacy (Kahan 2013; Kahan, Peters et al. 2012).

III. The only means to neutralize identity-protective cognition is to dispel the conflict culturally diverse individuals experience between recognizing valid science and forming beliefs that express their defining commitments (Kahan in press). The disentanglement principle describes the fundamental imperative of effective science communication under such circumstances: to protect reasoning individuals from having to choose between knowing what is known by science and being who they are.

IV. “Project disentanglement” is dedicated to enabling science communication professionals to implement the disentanglement principle. The Project contemplates two sets of complementary practical research initiatives.

V. The climate-science education initiative will focus on teaching of climate science at the secondary-school level. The disentanglement principle in fact derives from classic studies on teaching evolution to high school students (Lawson & Worsnop 1992). Such research showed it was possible—indeed, indispensable—to divorce the opportunity to learn evolutionary science from the psychological experience of being forced to “assent” to propositions inimical to religious students’ defining commitments.

The climate-science education initiative will adapt these techniques to secondary-school climate-science education. The same tension between recognizing what’s known to science and maintaining fidelity to defining cultural commitments is now widely recognized as threatening education in this critical area of science, too. Working with education researchers, CCP is devising project-based learning materials, on the theory that rooting instruction in familiar local issues is distinctively suited to disentangling climate-science  knowledge from the antagonistic meanings that pervade the climate debate nationally.

VI. While intrinsically valuable, the climate-science education initiative is also expected to generate insights of value for research on the disentanglement principle in local political decisionmaking. The evidence-based science communication initiative is committed to furnishing science-communication support services to local governments pursuing adoption of environmental and conservation policies (Kahan 2014).

Communication strategies featuring the disentanglement principle have been the central focus of  the Southeast Florida Science Communication Initiative, a collaborative partnership between CCP and the Southeast Florida Regional Climate Compact. The four member Counties (Broward, Miami-Dade, Monroe, and Palm Beach) have generated widespread public support for a multifaceted Climate Action Plan despite the high degree of cultural polarization that characterizes public opinion on climate change in the region, just as it does in the rest of the U.S. (Kahan in press).

Just as we anticipate that insights gleaned from the climate-education initiative can be used to advance the aims of programs like the Southeast Florida Evidence-based Science Communication Initiative, so we believe that research in the setting of local decisionmaking can support development of effective climate-science education in secondary schools. Indeed, appropriate project-based learning programs in area high schools can be seamlessly integrated into larger science-communication packages used to support public engagement with valid science in local decisionmaking. Positive impressions of the effectiveness of project-based learning can materially contribute to the disentanglement of scientific knowledge and identity in the community at large as its diverse members deliberate on how to meet the environmental challenges they face.

VII. The toll that a polluted science communication environment exacts on human reason is in fact one of the principal impediments to the use of science to protect our natural environment.  But we can use reason to protect reason. Research on the science-communication disentanglement principle is critical to the development of a new ethos of science communication environment protection.

References

Kahan, D. (2012). Why we are poles apart on climate change. Nature, 488, 255.

Kahan, D. M. (2013). Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making, 8, 407-424.

Kahan, D. M. (2014). Making Climate-Science Communication Evidence-Based—All the Way Down. In M. Boykoff & D. Crow (Eds.), Culture, Politics and Climate Change. New York: Routledge Press.

Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735.

Kahan, D.M. (in press). Climate science communication and the Measurement Problem. Advances in Pol. Psych.

Lawson, A. E., & Worsnop, W. A. (1992). Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching, 29(2), 143-166.

 

Friday
Dec052014

CCP's "Evidence-based Science Communication Initiative" (EBSCI)

In recent years, the field of science communication has been marked by both progress and frustration.  On one hand, basic research has yielded a wealth of new insights into the processes by which scientific information is acquired and interpreted by the public.  On the other, increasingly elaborate and costly initiatives to communicate scientific information have spectacularly failed to dispel cultural conflict over climate change and other disputed science issues.

The reason the science of science communication has yet to generate real-world benefits, we believe, is that it has yet to genuinely set foot in the real world.


Tuesday
Dec022014

On (confused, confusing) "belief-fact" distinction -- a fragment

From revised version of The Measurement Problem: 

 As used in this paper, “believe in” just means to “accept as true.” When I use the phrase to characterize a survey item relating to evolution or global warming, “belief in” conveys that the item certifies a respondent’s simple acceptance of, or assent to, the factual status of that process without assessing his or her comprehension of the evidence for, or mechanisms behind, it. I do not use “belief in” to align myself with those who think they are making an important point when they proclaim that evolution and climate change are not “mere” objects of “belief” but rather “scientifically established facts.” While perhaps a fitting retort to the schoolyard brand of relativism that attempts to evade engaging evidence by characterizing an empirical assertion as “just” the “belief” or “opinion” of its proponent,  the “fact”–“belief” distinction breeds only confusion when introduced into grownup discussion. Science neither dispenses with “belief” nor distinguishes “facts” from the considered beliefs of scientists. Rather, science treats as facts those propositions worthy of being believed on the basis of evidence that meets science’s distinctive criteria of validity. From science’s point of view, moreover, it is well understood that what today is appropriately regarded as a “fact” might not be regarded as such tomorrow: people who use science’s way of knowing continuously revise their current beliefs about how the universe works to reflect the accumulation of new, valid evidence (Popper 1959).


Monday
Dec012014

Distrust of "trust in science" measures--crisis solved? 

As interesting things come in over the transom, I put them in a pile--right next to the transom--marked "to read." 

At this point, the pile is taller than the transom itself! I'm not joking!

And just this second I have descended the ladder after placing this newly arrived item on top of the pile:

Trust in science and scientists can greatly influence consideration of scientific developments and activities. Yet, trust is a nebulous construct based on emotions, knowledge, beliefs, and relationships. As we explored the literature regarding trust in science and scientists we discovered that no instruments were available to assess the construct, and therefore, we developed one. Using a process of data collection from science faculty members and undergraduate students, field testing, expert feedback, and an iterative process of design, we developed, validated, and established the reliability of the Trust in Science and Scientist Inventory. Our 21-item instrument has a reliability of Cronbach's alpha of .86, and we have successfully field-tested it with a range of undergraduate college students. We discuss implications and possible applications of the instrument, and include it in the appendix.
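(In case the "Cronbach's alpha of .86" is opaque: alpha is just k/(k-1) multiplied by one minus the ratio of the summed item variances to the variance of the total score. A minimal sketch on fake data -- nothing below comes from the actual instrument:)

import numpy as np

rng = np.random.default_rng(2)
common = rng.normal(size=(300, 1))               # shared "trust" factor
items = common + rng.normal(size=(300, 21))      # 21 fake Likert-ish items

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(round(alpha, 2))                           # high alpha for this fake data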

At the present rate, I should be able to read it by April 22, 2019.

But I'm sort of eager to know what it says sooner than that.  That's because of all the recent discussion arising from recent posts (e.g., here, here, here, & here) on "trust in science"/"confidence in science"/"anti-science"/"we all love science!" measures.

The upshot of all that discussion seems, in my mind at least, to be this: there just isn't any validated "trust in science/scientists" item or scale of the sort that one could use to support reliable inferences in a correlational study.

[Image: "Us vs. them: we all love science!!!!!!"]

There are, on the one hand, a bunch of "general science affect measures" ("on a scale of 1 to a billion, how 'cool' is science?"; "on a scale of 10^45 to 10^97, how much do you love science?") that all seem to show that everyone, including "anti-science" conservatives and religious fundamentalists who deny the earth goes around the sun, reveres science.

On the other, there are "domain-specific science affect measures" that ask "how much do you trust scientists who say things like global warming is happening/gm foods are yummy/what's good for 'GM' [i.e., General Motors] is good for Amerika" etc. These find, not surprisingly, that the answer depends on what one's attitude is toward global warming/gm foods, industry etc. That's because domain-specific trust items are measuring the same thing as items that measure attitudes toward (including "risk perceptions of") the thing in question: namely, some general affective, yay-or-boo orientation toward whatever it is (global warming, gm foods, industry, etc).

[Image: proposed survey item--"This figure shows (A) 'we all love science,' (B) 'dramatic decline' in conservative 'trust' in science, or (C) researchers need better 'trust in science' measures."]

People who are passionate about the hypothesis that "distrust in science" explains controversy over science-informed policy issues such as, oh, global warming, distrust the "general affect" measures; they are "missing" some more subtle form of ambivalence, they conjecture, that people won't admit to or necessarily even be able to detect through self-inspection.

A reasonable reaction, certainly.

But there's a problem if those same people then whip out data using the "domain-specific affect" measures to support their view.  Because in that case, the evidence that "distrust in science or scientists" causes one or another science-informed policy controversy among "Hierarchs" & "egalitarians," "Republicans" & "Democrats," "born again Christians" & "atheists" -- persons who all swear they love science-- will consist of a correlation between two measures of one and the same thing.

That's called a tautology, which can be useful for some things but not for drawing causal inferences.

So is there any way out of this dilemma?

Any way to solve this crisis of confidence/erosion of trust in measures of "distrust" in science/scientists?

Maybe this study is the solution!

But like I said, it'll be years before I can figure that out on my own (if I ever do; it's only a matter of time before the pile of materials sitting next to the transom topples over and crushes me . . . ).

Can any of you, the 14 billion readers of this blog, help me & all the others too busy to get to this interesting-looking study right now by taking a look & filing a report in the comments?

Thanks, fellow citizens!

Friday
Nov282014

Group conflict and risk perceptions: two accounts

This is just the first post in a series to address a very small question that I’m sure we can quickly dispose of.

But here’s the question:

I’m sure the vast majority of you need no further explanation.  But for newbies, this is a “tweet” from “Fearless Dave” Ropeik, the public risk perception expert who correctly believes it is irrational to worry about anything.  Likely you all remember the discussion we recently had about how Fearless Dave had his kids go over & play with the next-door neighbors’ children when they had Ebola because he figured it was much better for his kids to get the disease when they were young than when they were grown-ups.  Of course—this is the perfect System 2 rationality we all aspire to!

But anyway, what he’s asking is—why do cultural affinities (like being an “egalitarian communitarian” as opposed to a “hierarch individualist”) make such a big difference in perceptions of the risk of climate change, or owning a handgun, or nuclear energy?

Fearless Dave doesn’t mean why as in “what are the mechanisms that generate such big disparities in the proportion of people of one type who believe that human beings are heating up the climate & the proportion of another type who believe that?”; he’s quite familiar with (and a very lucid expositor and insightful interpreter of) all manner of work on risk perception, including the research that shows how people of opposing identities conform all manner of information—from their interpretation of data to their assessments of arguments to their perception of the expertise of scientists to what they observe with their own eyes—to the position that predominates in their group.

What he wants to know is why these cognitive mechanisms are connected to group identities.  Why are people so impelled to fit their views to their groups'? And why do the groups disagree so intently?

Is there, Fearless Dave wonders, some sort of genetic hard wiring having to do with the evolutionary advantages, say, that “Democratic” or “nonreligious” cavepeople & “Republican” “religious” cavepeople got from forming opposing estimates of the risk of being eaten by a sabre-tooth tiger on the savannah--and then going to war w/ each other over their disagreement?

Really good question.

I don’t know.

But I and a few other twitterers offered some conjectures:

Now probably this exchange needs no explanation either.

But basically, Jay Van Bavel and I are disagreeing about the reason cultural identities generate conflicting perceptions of risk and like facts.

Or maybe we aren’t.  It’s hard to say.

While Twitter is obviously the venue most suited for high-quality scholarly interaction, I thought I’d move the site of the exchange over to the CCP Blog--so that you, the 12 billion regular readers of this blog (for some reason 2 billion people unsubscribed after my last post!),  could participate in it too.

Just to get the ball of reasoned discussion rolling, I’m going to sketch out two competing answers to Fearless Dave’s question: the “Tribal Science Epistemologies Thesis” (TSET) and the “Polluted Scicomm Environment Thesis” (PSET). The answers aren't "complete" even on their own terms, but they convey the basics of the positions they stand for and give you a sense of the attitudes behind them too.

TSET. People are by nature factional. They use in-group/out-group distinctions to organize all manner of social experience—familial, residential, educational, occupational, political, recreational (“f***ing Bucky Dent!”).  The ubiquity of this impulse implies the reproductive advantage it must have conferred in our formative prehistory. Its permanence is testified to by the unbroken narrative of violent sectarianism our recorded history comprises.

The mechanisms of cultural cognition reflect our tribal heritage. The apprehension of danger in behavior that deviates from a group’s norms fortifies a group’s cohesion. Imputing danger to behavior characteristic of a competing group’s norms helps to stigmatize that group’s members and thus lower their status.  Cultural cognition thus reliably converts the fears and anxieties of a group’s members into the energy that fuels that group’s drive to dominate its rivals.

In a democratic political order, these dynamics will predictably generate cultural polarization. Opposing positions on societal risks (climate change, gun ownership, badger infestation) supply conspicuous markers of group differentiation. Democratically enacted policies endorsing or rejecting those positions supply evocative gestures for marking the relative status of the groups that hold them.

Nothing has really changed.  Nothing ever will. 

PSET. Cultural conflict over risk and related facts is not normal. It is a pathology peculiar to the pluralistic system of knowledge certification that characterizes a liberal democratic society. 

Individuals acquire their understanding of what is known to science primarily through their everyday interactions with others who share their basic outlooks. Those are the people they spend most of their time with, and the ones whose professions of expertise they can most reliably evaluate. Because all self-sustaining cultural groups  include highly informed members and intact processes for transmitting what they know, this admittedly insular process nevertheless tends to generate rapid societal convergence on the best available evidence.  

But not always. The sheer number of diverse groups that inhabit a pluralistic liberal society, combined with the tremendous volume of scientific knowledge such a society is distinctively suited to generating, makes occasional states of disagreement inevitable.

Even these rare instances of nonconvergence are likely to be fleeting.

But if by some combination of accident, misadventure, and strategic behavior, opposing perceptions of risk become entangled in antagonistic cultural meanings, dissensus is likely to endure and feed on itself. The material advantage any individual acquires by maintaining her standing within her cultural group tends to exceed the advantage of holding personal beliefs in line with the best evidence on societal risks. As a result, when people come to regard  positions on risk as badges of membership in one or another group, they will predictably use their reason to persist in beliefs that express their cultural identities.

This identity-protective variant of cultural cognition is the signature of a polluted science communication environment.  The entanglement of risks in antagonistic cultural meanings disables human reason and deprives the citizens of the Liberal Republic of Science of their political regime’s signature benefits: historically unprecedented civil tranquility and a stock of collective knowledge bountiful enough to secure their well-being from all manner of threat, natural and man-made.

But we can use our reason and our freedom to overcome this threat to our reason and our freedom.  Dispelling the toxin of antagonistic cultural meanings from our science communication environment is the aim of the science of science communication—a “new political science for a world itself quite new.”

So? Which is closer to the truth—TSET or PSET? 

What are the key points of disagreement between them? What might we already know that helps us to resolve these disagreements, and what sorts of evidence might we gather to become even more confident?

What are the alternatives to both TSET and PSET? Why might we think they are closer to the truth? How could we pursue that possibility through observation, measurement, and inference?

And what does each of the candidate accounts of why “group affiliation” has such a profound impact on our perception of risk and like facts imply about the prospects for overcoming the barrier that cultural polarization poses to making effective use of scientific knowledge to promote our ends, individual and collective?

BTW, why do I say "closer to the truth" rather than "true"? Because obviously neither TSET nor PSET is true, nor is any other useful answer anyone will ever be able to give to Fearless Dave's question. The question isn't worth responding to unless the person asking means, "what's a good-enough model of what's going on--one that gives me more traction than the alternatives in explaining, predicting, and managing things?"

So ... what's the answer to Fearless Dave's question? Do TSET & PSET help to formulate one?

Thursday
Nov272014

Liberals' trust in Supreme Court plummets! Less than 25% of them would agree to have Steve Breyer housesit for them when they go on vacation!

Actually, I think all we can say is that neither liberals nor conservatives hold the U.S. Supreme Court in as high regard as they both hold scientists.

But the Court shouldn't feel bad.  Nearly everyone is less respected than scientists.


Tuesday
Nov252014

Don't make free, reasoning people choose between learning posterior predictive model checking & *being who they are*!

Holy smokes--  former Freud expert & current stats legend Andrew Gelman is using the "disentanglement principle" to teach Bayesian statistics to frequentists! I'm not kidding!

For crying out loud, if he can pull that off, then surely science communicators can overcome cultural polarization on climate change.

 

Tuesday
Nov252014

"Conservatives lose faith in science over last 40 years"--where do you see *that* in the data? 

Note: Special bonus! Gordon Gauchat, the author of PSPS, wrote a reflective response that I've posted in a "followup" below.  I can't think or write as fast as he does (in fact, I'm sort of freaked out by his speed & coherence), but after I think for a bit, I'll likely add something, too, since it is the case, as he says, that we "largely agree" & I think it might be useful for me to be even clearer about that, & also to engage some of the other really good & interesting points he makes.

 This is a longish post, & I apologize for that to this blog’s 14 billion regular readers.  Honestly, I know you are all very busy.

To make it a little easier, I’m willing to start with a really compact summary.

But I’ll do that only if you promise to read the whole thing. Deal?

Okay, then.

This post examines Gordon Gauchat’s Politicization of Science in the Public Sphere, Am. Sociological Rev., 77, 167-187 (2012).

PSPS is widely cited to support the proposition that controversy over climate change reflects the “increasingly skeptical and distrustful” attitude of “conservative” members of the general public (Lewandowsky et al. 2013).

[Image: "Is that supposed to be an elephant? Looks more like a snuffleupagus--everyone knows they don't believe in science (it's reciprocal)."]

This contention merits empirical investigation, certainly.

But the data analyzed in PSPS, an admittedly interesting study!, don’t even remotely support it.

PSPS’s analysis rests entirely on variance in one response level for a single part of a multiple-part survey item.  The reported changes in the proportion of survey takers who selected that particular response level  for that particular part of the single item in question cannot be understood to measure “trust” in science generally or in any group of “scientists.”

Undeniably, indisputably cannot.

Actually—what am I saying? 

Sure, go ahead and treat nonselection of that particular response level to that one part of the single survey item analyzed in PSPS as evincing a “decline” in “trust of scientists” for “several decades among U.S. conservatives” (Hmielowski et al. 2013).

But if you do, then you will be obliged to conclude that a majority of those who identify themselves as “liberals” are deeply "skeptical" and “distrustful” of scientists too.  The whole nation, on this reading of the data featured in PSPS, would have to be regarded as having “lost faith” in science—indeed, as never having had any to begin with.

That would be absurd. 

It would be absurd because the very GSS survey item in question has consistently found—for decades—that members of the US general public are more “confident” in those who “run” the “scientific community” than they are in those who “run” “major companies,” the “education” system, “banks and financial institutions,” “organized religion,” the “Supreme Court,” and the “press.”

For the entire period under investigation, conservatives rated the “scientific community” second among the 13 major U.S. institutions respondents were instructed to evaluate.

If one accepts that it is valid to measure public "trust” in institutions by focusing so selectively on this portion of the data from the GSS "confidence in institutions" item, then we’d also have to conclude that conservatives were twice as likely to “distrust” those who “run . . . major companies” in the US as they were to “distrust” scientists.

That’s an absurd conclusion, too. 

PSPS’s analysis for sure adds to the stock of knowledge that scholars who study public attitudes toward science can usefully reflect on.

But the trend the study shows cannot plausibly be viewed as supporting inferences about the level of trust that anyone, much less conservatives, have in science.

That’s the summary.  Now keep your promise and continue reading.

A. Let’s get some things out of the way

Okay, first some introductory provisos

1. I think PSPS is a decent study.  The study notes a real trend & it’s interesting to try to figure out what is driving it.  In addition, PSPS is also by no means the only study by Gordon Gauchat that has taught me things and profitably guided the path of my own research.  Maybe he'll want to say something about how I'm addressing the data he presented (I'd be delighted if he posted a response here!).  But I suspect he cringes when he hears some of the extravagant claims that people make--the playground-like prattle people engage in--based on the interesting but very limited and tightly focused data he reported in PSPS.

2. There’s no question (in my mind at least) that various “conservative” politicians and conflict entrepreneurs have behaved despicably in misinforming the public about climate change. No question that they have adopted a stance that is contrary to the best available evidence, & have done so for well over a decade.

3. There are plenty of legitimate and interesting issues to examine relating to cognitive reasoning dispositions and characteristics such as political ideology, cultural outlooks, and religiosity. Lots of intriguing and important issues, too, about the connection between these indicators of identity and attitudes toward science.  Many scholars (including Gauchat) and reflective commentators are reporting interesting data and making important arguments relating to these matters.  Nevertheless, I don’t think “who is more anti-science—liberals or conservatives” is an intrinsically interesting question—or even a coherent one.  There are many many more things I’d rather spend my time addressing.

But sadly, it is the case that many scholars and commentators and ordinary citizens insist there is a growing “anti-science” sensibility among a meaningful segment of the US population.  The “anti-science” chorus doesn’t confine itself to a single score, but “conservatives” and “religious” citizens are typically the population segments it characterizes in this manner.

Advocates and commentators incessantly invoke this “anti-science” sentiment as the source of political conflict over climate change, among other issues.

Those who make this point also constantly invoke one or another “peer reviewed empirical study” as “proving” their position.

And one of the studies they point to is PSPS.

Because I think the anti-science trope is wrong; because I think it actually aggravates the real dynamics of cultural status competition that drive conflict over climate science and various other science-informed issues; because I think many reasonable people are nevertheless drawn to this account as a kind of a palliative for the frustration they feel over the persistence of cultural conflict over climate change; because I think empirical evidence shouldn’t be mischaracterized or treated as a kind of strategic adornment for arguments being advanced on other grounds; because I have absolutely no worries that another scholar would resent my engaging his or her work in the critical manner characteristic of the process of conjecture and refutation that advances scientific understanding; and because only a zealot or a moron would make the mistake of thinking that questioning what conclusions can appropriately be drawn from another scholar’s empirical research, criticizing counterproductive advocacy, or correcting widespread misimpressions is equivalent to “taking the side of” political actors who are misinforming the public on climate change, I’m going to explain why PSPS does not support claims like these:

[Image: a sampling of such claims from commentators citing PSPS--have they actually read the study?]

B. Have you actually read PSPS?

It only takes about 5 seconds of conversation to make it clear that 99% of the people who cite PSPS have never read it.

They don’t know it consists of an analysis of one response level to a single multi-part public opinion item contained in the General Social Survey, a public opinion survey that has been conducted repeatedly for over four decades (28 times between 1974 and 2012).

Despite how it is characterized by those citing PSPS, the item does not purport to measure “trust” in science. 

It is an awkwardly worded question, formulated by commercial pollsters in the 1960s, that is supposed to gauge “public confidence” in a diverse variety of (ill-defined, overlapping) institutions (Smith 2012):

I am going to name some institutions in this country. As far as the people running these institutions are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?

a. Banks and Financial Institutions [added in 1975]

b. Major Companies

c. Organized Religion

d. Education

e. Executive Branch of the Federal Government

f. Organized Labor

g. Press

h. Medicine

i. TV

j. U.S. Supreme Court

k. Scientific Community

l. Congress

m. Military

For the period from 1974 to 2010, PSPS examines what proportion of respondents selected the response “a great deal of confidence” in those “running” the “Science community.”

[PSPS figure: the proportion of respondents selecting "a great deal of confidence" in the scientific community, by ideology, 1974-2010]

As should be clear, the PSPS figure above plots changes only in the “great deal of confidence” response. 

I’m sure everyone knows how easy it is to make invalid inferences when one examines only a portion rather than all of the response data associated with a survey item.

Thus, I’ve constructed Figures that make it possible to observe changes in all three levels of response for both liberals and conservatives over the relevant time period: 

As can be seen in these Figures, the proportion selecting “great deal” has held pretty constant at just under 50% for individuals who identified themselves as “liberals” of some degree (“slight,” “extreme,” or in between) on a seven-point ideology measure (one that was added to the GSS in 1974).

Among persons who described themselves as “conservatives” of some degree, the proportion declined from about 50% to just under 40%.  (In the 2012 GSS—the most recent edition—the figures for liberals and conservatives were 48% and 40%, respectively. I also plotted pcts for "great deal" in relation to the relevant GSS surveys "yesterday" in this post.)
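For the 14 billion of you who want to check this at home, the tabulation is simple. A minimal sketch, assuming a GSS extract saved as a csv (the file name is hypothetical; POLVIEWS & CONSCI are the GSS's variable names for the 7-point ideology measure and the confidence-in-the-scientific-community item, but verify the codings against the codebook):

import pandas as pd

# Hypothetical GSS extract; codings assumed per the GSS codebook
# (POLVIEWS: 1 = extremely liberal ... 7 = extremely conservative;
# CONSCI: 1 = "a great deal", 2 = "only some", 3 = "hardly any")
gss = pd.read_csv("gss_extract.csv")

gss["ideology"] = pd.cut(gss["POLVIEWS"], bins=[0, 3, 4, 7],
                         labels=["liberal", "moderate", "conservative"])

# Share of each response level, by year and ideology
shares = (gss.groupby(["YEAR", "ideology"])["CONSCI"]
             .value_counts(normalize=True)
             .unstack())

print(shares.xs("conservative", level="ideology")[1])  # pct "a great deal"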

The decline in the proportion of conservatives selecting “great deal” looks pretty continuous to the naked eye, but using a multi-level multivariate analysis (more on that below), PSPS reported finding that the decline was steeper after the elections of Ronald Reagan in 1980 and George W. Bush in 2000.

That’s it.

Do you think that these data justify conclusions like "conservatives' trust in science has declined sharply," "conservatives have turned on science," "Republicans really don't like science," "conservatives have lost their faith in science," "fewer conservatives than ever believe in science," etc?  

If so, let me explain why you are wrong.

C.  Critically engaging the data

1. Is everyone anti-science?

To begin, why should we regard the “great deal of confidence” response level as the only one that evinces “trust”?

“Hardly any” confidence would seem distrustful, I agree.

But note that the proportion of survey respondents selecting “hardly any at all” held constant at under 10% over the entire period for both conservatives and liberals.

Imagine I said that I regarded that as inconsistent with the inference that either conservatives or liberals “distrust” scientists.

Could you argue against that?

Sure.

But if you did, you’d necessarily have to be saying that selecting “some confidence” evinces  “distrust” in scientists.

If you accept that, then you’ll have to conclude that a majority of “liberals” distrust scientists today,  too, and have for over 40 years.

For sure, that would be a conclusion worthy of headlines, blog posts, and repeated statements of deep concern among the supporters of enlightened self-government.

But such a reading of this item would also make the decision to characterize only conservatives as racked with “distrust” pathetically selective.

2.  Wow--conservative Republicans sure “distrust” business!

You’d also still be basing your conclusion on only a small portion of the data associated with the survey item.

Take a look, for example, at the responses for “Major companies”: 

It’s not a surprise, to me at least, that conservatives have had more confidence than liberals in “major companies” over the entire period.

I’m also not that surprised that even conservatives have less confidence in major companies today than they did before the financial meltdown.

But if you are of the view that any response level other than “a great deal of confidence” evinces “distrust,” then you’d have to conclude that 80% of conservatives today “distrust” our nation’s business leaders.

You’d also have to conclude that conservatives are twice as likely to trust those “running . . . the scientific community” as they are to trust those “running . . . major companies.”

I’d find those conclusions surprising, wouldn’t you?

But of course we should be willing to update our priors when shown valid evidence that contradicts them. 

The prior under examination here is that PSPS supports the claim that conservatives “don’t believe in science,” "have turned on science," “reject it," have "lost their faith in it," have been becoming "increasingly skeptical" of it "for decades,"  etc.

The absurdity of the conclusions that would follow from this reading of PSPS--that liberals and conservatives alike "really don't like science," that conservatives have so little trust in major companies that they'd no doubt vote to nationalize the healthcare industry, etc.--is super strong evidence that it's unjustifiable to treat the single response level of the GSS "confidence" item featured in PSPS as a litmus test of anyone's "trust" in science.

3.  Everyone is pro-science according to the data presented in PSPS

What exactly do responses to the GSS “confidence” item signify about how conservatives and liberals feel about those “running” the “Scientific community”?

Again, it’s always a mistake to draw inferences from a portion of the response to a multi-part survey item.  So let’s look at all of the data for the GSS confidence item.

The mean scores are plotted separately for “liberals” and “conservatives.” The 13 institutions are listed in descending order as rated by conservatives--i.e., from the institution in which conservatives expressed the greatest level of confidence in each period to the one in which they expressed the least.
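For readers who want to reproduce that ranking, here is a sketch of the computation--again against the hypothetical extract assumed above; the confidence-item names follow the GSS codebook as I read it, and the reverse-coding is mine:

```python
# Sketch: mean institutional-confidence scores by ideology, with
# institutions ranked by conservatives' scores. GSS items are coded
# 1 = "a great deal" ... 3 = "hardly any", so reverse them to make
# higher = more confidence.
import pandas as pd

gss = pd.read_csv("gss_extract.csv")  # hypothetical GSS extract

conf_items = ["CONSCI", "CONMEDIC", "CONARMY", "CONBUS", "CONCLERG",
              "CONFINAN", "CONEDUC", "CONLABOR", "CONFED", "CONLEGIS",
              "CONJUDGE", "CONPRESS", "CONTV"]  # the 13 rated institutions

ideology = pd.cut(gss["POLVIEWS"], bins=[0, 3, 4, 7],
                  labels=["liberal", "moderate", "conservative"])

means = (4 - gss[conf_items]).groupby(ideology).mean()

# Descending order of conservatives' mean confidence; on the reading
# urged here, CONSCI should sit at or near the top.
print(means.loc["conservative"].sort_values(ascending=False))
```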

The variance in selection of the "great deal" response level analyzed in PSPS is reflected in the growing difference between liberals' and conservatives' respective overall "confidence" scores for "the Scientific Community."

Various other things change, too.

But as can be seen, during every time period—including the ones in which Ronald Reagan and G.W. Bush were presidents—conservatives awarded the “science community” the second-highest confidence score among the 13 rated institutions. Before 1990, conservatives ranked the “science community” just a smidgen below “medicine”; since then, only the “military” has ranked higher.

Conservatives rated the “science community” ahead of “major companies,” “organized religion,” “banks and financial institutions,” and “education,” not to mention “organized labor,” the “Executive Branch of the Federal Government” (during the Reagan and G.W. Bush administrations!), Congress, and “TV” throughout the entire period!

Basically the same story with liberals.  They rated the “science community” second behind “medicine” before 1990, and first in the periods thereafter.

So what inference can be drawn?

Certainly not that conservatives distrust science or any group of scientists.

Much more plausible is that conservatives, along with everyone else, hold science in extremely high regard.

That’s obvious, actually, given that the “confidence” item sets up a beauty contest by having respondents evaluate all 13 institutions.

But this reading—that conservatives, liberals, and everyone else have a high regard for science—also fits the results plainly indicated by a variety of other science-attitude items that appear in the GSS and in other studies.

It’s really really really not a good idea to draw a contentious/tendentious conclusion from one survey item (much less one response level to one part of a multi-part one) when that conclusion is contrary to the import of numerous other pertinent measures of public opinion.

4. Multivariate analysis

The analyses I’ve offered are very simple summary ones based on “raw data” and group means.

There really is nothing to model statistically here, if we are trying to figure out whether these data could support claims like "conservatives have lost their faith in science" or  have become “increasingly skeptical and distrustful” toward it. If that were so, the raw data wouldn't look the way it does.

Nevertheless, PSPS contains a multivariate regression model that puts liberal-conservative ideology on the right-hand side with numerous other individual characteristics. Which way does that cut?

As much as I admire the article, I'm not a fan of the style of model PSPS uses here.

E.g., what exactly are we supposed to learn from a parameter that reflects how much being a "conservative" rather than a "liberal" affects the probability of selecting the "great deal" response "controlling for" respondents' political party affiliation?

Overspecified regressions like these treat characteristics like being “Republican,” “conservative,” a regular church goer, white, male, etc. as if they were all independently operating modules that could be screwed together to create whatever sort of person one likes.

In fact, real people have identities associated with particular, recognizable collections of these characteristics.  Because we want to know how real people vary, the statistical model should be specified in a way that reflects differences in the combinations of characteristics that indicate these identities--something that can’t be validly done when the covariance of these characteristics is partialed out in a multivariate regression (Lieberson 1985; Berry & Feldman 1985).
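To make the worry concrete, here is a toy version of that style of model--emphatically not PSPS's actual specification; it reuses the hypothetical extract and illustrative recoding cutoffs from the sketches above:

```python
# Toy illustration of the "independent modules" worry: regress selecting
# "a great deal" on both ideology and party at once. Cutoffs below are
# illustrative; missing responses simply fall to 0 here, fine for a toy.
import pandas as pd
import statsmodels.formula.api as smf

gss = pd.read_csv("gss_extract.csv")  # hypothetical GSS extract

df = pd.DataFrame({
    "great_deal": (gss["CONSCI"] == 1).astype(int),
    "conservative": (gss["POLVIEWS"] >= 5).astype(int),
    # GSS PARTYID: 0 = strong Democrat ... 6 = strong Republican
    "republican": gss["PARTYID"].between(4, 6).astype(int),
})

# How much do the two "modules" overlap?
print(df[["conservative", "republican"]].corr())

# Each coefficient is the effect of flipping one switch while holding
# the other fixed -- a person assembled to order, not one observed.
model = smf.logit("great_deal ~ conservative + republican", data=df).fit()
print(model.summary())
```

The corr() line is the point: with “conservative” and “Republican” overlapping so heavily, the partialed coefficient on each describes a person--e.g., a conservative non-Republican--who is comparatively rare in the actual data.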

But none of this changes anything. The raw data tell the story. The misspecified model doesn’t tell a different one—it just generates a questionable estimate of the difference in likelihood that a liberal as opposed to a conservative will select “great deal” as the response on “confidence” when assessing those who “run . . . the Scientific Community” (although in fact PSPS reports a regression-model estimate of 10%—which is perfectly reasonable given that that’s exactly what one observes in the raw data).

5. Someone should do a study on this!

There’s one last question worth considering, of course.

If I’m right that PSPS doesn’t support the conclusion that conservatives have “lost faith” in science, why do so many commentators keep insisting that that’s what the study says?  Don’t we need an explanation for that?

Yes. It is the same explanation we need for how a liberal democracy whose citizens are as dedicated to pluralism and science as ours are could be so plagued by unreasoning sectarian discourse about the enormous stock of knowledge at its disposal.

Refs

Berry, W.D. & Feldman, S. Multiple Regression in Practice (Sage Publications, Beverly Hills, 1985).

Gauchat, G. Politicization of Science in the Public Sphere. Am. Sociological Rev. 77, 167-187 (2012).

Hmielowski, J.D., Feldman, L., Myers, T.A., Leiserowitz, A. & Maibach, E. An attack on science? Media use, trust in scientists, and perceptions of global warming. Public Understanding of Science (2013).

Lewandowsky, S., Gignac, G.E. & Oberauer, K. The role of conspiracist ideation and worldviews in predicting rejection of science. PLoS ONE 8, e75637 (2013).

Lieberson, S. Making It Count: The Improvement of Social Research and Theory (University of California Press, Berkeley, 1985).

Smith, T.W. Trends in Confidence in Institutions, 1973-2006. In Social Trends in American Life: Findings from the General Social Survey Since 1972 (ed. P.V. Marsden) (Princeton University Press, 2012).
