Had a great time yesterday at UCLA, where I was afforded the honor of giving a lecture in the Jacob Marschak Interdisciplinary Colloquium on Mathematics and Behavioral Science. The audience asked lots of thoughtful questions. Plus I got the opportunity to learn lots of cool things (like how many atoms are in the Sun) from Susan Lohmann, Mark Kleiman, and others.
I believe they were filming and will upload a video of the event. If that happens, I'll post the link. For now, here's a summary (to the best of my recollection) & slides.
1. The science communication problem & the cultural cognition thesis
I am going to offer a synthesis of a body of research findings generated over the course of a decade of collaborative research on public risk perceptions.
The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy-relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of the HPV vaccine, and the never-ending dispute over the efficacy of gun control.
In addition to being annoying (in particular, to scientists—who feel frustratingly ignored—but also to anyone who believes self-government and enlightened policymaking are compatible), the science communication problem is also quite peculiar. The factual questions involved are complex and technical, so maybe it should not surprise us that people disagree about them. But the beliefs about them are not randomly distributed. Rather they seem to come in familiar bundles (“earth not heating up . . . ‘concealed carry’ laws reduce crime”; “nuclear power dangerous . . . death penalty doesn’t deter murder”) that in turn are associated with the co-occurrence of various individual characteristics, including gender, race, region of residence, and ideology (but not really so much income or education), that we identify with discrete cultural styles.
The research I will describe reflects the premise that making sense of these peculiar packages of types of people and sets of factual beliefs is the key to understanding—and solving—the science communication problem. The cultural cognition thesis posits that people’s group commitments are integral to the mental processes through which they apprehend risk.
2. A Model
A Bayesian model of information processing can be used heuristically to make sense of the distinctive features of any proposed cognitive mechanism. In the Bayesian model, an individual exposed to new information revises her prior estimate of the probability of some proposition (expressed in odds) in proportion to the likelihood ratio associated with the new evidence (i.e., how much more consistent the new evidence is with that proposition than with some alternative).
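The odds-form updating rule just described can be sketched in a few lines of code (a minimal illustration; the function name and the numbers are my own hypothetical choices, not anything from the CCP studies):

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Odds-form Bayes: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# Hypothetical example: someone puts 2:1 odds on a risk claim, then sees
# evidence that is 3x more consistent with the claim than with its negation.
posterior_odds = update_odds(2.0, 3.0)                  # 6:1 odds
posterior_prob = posterior_odds / (1 + posterior_odds)  # odds -> probability
```

The key property for what follows is that, for an unbiased Bayesian, the likelihood ratio is fixed by the evidence itself and does not depend on the prior.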
A person experiences confirmation bias when she selectively searches out and credits new information conditional on its agreement with her existing beliefs. In effect, she is not updating her prior beliefs based on the weight of the new evidence; she is using her prior beliefs to determine what weight the new evidence should be assigned. Because of this endogeneity between priors and likelihood ratio, she will fail to correct a mistaken belief or fail to correct as quickly as she should despite the availability of evidence that conflicts with that belief.
The cultural cognition model posits that individuals have “cultural predispositions”—that is, some tendency, shared with others who hold like group commitments, to find some risk claims more congenial than others. In relation to the Bayesian model, we can see cultural predispositions as the source of individuals’ priors. But cultural dispositions also shape information processing: people more readily search out (or are more likely to be exposed to) evidence congenial to their cultural predispositions than evidence noncongenial to them; they also selectively credit or discredit evidence conditional on its congeniality to their cultural predispositions.
Under this model, we will often see what looks like confirmation bias because the same thing that is causing individuals’ priors—cultural predispositions—is shaping their search for and evaluation of new evidence. But in fact, the correlation between priors and likelihood ratio in this model is spurious.
The more consequential distinction between cultural cognition and confirmation bias is that with the former people will not only be stubborn but disagreeable. People’s cultural predispositions are heterogeneous. As a result, people with different values will start with different priors, and thereafter engage in opposing forms of biased search for confirming evidence, and selectively credit and discredit evidence in opposing patterns reflective of their respective cultural commitments.
If this is how people behave, we will see the peculiar pattern of group conflict associated with the “science communication problem.”
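The polarization dynamic described in this section can be illustrated with a toy simulation (everything here—the discount rule, the parameters, the evidence stream—is a hypothetical construction of mine, not the CCP model itself). Two groups see the same perfectly balanced evidence, but each takes congenial items at face value while heavily discounting noncongenial ones, and they end up on opposite sides:

```python
def discounted_lr(true_lr: float, congenial: bool, discount: float = 0.25) -> float:
    """Congenial evidence is taken at face value; noncongenial evidence
    has its likelihood ratio shrunk toward a neutral value of 1."""
    return true_lr if congenial else true_lr ** discount

def update(prior_odds: float, evidence_stream, believer: bool) -> float:
    odds = prior_odds
    for lr in evidence_stream:
        # Evidence favoring the claim (lr > 1) is congenial to "believers,"
        # noncongenial to "skeptics" -- and vice versa.
        congenial = (lr > 1) == believer
        odds *= discounted_lr(lr, congenial)
    return odds

# Perfectly balanced evidence: two pro items, two con items.
evidence = [2.0, 0.5, 2.0, 0.5]

# Both groups start at the same 1:1 odds, to isolate biased assimilation
# from differing priors (in the text, priors differ too).
believer_odds = update(1.0, evidence, believer=True)   # ends well above 1
skeptic_odds = update(1.0, evidence, believer=False)   # ends well below 1
```

An unbiased assessor would multiply the same likelihood ratios together and stay at exactly 1:1; the two biased groups diverge from identical information, which is the signature pattern of the science communication problem.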
3. Nanotechnology: culturally biased search & assimilation
CCP tested this model by studying the formation of nanotechnology risk perceptions. In the study, we found that individuals exposed to information on nanotechnology polarized relative to uninformed subjects along lines that reflected the environmental and technological risks associated with their cultural groups. We also found that the observed association between “familiarity” with nanotechnology and the perception that its benefits outweigh its risks was spurious: both the disposition to learn about nanotechnology before the study and the disposition to react favorably to information were caused by the (pro-technology) individualistic worldview.
This result fits the cultural cognition model. Cultural predispositions toward environmental and technological risks predicted how likely subjects of different outlooks were to search out information on a novel technology and the differential weight (the "likelihood ratio," in Bayesian terms) they'd give to information conditional on being exposed to it.
4. Climate change
a. In one study, CCP found that cultural cognition shapes perceptions of scientific consensus. Experiment subjects were more likely to recognize a university-trained scientist as an “expert” whose views were entitled to weight—on climate change, nuclear power, and gun control—if the scientist was depicted as holding the position that was predominant in the subjects’ cultural group. In effect, subjects were selectively crediting or discrediting (or modifying the likelihood ratio assigned to) evidence of what “expert scientists” believe on these topics in a manner congenial to their cultural outlooks. If this is how they react in the real world to evidence of what scientists believe, we should expect them to be culturally polarized on what scientific consensus is. And they are, we found in an observational component of the study. These results also cast doubt on the claim that the science communication problem reflects the unwillingness of one group to abide by scientific consensus, as well as any suggestion that one group is better than another at perceiving what scientific consensus is on polarized issues.
b. In another study, CCP found that science comprehension magnifies cultural polarization. This is contrary to the common view that conflict over climate change is a consequence of bounded rationality. The dynamics of cultural cognition operate across both heuristic-driven “System 1” processing and reflective “System 2” processing. (The result has also been corroborated experimentally.)
5. The “tragedy of the science communications commons”
The science communication problem can be understood to involve a conflict between two levels of rationality. Because their personal behavior as consumers or voters is of no material consequence, individuals don’t increase their own exposure to harm or that of anyone else when they make a “mistake” about climate science or like forms of evidence on societal risks. But they do face significant reputational and like costs if they form a view at odds with the one that predominates in their group. Accordingly, it is rational at the individual level for individuals to attend to information in a manner that reinforces their connection to their group. This is collectively irrational, however, for if everyone forms his or her perception of risk in this way, democratic policymaking is less likely to converge on policies that reflect the best available evidence.
The solution to this “tragedy of the science communication commons” is to neutralize the conflict between the formation of accurate beliefs and group-congenial ones. Information must be conveyed in ways—or conditions otherwise created—that avoid putting people to a choice between recognizing what’s known and being who they are.
You will want me to show you how to do that, and on climate change. But I won’t. Not because I can’t (see these 50 slides flashed in 15 seconds). Rather, the reason is that I know there’s no risk you’ll fail to ask me what I have to say about “fixing the climate change debate” if I don’t address that topic now, and that if I do, the risk is high you’ll neglect to ask another question that I think is very important: how does this sort of conflict between recognizing what’s known and being who one is happen in the first place?
Such a conflict is pathological. It’s bad. And it’s not the norm: the number of issues on which positions could become entangled with group-congenial meanings vastly exceeds the number on which they actually do. If we could identify the influences that cause this pathological state, we likely could figure out how to avoid it, at least some of the time.
The HPV vaccine is a good illustration. The HPV vaccine generated tremendous controversy because it became entangled in divisive meanings relating to gender roles and parental sovereignty versus collective mandates of medical treatment for children. But there was nothing necessary about this entanglement; the HBV vaccine is likewise aimed at a sexually transmitted disease, was placed on the universal childhood-vaccination schedule by the CDC, and now has coverage rates of 90-plus percent year in & year out. Why did the HPV vaccine not travel this route?
The answer was the marketing strategy followed by Merck, the manufacturer of the HPV vaccine Gardasil. Merck did two things that made it highly likely the vaccine would become entangled in conflicting cultural meanings: first, it decided to seek fast-track approval of the vaccine for girls only (only females face an established “serious disease” risk—cervical cancer—from HPV); and second, it orchestrated a nationwide campaign to press for adoption of mandatory vaccine policies at the state level. This predictably provoked conservative religious opposition, which in turn provoked partisan denunciation.
Neither decision was necessary. If the company hadn’t pressed for fast-track consideration, the vaccine would have been approved for males and females within 3 years (it took longer to get approval for males because of the resulting controversy after approval of the female-only version). In addition, even without state mandates, universal coverage could have been obtained through commercial and government-subsidized insurance. That outcome wouldn’t have been good for Merck, which wanted to lock up the US market before GlaxoSmithKline obtained approval for its HPV vaccine. But it would have been better for our society, because then, instead of learning about the vaccine from squabbling partisans, parents would have learned about it from their pediatricians, in the same way that they learn about the HBV vaccine.
The risk that Merck’s campaign would generate a political controversy that jeopardized acceptability of the vaccine was forecast in empirical studies. It was also foreseen by commentators as well as by many medical groups, which argued that mandatory vaccination policies were unnecessary.
The FDA and CDC ignored these concerns, not because they were “in Merck’s pocket” but because they were simply out of touch. They had no mechanism for assessing the impact that Merck’s strategy might have or for taking the risks this strategy was creating into account in determining whether, when, and under what circumstances to approve the vaccine.
This is a tragedy too. We have tremendous scientific intelligence at our disposal for promotion of the common welfare. But we put the value of it at risk because we have no national science-communication intelligence geared to warning us of, and steering us clear of, the influences that generate the disorienting fog of conflict that results when policy-relevant facts become entangled in antagonistic cultural meanings.
6. A “new political science”
Cultural cognition is not a bias; it is integral to rationality. Because individuals must inevitably accept as known by science many more things than they can comprehend, their well-being depends on their becoming reliably informed of what science knows. Cultural certification of what’s collectively known is what makes this possible.
In a pluralistic society, however, the sources of cultural certification are numerous and diverse. Normally they will converge; ways of life that fail to align their members with the best available evidence on how to live well will not persist. Nevertheless, accident and misadventure, compounded by strategic behavior, create the persistent risk of antagonistic meanings that impede such convergence—and thus the permanent risk that members of a pluralistic democratic society will fail to recognize the validity of scientific evidence essential to their common welfare.
This tension is built into the constitution of the Liberal Republic of Science. The logic of scientific discovery, Popper teaches us, depends on the open society. Yet the same conditions of liberal pluralism that energize scientific inquiry inevitably multiply the number of independent cultural certifiers that free people depend on to certify what is collectively known.
At the birth of modern democracy, Tocqueville famously called for a “new political science for a world itself quite new.”
The culturally diverse citizens of fully matured democracies face an unprecedented challenge, too, in the form of the science communication problem. To overcome it, they likewise are in need of a new political science—a science of science communication aimed at generating the knowledge they need to avoid the tragic conflict between converging on what is known by science and being who they are.
Gave a talk today at an event sponsored by the Democracy Fund. Topic was how to promote high-quality democratic deliberations in 2016.
Pretty sure the guy who would have been ideal for the talk was Brendan Nyhan. Maybe he wasn't available. But I did the best I could, which included advising them to be sure to consult Nyhan's work on the risk of perverse effects from aggressive "fact checking."
Outline of my remarks below (delivered in 10 mins! Barely time for one sentence; of course, even w/ 120 mins, I still wouldn't use more than one sentence). Slides here.
1. Overview: Cognition & reasoned public engagement
Promoting reasoned public engagement with issues of consequence requires more than supplying information. The public’s assessment of information is governed by cognitive dynamics that are independent of information availability and content. Indeed, such dynamics can produce perverse effects: e.g., polarization in response to accurate information, or intensification of mistaken belief in face of “fact checking” challenges. The anticipation of such effects, moreover, can create incentives for political campaigns to foster public engagement that isn’t connected to the best available evidence, or simply to ignore issues of tremendous consequence.
2. 2012: Two dynamics, two missing issues
a. Climate change was largely ignored in the 2012 Presidential election b/c of “culturally toxic meanings.” When positions become symbols of group membership & loyalty, citizens resist engaging information that is hostile to their group, and draw negative inferences about the values and character of political candidates who present it. It is thus safer for candidates in a close election to steer clear of the issue than to try to persuade. This explains Obama's and Romney's decisions to avoid climate: addressing it couldn't have informed the public, and it carried a much bigger risk of alienating voters they hoped they might otherwise appeal to.
b. Campaign finance, arguably the most important issue confronting US, was ignored, too, not because of toxic meaning but because of “affective poverty.” Public opinion reflects widespread support for all manner of campaign finance regulation. But the issue is inert; it doesn’t generate the images, stories, associations through which citizens apprehend matters of consequence for their lives. Thus, focusing on it would be a waste from candidates’ point of view.
3. 2016: Managing the cognitive climate
a. The influences that determine cognitive engagement can’t be ignored but also shouldn’t be treated as fixed or given. If a cognitive mechanism that frustrates engagement can be identified, responsive strategies can be formulated to try to counteract the operation of that mechanism. I’ll focus on the 2012 ignored issues as examples, but the same orientation would be appropriate for any other issue.
b. Local political activity on adaptation is vibrant even in regions—e.g., Fla. & Ariz.—in which climate change mitigation is a taboo topic for political actors. Adaptation is free of the toxic meanings that surround the climate change debate and indeed congenial to locally shared ones. Promoting constructive deliberations on adaptation has the potential to free the climate debate from meanings that block public engagement and scare politicians off. The cognitive climate would then be more hospitable for national engagement in 2016.
c. Between now & 2016, there is time to work on affective enrichment of campaign finance. Just as public health activists did with cigarettes, so activists can create and appropriately seed public discourse with culturally targeted narratives that infuse campaign finance with motivating resonances. This would create an incentive for candidates to feature the issue rather than ignore it in campaigns.
4. Proviso: Cognitive climate management must be evidence based.
The number of plausible strategies for positively managing the cognitive climate will always exceed the number that will actually work. Imaginative conjecture alone won’t reliably extract the latter from the sea of the former. For that, it’s necessary to use evidence-based strategies. Activists confronted with practical objectives and possessed of local knowledge should collaborate with social scientists to formulate hypotheses about strategies for managing the cognitive climate, and to use observation and measurement for fine tuning and assessing those strategies. And they should start now.
How common is it to notice & worry about the influence of cultural cognition on what one knows? If one is worried, what should one do?
Just yesterday, I successfully stopped myself from telling a person that their expressed belief has not a shred of evidence to support it (just in case you're wondering: it wasn't a religious belief; it was something that could be demonstrated scientifically but hasn't been). I stopped myself (pat on the head goes here) because, for one thing, I knew it would lead nowhere; and for another, I have my share of beliefs with a similar status of not being supported by scientific evidence (but not disproved by it either).
Just like anyone else beyond the age of five or ten, I have a worldview, my own particular blend of education, research, life experiences, internalized beliefs, etc. And by now, this worldview isn't easy to shake, let alone change. It doesn't mean that I disregard new scientific evidence, but it does mean that whenever I hear of new findings that seem to be in explicit contradiction with my worldview, I make a point of finding the source and reading it in some detail (going to a university library if need be). In 99 cases out of 100 (at least), it turns out that I don't have to change my worldview after all: sometimes the apparent contradiction results from BBC-style popularization with a healthy dose of exaggeration or downright mistakes on a slow news day; sometimes the original research arrives at a barely statistically significant result based on far too small a sample, prettified to make it publishable; or something else, or both.
But the dangerous thing is, if a reported finding does agree with my worldview, I usually don't go to such lengths to check the original source and the quality of research (with few exceptions). There is, of course, a certain degree of confirmation bias at work here, but my time on this earth is limited and I cannot spend it all in checking and re-checking what is already part of my worldview. What I do try to avoid in such cases is the very tempting assumption that now, finally, this particular belief is knowledge based on scientific evidence (unless I really checked it at least with the same rigor as described above). I am afraid I am not always successful in this... are you?
Here are my questions (feel free to add & answer others):
1. What fraction of people are likely to be this self-reflective about how they know what they know?
2. Would things be better if in fact it were more common for people to reflect on the relationship between who they are & what they know, on how this might lead them to error, and on how it might create conflict between people of different outlooks? If so, how might such reflection be promoted (say, through education, or forms of civic engagement)?
3. Okay: what is the answer to the question that Levin is posing (I understand her to be asking not merely whether others who use her strategy think they are successful with it but also whether that strategy is likely to be effective in general & whether there are others that might work better)? What should a person who knows about this do to adjust to the likely tendency to engage in biased search (& assimilation) consistent w/ worldview?
Another graphical model of the occasion for the Levin anxiety.
She called my attention to the study a few days ago & I'm just now getting a chance to think about it in a serious way. So far what's grabbing my attention the most is the scatterplot of "preferred arguments," although I definitely have a range of thoughts & questions that I plan to relay to Jen. I'm sure she'd like to know what others think too. Plus check out her site & learn about her really interesting general project.
I've reflected a bit more on this (& this). I've pinpointed the source of my frustration: the conflation of the "anti-vaccine movement" with a "growing crisis of public confidence,” a “growing wave of public resentment and fear,” an “epidemic of fear" etc. that have pushed us to the “tipping point” at which “herd immunity breaks down” – or indeed, over it “causing epidemics” in whooping cough & other diseases because of the “low vaccination rate.”
The second is a phantom. It also warrants being identified & analyzed. How do so many come to be so terrified of something that is genuinely terrifying but that doesn't truly exist? Psychological dynamics are involved, certainly, but I suspect manipulative forms of self-promotion -- ones that reflect a betrayal of craft -- are also at work.
Whatever its cause, though, the propagation of the assertion that there is a "growing crisis of public confidence" in vaccines -- a claim frequently bundled with the empirically unsupported proposition that science is "losing authority" in our society -- deserves being opposed too. Our science communication environment should not be polluted with misrepresentation. Fear should not dilute the currency of reason in public discussion. The Liberal Republic of Science shouldn't tolerate partisan resort to "anti-science" red-scare tactics (on left or right).
The moral force of these principles doesn't depend on proof of the bad consequences that disregarding them produces. But violating them does predictably generate very bad consequences, including the disablement of our capacity to recognize and be guided by the best available scientific evidence in our personal and collective decisions.
Ironically, our society, which possesses more science intelligence than any in history, lacks an organized science-communication intelligence. But many, in many sectors of society, recognize this deficit and are taking effective steps to remedy it.
Science journalists are, of course, playing the leading role in this effort. We have always relied on them to make what's known by science known to those whose quality of life science can enhance. They will necessarily play a key role if our society can succeed in replacing the blundering, unreflective manner in which it now handles transmission of scientific knowledge with a set of scientifically informed practices and institutions consciously geared to performing this critical task.
So it would be ungrateful and ignorant to be angry at "the media" for being the medium of the "anti-vaccine = anti-science public" phantom. If we turn to science journalists for help in counteracting the propagation of this pernicious trope, it's not a call to "clean house." It's just a request to the thoughtful and public-spirited members of that profession to do exactly what we are relying on them to do and what they have already been doing in modeling for the rest of us what contributing to the public good of maintaining a clean science communication environment looks like.
Your grateful admirer,
1. Public fears of vaccines are vulnerable to exaggeration as a result of various influences, emotional, psychological, social, and political.
2. Fears of public fear of vaccines are vulnerable to exaggeration, too, as a result of comparable influences.
3. High-profile information campaigns aimed at combating public fear of vaccines are likely to arouse some level of that very type of fear. As Cass Sunstein has observed in summarizing the empirical literature on this effect, “discussions of low-probability risks tend to heighten public concern, even if those discussions consist largely of reassurance.”
4. Accordingly, an informed and properly motivated risk communicator would proceed deliberately and cautiously. In particular, because efforts to quiet public fears about vaccines will predictably create some level of exactly that fear, such a communicator will not engage in a high-profile, sustained campaign to “reassure” the general public that vaccines are safe without reason to believe that there is a meaningful level of concern about vaccine risks in the public generally.
5. Not all risk communicators will be informed and properly motivated. Some communicators are likely to be uninformed, either of the facts about the level of public fear or of the general dynamics of public risk perception, including the potentially perverse effects of trying to “reassure” the public. Others will not be properly motivated: they will respond to incentives (e.g., to gain attention and approval; to profit from fears of people who understandably fear there will be a decline in public vaccination rates) to exaggerate the level of public fear of vaccines.
6. Accordingly, it makes sense to be alert both to potential sources of misinformation about vaccine risk and to potential sources of misinformation about the level of public perceptions of the risk of vaccines. Being alert, at a minimum, consists in insisting that those who are making significant contributions to public discussion are being strictly factual about both sorts of risks.
Here's a point that occurs to me in reflecting on some of the thoughtful responses to yesterday's post.
I have been conflating "movement" with "public opinion." When I see "growing anti-vaccine movement," I read this to suggest that there is a sizable anti-vaccine segment of the general population -- as opposed to a social outlier fringe, akin to people who believe "contrails" are part of a secret government operation to spray toxins on vulnerable populations or manipulate the weather.
Again, I know there are indeed such people (from reading Mnookin's excellent book, e.g.). I agree they are a menace. Perhaps they are growing in number, too, but I don't think they are any less of a concern if their size is merely holding constant.
The reason I equate “growing movement” with “growing public opinion,” though, is that the media and other reports that I’m addressing do.
They write of a "growing crisis of public confidence,” a “growing wave of public resentment and fear,” an “epidemic of fear” etc. that have “result[ed] in outbreaks as herd immunity breaks down,” etc. This suggests a much more fundamental state of anxiety, a much more widespread level of nonvaccination, and a much higher resulting incidence of childhood disease than actually exists in the US today. If I'm right to believe that, then this strikes me as an ill-considered “risk communication” strategy.
[Latest post -- I hope last on this topic for time being -- reflecting how, w/ benefit of commentators' help & more reading & thinking, I now would sort out what strikes me as real & important from what strikes me as really unfortunate.]
What is the evidence that an "anti-vaccination movement" is "causing" epidemics of childhood diseases in US? ("HFC! CYPHIMU?" Episode No. 2)
note: go ahead & read this but if you do you have to read this.
This is the second episode of “Hi, fellow citizen! Can you please help increase my understanding?”--or “HFC! CYPHIMU?"--a spinoff of CCP’s wildly popular feature, “WSMD? JA!.” In "HFC! CYPHIMU?," readers compete against one another, or collectively against our common enemy entropy, to answer a question or set of related questions relating to a risk or policy-relevant fact that admits of scientific inquiry. The questions might be ones that simply occur to me or ones that any of the 9 billion regular subscribers to this blog are curious about. The best answer, as determined by “Lil Hal,”™ a friendly, artificially intelligent robot being groomed for participation in the Loebner Prize competition, will win a “Citizen of the Liberal Republic of Science/I ♥ Popper!” t-shirt!
I'm simply perplexed here. What's the evidence to support the claim that public resistance to childhood vaccination is connected to an increased incidence of any childhood disease? Where do I find it?
If one does a Google search, one can easily find scores of alarming news reports about the "growing" anti-vaccine "movement" and its responsibility for outbreaks of diseases such as whooping cough.
But it's really really hard to find a news story that presents the sort of evidence that a curious and reasonable person might be interested in seeing in support of this genuinely scary assertion.
Look, I'm 100% positive that there are vocal, ill-informed opponents of childhood vaccination. Seth Mnookin paints a vivid, disturbing picture of them in his great book The Panic Virus. These groups assert that childhood vaccinations cause autism, a thoroughly discredited claim that has been shown to have originated in flawed (likely fraudulent) research.
If the question is whether we should condemn such folks, the answer is clearly yes.
But if the question is whether we should conclude that "[t]he anti-vaccine movement [has] cause[d] the worst epidemic of whooping cough in 70 years," etc., then we need more than the spectacle of such know-nothings to answer it. For such a claim to be warranted, there must be empirical evidence of (a) declining childhood vaccination rates that are (b) tied to disease epidemics.
Actually, it's pretty easy to find evidence--outside of media reports on the anti-vaccine movement--that tends to suggest (a) is false. Consider this table from a recent (Sept. 2012) Centers for Disease Control and Prevention Morbidity and Mortality Weekly Report:
What it shows is DTaP vaccination rates for pertussis (whooping cough) holding steady at 95% for 3 or more doses and about 85% for 4 or more over the period from 2007 to 2011.
For MMR (measles, mumps, rubella), the rate hovers around 92% for the entire period.
The rate of "children receiv[ing] no vaccinations" remains constant at about 0.7% (i.e., less than 1%). (In between these rows of data are rates for various other vaccinations -- like the one for Hepatitis B -- which all seem to show the same pattern. See for yourself.)
As for (b), it's also not too hard to find public health studies concluding that the outbreak in whooping cough was not caused by declining vaccination rates. One, published recently in the New England Journal of Medicine, found that the incidence of whooping cough was actually slightly higher among children who had received a full schedule of five DTaP shots than those who hadn't, and that their immunity decreased every year after the fifth shot. That's not what you'd expect to see if the increased incidence of this illness was a consequence of nonvaccination.
"So what are the causes of today's high prevalence of pertussis?," asked a opinion commentary writer in NEJM.
First, the timing of the initial resurgence of reported cases suggests that the main reason for it was actually increased awareness. What with the media attention on vaccine safety in the 1970s and 1980s, the studies of DTaP vaccine in the 1980s, and the efficacy trials of the 1990s comparing DTP vaccines with DTaP vaccines, literally hundreds of articles about pertussis were published. Although this information largely escaped physicians who care for adults, some pediatricians, public health officials, and the public became more aware of pertussis, and reporting therefore improved.
Moreover, during the past decade, polymerase-chain-reaction (PCR) assays have begun to be used for diagnosis, and a major contributor to the difference in the reported sizes of the 2005 and 2010 epidemics in California may well have been the more widespread use of PCR in 2010. Indeed, when serologic tests that require only a single serum sample and use methods with good specificity become more routinely available, we will see a substantial increase in the diagnosis of cases in adults.
In addition, of particular concern at present is the fact that DTaP vaccines [a newer vaccine introduced in the late 1990s] are less potent than DTP vaccines.4 Five studies done in the 1990s showed that DTP vaccines have greater efficacy than DTaP vaccines. Recent data from California also suggest waning of vaccine-induced immunity after the fifth dose of DTaP vaccine.5 Certainly the major epidemics in 2005, in 2010, and now in 2012 suggest that failure of the DTaP vaccine is a matter of serious concern.
Finally, we should consider the potential contribution of genetic changes in circulating strains of B. pertussis.4 It is clear that genetic changes have occurred over time in three B. pertussis antigens — pertussis toxin, pertactin, and fimbriae. . . .
Nothing about declining vaccination rates. Nothing.
The writer concludes, very sensibly, that "better vaccines are something that industry, the Center for Biologics Evaluation and Research of the Food and Drug Administration, and pertussis experts should begin working on immediately."
He also admonishes that "we should maintain some historical perspective on the renewed occurrences of epidemic pertussis and the fact that our current DTaP vaccines are not as good as the previous DTP vaccines: although some U.S. states have noted an incidence similar to that in the 1940s and 1950s, today's national incidence is about one twenty-third of what it was during an epidemic year in the 1930s."
I should point out too that in research I've done, I've just not found any evidence that a meaningful proportion of the general public views childhood vaccination as risky, or that there are any meaningful cultural divisions on this point.
Indeed, such vaccinations are one of the most commonly cited grounds members of the U.S. general public give for their (remarkably) high regard for scientists.
So ... what to make of this?
Here are some questions:
1. Is there evidence I'm overlooking that suggests there really is a meaningful, measurable decline in vaccine rates in the U.S.? If so, please point it out, and I will certainly post it!
2. Is there evidence that nonvaccination (aside, say, from that in newly arrived immigrant groups) is genuinely responsible for any increase in any childhood disease? Ditto!
3. If not, why do the media keep making this claim? Why do so many people not ask to see some evidence?
4. If there isn't evidence for the sorts of reports I'm describing, is it constructive to make people believe that nonvaccination is playing a bigger role than it actually is in any outbreaks of childhood diseases? Might doing so actually reduce proper attention to the actual causes of such outbreaks, including ineffective vaccines? Might such reports stir up anxiety by inducing people to believe that more people are worried about the vaccines than really are?
Can you please help increase my understanding, fellow citizens?
Some helpful commentators have sent things in offline. I will consult the rules committee on whether they are eligible to win the t-shirt.
Just to sharpen a bit:
1. I don’t dispute—and don’t see how any reasonable person could—that there will be a correlation between the vaccine rate and incidences of various diseases. For that reason, we should promote vaccination through mandatory vaccination regimes and other appropriate public health policies and should condemn anyone who denies these things. Good arguments and good evidence for these propositions are easy to find. But they are not what I'm asking about.
2. I’m interested in the claim--made over & over & over in words to this effect--that there is a "growing anti-vaccine movement, typified in coastal California's whooping cough outbreaks." First, what does it mean for there to be a “movement”? Second, what is the evidence that it is “growing”? And third, what is the evidence that growth in the movement is influencing the incidence of childhood diseases?
One commentator referred me to:
Omer, S.B., Enger, K.S., Moulton, L.H., Halsey, N.A., Stokley, S. & Salmon, D.A. Geographic Clustering of Nonmedical Exemptions to School Immunization Requirements and Associations With Geographic Clustering of Pertussis. American Journal of Epidemiology 168, 1389-1396 (2008).
This study finds that there is a greater incidence of whooping cough in census tracts with higher numbers of “exempters.” I'd put this study in category 1. Nothing in it suggests that a “growing anti-vaccine movement” is “causing” an epidemic in whooping cough. Indeed, it addresses a period from 1993 to 2004--well before the outbreaks in 2010 and 2012. I assume the NEJM commentary writer was aware of this study when he concluded that an ineffective vaccine booster (the source of the outbreak identified in the NEJM study) combined with new strains of pertussis was the source of the outbreaks in 2010 and 2012. But in any case, it doesn’t support the assertion that the “Anti-Vaccine Movement Causes the Worst Whooping Cough Epidemic in 70 Years,” and doesn't make the argument for presenting the case for universal vaccination in these terms.
Maybe some other set of data does.
I don't really know, but I'm sure the sort of character deficiency that overstatement indicates is even more serious if someone who indulges in it doesn't recognize or acknowledge having done so, feel regret about it, thank the friends who pointed it out, and resolve to try to avoid recurrence.
My post on the "false & tedious defective brain meme" contained some regrettable elements of overstatement.
Before grappling with them, I want to start by extracting from the post the points that I do want to stand by and that I'm quite willing to defend in engaged discussion with others. They are essentially two: (a) that "defective rationality" accounts of polarization over policy-relevant science are ill-supported; and (b) that the frenetic and repetitive propagation of these accounts in wide-eyed, story-telling modes of presentation demeans serious public discussion and distracts thoughtful people from thoughtful engagement with this serious problem.
These are strong claims but I want to advance them strongly because I feel they are right and important, and because I believe that obliging people to confront them, to the extent that I can, will advance common understanding -- either by helping people to see why views they might hold should be abandoned or, if it turns out I'm wrong (I certainly accept that I might be), by fortifying the basis for confidence they can have in them once they've dealt with evidence that seems to suggest a very different explanation for the difficulty we face.
Here are the elements of the post that I now recognize to be in the nature of regrettable overstatement:
- The singularity and certitude with which I advanced my alternative explanation. In fact, I feel the position I articulated--one I & others have been engaged in elaborating theoretically and testing empirically for a sustained period--is the best explanation for the phenomenon I mean to address, viz., conflict over societal risks and related facts. But there are other reasonable and plausible hypotheses (ones that are also much more subtle than the "our brains make us stupid!" trope); there are also many open questions, the investigation of which can furnish evidence that warrants revising the degree of confidence a reasonable person can have that the position I advanced, and not these others, is correct. It is disrespectful of other researchers and thoughtful appraisers of research to carry on as if this were not so. The cast of mind I displayed also demeans the enterprise of empirical inquiry by evincing the vulgar attitude that science is about reaching "final" and "conclusive" answers to difficult questions. Intrinsic to science's way of knowing is recognition of the permanent provisionality of what is known; expressing oneself in a manner that obscures or denies this not only risks misleading people but is ugly. One can have and communicate conviction in favor of, and can passionately advocate action based on, one's beliefs without concealing that what one believes is necessarily based on one's best understanding of the currently available evidence.
- The thoughtless conflation of discrete and complex matters. I meant to be addressing something particular: polarization over risks and other policy-relevant facts in controversies like climate change, gun control, fiscal policy, etc. But I wrote in a manner that invited the interpretation that I was discussing something much more general. Motivated reasoning, biased assimilation, and the like are not confined to these matters; the dynamics involved in attitude polarization won't reduce to the single one I was describing. The careless generality with which I presented my views injected an air of grandiosity into them that is embarrassing as well as potentially misleading.
- Reckless imprecision in criticism. I framed my argument as a criticism of science journalists. Often science journalists do, I think, deliberately frame findings of decision science as evidence of defects in human rationality--and in particular, as inconvenient leftovers in the evolution of the "brain"--even when those findings don't bear any such interpretation. I am often quoted in articles that squeeze themselves into this template even though I don't see "cultural cognition" that way at all, and have been careful to emphasize in my discussions with writers that polarization originating in cultural cognition reflects unusual, correctible conditions inimical to reason (by analogy: if you can't see after someone shines a bright light in your eyes, that doesn't mean your eye is "defective"; it means that the normally reliable faculty of sight has been disabled by the flashing of intense bursts of light into your face). I should have noted, though, that many science journalists don't make this mistake. And even more important, many of those who do are only transmitting truly awful scholarship performed by researchers (and scholarly synthesizers) who exploit the peculiar fascination with "brain"-centered explanations. It's really a huge injustice to blame science journalists (whose craft skills should in fact be mined for insights into how to improve science communication in multiple sectors of our society) for this regrettable spectacle, which continues notwithstanding high-profile exposure of the defects in methods still routinely used in many such studies. (Oh--just to avoid compounding my problems: I don't mean to say that all neuroscience studies that feature fMRI use these bogus methods. Indeed, some of the coolest studies I've ever seen are based on fMRI used as a distinctively discerning measure in connection with inspired experimental designs. See, e.g., this.)