What can we learn from (a) studying public perceptions of the risks of technologies the public hasn't heard of & (b) studying studies that do that?
Tamar Wilner has posted another very perceptive and provocative essay in reaction to the readings for Science of Science Communication 2.0, this time in relation to Session 8, on “emerging technologies.” I’ve posted the first portion of it below, plus a link to her site for the continuation.
She also posed a very interesting question in the comments about an experiment that CCP did on nanotechnology risk perceptions. I’ve posted my answer to her question below the excerpt from her own post.
1. Tamar Wilner on studying perceived risks of emerging technologies ...
2. Q&A on a CCP study of nanotechnology risk perceptions
[I]n your paper (Cultural Cognition of the Risks and Benefits of Nanotechnology) you say, “The ‘cultural cognition’ hypothesis holds that these same patterns [cultural polarization] are likely to emerge as members of the public come to learn more about nanotechnology.” But in your blog you repeatedly make the point that only a minority of public science topics end up getting polarized - that such polarization is “pathological” in its rarity. Why then did you hypothesize that such a pattern would be likely to emerge for nanotech?
I noticed that you start to address this later in the paper when you say, “At the same time, nothing in our study suggests that cultural polarization over nanotechnology is inevitable…” and point out that proper framing can help people to extract factual information. Does this indicate that the passages used in your study employed framing likely to encourage polarization? They seem to use pretty neutral language, to me. What about them makes them polarizing - and is it possible that some polarizing language is unavoidable? For example it seems like just talking about "risks of a new technology" taps into certain egalitarian/communitarian sensibilities, but since that's exactly what the topic of discussion is, I don't see how you would avoid it.
This is a great question. It raises some important general issues & also gives me a chance to describe how my own views of the phenomenon of cultural contestation over risk have evolved since we performed the study.
The main motivation for the study, actually, was a position that we characterized as the “familiarity hypothesis”: that as people learned more about nanotechnology, their views were likely to be positive.
This was an inference from a consistent survey finding: although only a small percentage of the public reports having heard of nanotechnology, those who say they have tend to express very favorable views about the ratio of benefits to risks it is likely to involve.
That inference is specious. There is obviously something unusual about people who know about a technology that 80% of the rest of the public is unfamiliar with; it reflects poor reasoning not to anticipate that whatever caused them to become familiar with a novel technology might also dispose them to form a view that others, who lack their interest in technology, might well not form when they eventually learn about it.
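To see why that inference fails, consider a minimal simulation (all numbers here are purely hypothetical, not data from any survey): if a pro-technology disposition drives both familiarity with the technology and favorable views of it, the favorable views of the familiar minority tell us little about how the rest of the population will eventually react.

```python
# Hypothetical simulation of the selection problem behind the "familiarity
# hypothesis." All numbers are illustrative, not survey data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
protech = rng.normal(0, 1, n)  # latent pro-technology disposition

# The same disposition drives both hearing about the technology early...
p_familiar = 1 / (1 + np.exp(-(protech - 1.5)))
familiar = rng.random(n) < p_familiar

# ...and forming a favorable view of it (positive = favorable).
view = 0.8 * protech + rng.normal(0, 1, n)

print(f"share familiar:        {familiar.mean():.0%}")         # a small minority
print(f"mean view (familiar):  {view[familiar].mean():+.2f}")  # clearly favorable
print(f"mean view (everyone):  {view.mean():+.2f}")            # roughly neutral
```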
Our hypothesis, largely corroborated by the study, was that those who were already familiar with nanotechnology (or, actually, who simply said they were familiar; the surveys used self-report measures) were likely people with a pro-technology “individualist” cultural outlook, and that when individuals with anti-technology “egalitarian communitarian” outlooks were exposed to information on nanotechnology, they would likely form more negative reactions.
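To make the logic of that prediction concrete, here is a minimal sketch (not the paper’s actual analysis; variable names and data are hypothetical): the polarization prediction corresponds to a positive interaction between cultural outlook and exposure to balanced risk-benefit information.

```python
# Hypothetical sketch of testing a polarization prediction as a
# cultural-outlook x information-exposure interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "individualism": rng.normal(0, 1, n),  # cultural-outlook scale
    "exposed": rng.integers(0, 2, n),      # 1 = shown risk-benefit info
})

# Simulated outcome embodying the hypothesis: exposure amplifies the effect
# of cultural outlook on perceived benefits relative to risks.
df["benefit_perception"] = (
    0.1 * df["individualism"]
    + 0.6 * df["exposed"] * df["individualism"]
    + rng.normal(0, 1, n)
)

# "individualism * exposed" expands to both main effects plus their
# interaction; polarization shows up as a positive interaction coefficient.
model = smf.ols("benefit_perception ~ individualism * exposed", data=df).fit()
print(model.summary().tables[1])
```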
But Tamar’s perceptive question is why we expected people unfamiliar with a technology to react at all when exposed to such a small amount of information.
As she notes, only a small minority of potentially risky technologies excite polarization. People tend to overlook this fact b/c they understandably fixate on the polarized ones and ignore the vast majority of noncontroversial ones.
My answer, basically, is that I don’t think the research team really had a good grasp of that point at the time we did the study. I know I didn’t!
I think, actually, that I really did mistakenly believe that culturally infused, and hence opposing, reactions to putative risk sources were “the norm,” and that it was therefore likely our subjects would polarize in the way they did.
Looking back, I’d say the reason it was reasonable to expect subjects would polarize is that the study was putting them in the position of consciously evaluating risks and benefits.
For the vast majority of putative risk sources, on which there isn’t any meaningful level of polarization—from pasteurization of milk to medical x-rays to cell-phone radiation to high-power transmission lines—people don’t consciously think anything; they just model their behavior on what they see other people like them doing, and when they do so, it’s rare for them to observe signs that give them reason to think there is anything to worry about.
That’s a perfectly sensible approach, in my view, given that it makes sense to use far more of the information known to science in our lives than we have time to make sense of on our own.
But as I said, the study subjects were being prompted to do conscious risk assessment. Apparently, in doing that, they reliably extracted from the balanced risk-benefit information culturally affective resonances that enabled them to assimilate this novel putative risk source—nanotechnology—to a class of risks, environmental ones, on which members of their group are in fact culturally polarized.
Being made to expect, in effect, that there would be an issue here, the subjects also reliably anticipated what position “people like them,” culturally speaking, would likely take.
This interpretation raises a second point on which my thinking has evolved: the external validity of public opinion studies of novel technologies.
This was (as Tamar’s excellent blog post on the readings as a whole discusses) a major theme of the readings. Basically, when pollsters ask people their views on technological risks about which members of the public have never heard and don’t have discussions about in their daily lives, they aren’t genuinely measuring a real-world phenomenon.
They are, in effect, modeling how people react to the strange experience of being asked questions about something they have not thought about. To pretend that one can draw inferences from that to what actual people in the world are truly thinking is flat-out bogus. Serious social science researchers know this is a mistake; news-maker and advocacy pollsters either don’t or don’t care.
One can of course try to anticipate how people, including ones with different cultural outlooks, might react to an emerging technology when they do learn about it. Indeed, I think that is a very sensible thing to do; the failure to make the effort can result in disaster, as it did in the case of the HPV vaccine!
But to perform what amounts to a risk-perception forecasting study, one must use an experimental design that it is reasonable to think will induce in subjects the reaction that people in the real world will form when they learn about the technology—or could form depending on how they learn about it. That is what one is trying to model.
A simple survey question—like the one Pew asked respondents about GM foods in its recent public attitudes study—cannot plausibly be viewed as doing that. The real-world conditions in which people learn things about a new technology will be much richer—much more dense with cues relating to the occasions for discussing an issue, the setting in which the discussion is being had, and the identity and perceived motivations of the information sources—than are accounted for in a simple survey question.
I think it is possible to do forecasting studies that reasonable people can reasonably rely on. I think our HPV vaccine risk study, e.g., which tried to model how people would likely react depending on whether they learned about the vaccine in conditions that exposed them to cues of group conflict or not, was like that.
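To illustrate the kind of comparison such a forecasting design supports, here is a hedged sketch (group labels, conditions, and data are illustrative only, not the HPV study’s actual materials or analysis): polarization shows up as the gap between cultural groups widening in the conflict-cue condition relative to the no-cue condition.

```python
# Hypothetical sketch of a forecasting design's logic: randomize subjects to
# learn about a technology with or without group-conflict cues, then see how
# far cultural groups diverge in each condition. Illustrative data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "group": rng.choice(["egal_communitarian", "hierarch_individualist"], n),
    "condition": rng.choice(["no_cue", "conflict_cue"], n),
})

# Simulated ratings embodying the prediction: groups diverge only when
# group-conflict cues accompany the information.
lean = np.where(df["group"] == "egal_communitarian", 0.5, -0.5)
cue = np.where(df["condition"] == "conflict_cue", 1.0, 0.1)
df["risk_rating"] = lean * cue + rng.normal(0, 1, n)

# Polarization per condition = absolute gap between the groups' mean ratings.
means = df.pivot_table(index="condition", columns="group", values="risk_rating")
gap = (means["egal_communitarian"] - means["hierarch_individualist"]).abs()
print(gap)  # expect a much larger gap under conflict_cue than no_cue
```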
But I think it is super hard to do it.
Frankly, I now don’t think our nanotechnology experiment design was sufficiently rich in the sorts of contextual background needed to model the likely circumstances in which people would form nanotechnology risk perceptions!
The study helped to show that the “familiarity hypothesis,” as we styled it, was simplistic. It also supported the inference that it was possible people might assimilate nanotechnology to the sorts of technological-risk controversies that now polarize members of different groups.
But the stimulus was too thin to be viewed as modeling the conditions in which that was actually likely to happen.
We should be mindful of hindsight bias, of course, but the fact that nanotechnology has not provoked any sort of cultural division in what is now approaching two decades of its use in commercial manufacturing helps show the limited strength of the inferences about the likelihood of conflict that can be drawn from experiments like the one we did.
As Tamar notes, we were careful in our study to point out that the experimental result didn’t imply that conflict over nanotechnology was “inevitable” or necessarily even “likely.”
But I myself am very willing—eager even—to acknowledge that we viewed the design we used as more informative about the likely career of nanotechnology than it could reasonably have been expected to be.
I have acknowledged this before in fact.
In doing so, too, I pointed out that that doesn’t mean studies like the ones we and other researchers did on nanotechnology risk perceptions weren’t or aren’t generally useful. It just means that the value people can get from those studies depends on researchers and readers forming a valid understanding of what designs of that sort are modeling and what they are not.
For that to happen, moreover, researchers must reflect on their own studies over time to see what the fit between them and experience tells them about what is involved in modeling real-world processes in a manner that is most supportive of real-world inferences.
Speaking for myself, at least, I acknowledge that, despite my best efforts, I cannot guarantee anyone I will always make the right assessment of the inferences that can be drawn from my studies. I can promise, though, that when I figure out that I didn’t, I’ll say so—not just to set the record straight but also to help enlarge understanding of the phenomena that it is in fact my goal to make sense of.
Of course, if a cultural conflagration over nanotechnology ignites in the future, I suppose I’ll have to acknowledge that the “me” I was then had a better grasp of things than the “me” I am now; I doubt that will happen—but life, thank goodness, is filled with surprises!