Key Insight
During my stay here at APPC, we’ll be having weekly “science of science communication lab” meetings to discuss our ongoing research projects. I’ve decided to post a highlight or two from each meeting.
We just had the 2nd, which means I’m one behind. I’ll post the “session 1 highlight” “tomorrow.”
One of the major projects for the spring is "Study 2" in the CCP/APPC Evidence-based Science Filmmaking Initiative. For this session, we hosted two of our key science filmmaker collaborators, Katie Carpenter & Laura Helft, who helped us reflect on the design of the study.
One thing that came up during the session was the distribution of “science curiosity” in the general population.
The development of a reliable and valid measure of science curiosity—the "Science Curiosity Scale" (SCS_1.0)—was one of the principal objectives of Study 1. As discussed previously, SCS worked great, not only displaying very healthy psychometric properties but also predicting with an admirable degree of accuracy engagement with a clip from Your Inner Fish, ESFI collaborator Tangled Bank Studio's award-winning film on evolution.
Indeed, one of the coolest findings was that individuals who were comparably high in science curiosity (as measured by SCS) were comparably engaged by the clip (as measured by view time, request for the full documentary, and other indicators) irrespective of whether they said they "believed in" evolution.
Evolution disbelievers who were high in science curiosity also reported finding the clip to be an accurate and convincing account of the origins of human color vision.
But it’s natural to wonder: how likely is someone who disbelieves in evolution to be high in science curiosity?
The Report addresses the distribution of science curiosity among various population subgroups. The information is presented in a graphic that displays the mean SCS scores of opposing subgroups (men and women, whites and nonwhites, etc.).
Scores on SCS (computed using Item Response Theory) are standardized. That is, the scale has a mean of 0, and units are measured in standard deviations.
The graphic, then, shows that in no case was any subgroup’s mean SCS score higher or lower than 1/4 of a standard deviation from the sample mean on the scale. The Report suggested that this was a reason to treat the differences as so small as to lack any practical importance.
Indeed, the graphic display was consciously selected to help communicate that. Had the Report merely characterized the scores of subgroups as "significantly different" from one another, it would have risked provoking the Pavlovian form of inferential illiteracy that consists in treating "statistically significant" as in itself supporting a meaningful inference about how the world works, a reaction that is very very hard to deter no matter how hard one tries!
By representing the scores of the opposing groups in relation to the scale's standard-deviation units on the y-axis, it was hoped that reflective readers would discern that the differences among the groups were indeed far too small to get worked up over—that all the groups, including the one whose members were above average in science comprehension (as measured by the Ordinary Science Intelligence assessment), had science curiosity scores that differed only trivially from the population mean ("less than 1/4 of a standard deviation—SEE???").
But as came up at the session, this graphic is pretty lame.
Even most reflective people don't have good intuitions about the practical import of differences measured in fractions of a standard deviation. Aside from seeing that there's not even a trace of difference between whites & nonwhites, readers can still see that there are differences in science curiosity levels and still wonder exactly what they mean in practical terms.
What's the remedy? Why, likelihood ratios, of course! Indeed, when Katy Barnhart from APPC spontaneously (and adamantly) insisted that this would be a superior way to graph this data, I was really jazzed!
I've written several posts in the last year or so on how useful likelihood ratios are for characterizing the practical or inferential weight of data. In those posts, I stressed that LRs, unlike "p-values," convey information on how much more consistent the observed data is with one rather than another competing study hypothesis.
Here LRs can aid practical comprehension by telling us the relative probabilities of observing members of opposing groups at any particular level of SCS.
In the graphics below, the distribution of science curiosity within opposing groups is represented by probability density distributions derived from the means and standard deviations of the groups’ SCS scores.
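To make the idea concrete, here's a minimal sketch (with made-up numbers, not the actual Report figures): suppose two opposing subgroups have mean SCS scores 1/4 of a standard deviation above and below the population mean, the outer bound of the differences described above. The likelihood ratio at any given SCS score is just the ratio of the two groups' normal densities at that point:

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution with the given mean & sd, evaluated at x."""
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

# Hypothetical subgroup means: 1/4 SD above & below the population mean of 0
mean_hi, mean_lo, sd = 0.25, -0.25, 1.0

# How much more likely is someone with SCS = +1 (a full SD above average)
# to be observed in the higher-curiosity group than in the lower one?
scs = 1.0
lr = normal_pdf(scs, mean_hi, sd) / normal_pdf(scs, mean_lo, sd)
print(round(lr, 2))  # ≈ 1.65
```

Even a full standard deviation above the population mean, that is, the odds favor the "higher" group only about 1.65:1, a far more intuitive way to see how little practical weight a 1/4-SD gap carries than staring at y-axis units.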
As discussed in previous posts, study hypotheses can be represented this way: because any study is subject to measurement error, a study hypothesis can be converted into a probability density distribution of "predicted study outcomes," in which the mean is the predicted result and the standard error reflects the measurement precision of the study instrument.
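The same arithmetic works for weighing competing hypotheses against an observed result. A toy sketch, with all numbers hypothetical rather than drawn from Study 1: if hypothesis H1 predicts an effect of 0.0 and H2 predicts 0.5, each measured with a standard error of 0.2, then an observed result of 0.4 is several times more consistent with H2:

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution with the given mean & sd, evaluated at x."""
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

# Hypothetical predicted outcomes and measurement precision
h1_pred, h2_pred, se = 0.0, 0.5, 0.2
observed = 0.4

# Likelihood ratio: how much more consistent is the observed result with H2 than H1?
lr = normal_pdf(observed, h2_pred, se) / normal_pdf(observed, h1_pred, se)
print(round(lr, 1))  # ≈ 6.5
```

On these assumed numbers, the data favor H2 over H1 by roughly 6.5:1, which says something concrete about inferential weight that "p < 0.05" simply doesn't.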