CCP/Annenberg PPC Science of Science Communication Lab, Session 2: Measuring relative curiosity

During my stay here at APPC, we’ll be having weekly “science of science communication lab” meetings to discuss our ongoing research projects.  I’ve decided to post a highlight or two from each meeting.

We just had the 2nd, which means I’m one behind.  I’ll post the “session 1 highlight” “tomorrow.”

One of the major projects for the spring is “Study 2” in the CCP/APPC Evidence-based Science Filmmaking Initiative.  For this session, we hosted two of our key science filmmaker collaborators, Katie Carpenter & Laura Helft, who helped us reflect on the design of the study.

One thing that came up during the session was the distribution of “science curiosity” in the general population.

The development of a reliable and valid measure of science curiosity—the “Science Curiosity Scale” (SCS_1.0)—was one of the principal objectives of Study 1.  As discussed previously, SCS worked great, not only displaying very healthy psychometric properties but also predicting with an admirable degree of accuracy engagement with a clip from Your Inner Fish, ESFI collaborator Tangled Bank Studio’s award-winning film on evolution.

Indeed, one of the coolest findings was that individuals who were comparably high in science curiosity (as measured by SCS) were comparably engaged by the clip (as measured by view time, request for the full documentary, and other indicators) irrespective of whether they said they “believed in” evolution.

Evolution disbelievers who were high in science curiosity also reported finding the clip to be an accurate and convincing account of the origins of human color vision.

But it’s natural to wonder: how likely is someone who disbelieves in evolution to be high in science curiosity?

The report addresses the distribution of science curiosity among various population subgroups.  The information is presented in a graphic that displays the mean SCS scores for opposing subgroups (men and women, whites and nonwhites, etc).

Scores on SCS (computed using Item Response Theory) are standardized. That is, the scale has a mean of 0, and units are measured in standard deviations.

The graphic, then, shows that in no case was any subgroup’s mean SCS score higher or lower than 1/4 of a standard deviation from the sample mean on the scale. The Report suggested that this was a reason to treat the differences as so small as to lack any practical importance.

Indeed, the graphic display was consciously selected to help communicate that.  Had the Report merely characterized the scores of subgroups as “significantly different” from one another, it would have risked provoking the Pavlovian form of inferential illiteracy that consists in treating “statistically significant” as in itself supporting a meaningful inference about how the world works, a reaction that is very very hard to deter no matter how hard one tries!

By representing the scores of the opposing groups in relation to the scale’s standard-deviation units on the y-axis, it was hoped that reflective readers would discern that the differences among the groups were indeed far too small to get worked up over—that all the groups, including the one whose members were above average in science comprehension (as measured by the Ordinary Science Intelligence assessment), had science curiosity scores that differed only trivially from the population mean (“less than 1/4 of a standard deviation”).

But as came up at the session, this graphic is pretty lame.

Even most reflective people don’t have good intuitions about the practical import of differences measured in fractions of a standard deviation.  Aside from noting that there’s not even a trace of difference between whites & nonwhites, readers can see that there are differences in science curiosity levels & still wonder exactly what they mean in practical terms.

So what might work better?

Why—likelihood ratios, of course! Indeed, when Katy Barnhart from APPC spontaneously (and adamantly) insisted that this would be a superior way to graph this data, I was really jazzed!

I’ve written several posts in the last yr or so on how useful likelihood ratios are for characterizing the practical or inferential weight of data.  In the previous posts, I stressed that LRs, unlike “p-values,” convey information on how much more consistent the observed data is with one rather than another competing study hypothesis.

Here LRs can aid practical comprehension by telling us the relative probabilities of observing members of opposing groups at any particular level of SCS.

In the graphics below, the distribution of science curiosity within opposing groups is represented by probability density distributions derived from the means and standard deviations of the groups’ SCS scores.

As discussed in previous posts, study hypotheses can be represented this way: because any study is subject to measurement error, a study hypothesis can be converted into a probability density distribution of “predicted study outcomes,” in which the mean is the predicted result and the standard error is the one associated with the measurement precision of the study instrument.

If one does this, one can determine the “weight of the evidence” that a study furnishes for one hypothesis relative to another by comparing how likely the observed study result was under each of the probability-density distributions of “predicted outcomes” associated with the competing hypotheses.

This value—which is simply the relative “heights” of the points on which the observed value falls on the opposing curves—is the logical equivalent of the Bayesian likelihood ratio, or the factor in proportion to which one should update one’s existing assessment of the probability of some hypothesis or proposition.

Here, we can do the same thing.  We know the mean and standard deviations for the SCS scores of opposing groups.  Accordingly, we can determine the relative likelihoods of members of opposing groups attaining any particular SCS score.
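To make the computation concrete, here’s a minimal Python sketch. The group means and SDs below are made up purely for illustration—the real values come from the Study 1 data—and the 90th-percentile cutoff assumes a standard normal scale:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """Height of the normal density with mean mu and SD sigma at x."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def likelihood_ratio(x, group_a, group_b):
    """Ratio of the two groups' density heights at SCS score x."""
    return normal_pdf(x, *group_a) / normal_pdf(x, *group_b)

# Hypothetical (mean, SD) pairs on the standardized SCS scale.
believers = (0.1, 1.0)
nonbelievers = (-0.1, 1.0)

score_90th = 1.28  # ~90th percentile of a standard normal
print(round(likelihood_ratio(score_90th, believers, nonbelievers), 2))
```

The likelihood ratio is just the relative “heights” of the two curves at the score of interest, so no integration is needed.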

An SCS score that places a person at the 90th percentile is about 1.7x more likely if someone is “above average” in science comprehension (measured by the OSI assessment) than if someone is below average.

There is a 1.4x greater chance that a person will score at the 90th percentile if that person is male rather than female, and a 1.5x greater chance that the person will do so if he or she has political outlooks to the “left” of center rather than the “right” on a scale that aggregates responses to a 5-point liberal-conservative ideology item and a 7-point party-identification item.

There is a comparable relative probability (1.3x) that a person will score in the 90th percentile of SCS if he or she is below average rather than above average in religiosity (as measured by a composite scale that combines response to items on frequency of prayer, frequency of church attendance, and importance of religion in one’s life).

A 90th-percentile score is about 2x as likely to be achieved by an “evolution believer” than by an “evolution nonbeliever.”

Accordingly, if we started with two large, equally sized groups of believers and nonbelievers and it just so turned out that there were 100 total from the two groups who had SCS scores in the 90th percentile for the general population, then we’d expect 66 to be evolution believers and 33 of them to be nonbelievers (1 would be a Pakistani Dr).

When I put things this way, it should be clear that knowing how much more likely any particular SCS score is for members of one group than members of another doesn’t tell us either how likely any group’s members are to attain that score or how likely a person with a particular score is to belong to any group!

You can figure that out, though, with Bayes’s Theorem.

If I picked out a person at random from the general population, I’d put the odds at about 11:9 that he or she “believes in” evolution, since about 45% of the population answers “false” when responding to the survey item “Human beings, as we know them, evolved from another species of animal,” the evolution-belief item we used.

If you told me the person was in the 90th percentile of SCS, I’d then revise upward my estimate by a factor of 2, putting the odds that he or she believes in evolution at 22:9, or about 70%.
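The update is just odds-form Bayes: multiply the prior odds by the likelihood ratio. Checking the arithmetic from the post:

```python
# Prior odds that a randomly chosen person "believes in" evolution:
# 55% believe vs. 45% don't, i.e. 11:9.
prior_odds = 55 / 45

# Learning the person is at the 90th percentile of SCS multiplies
# the odds by the likelihood ratio of about 2.
lr = 2
posterior_odds = prior_odds * lr  # 22:9

# Convert odds back to a probability.
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))  # -> 0.71, i.e. "about 70%"
```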

Or if I picked someone out at random from the population, I’d expect the odds to be 9:1 against that person scoring in the 90th percentile or higher. If I learned the individual was above average in science comprehension, I’d adjust my estimate of the odds upwards to 9:1.7 (about 16%); similarly, if I learned the individual was below average in science comprehension, I’d adjust my estimate downwards to 15.3:1 (about 6%).

Actually, I’d do something slightly more complicated than this if I wanted to figure out whether the person was in the 90th percentile or above.  In that case, I’d in fact start by calculating not the relative probability of members of the two groups attaining the 90th-percentile score but the relative probability of their scoring in the top 10% on SCS, and use that as my likelihood ratio, or the factor by which I update my prior of 9:1. But you get the idea — give it a try!
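For the “top 10%” version, the likelihood ratio is a ratio of tail probabilities rather than of density heights. A sketch, again with made-up group parameters on the standardized SCS scale:

```python
from math import erfc, sqrt

def normal_sf(x, mu=0.0, sigma=1.0):
    """P(X >= x) for a normal variable -- the upper-tail probability."""
    return 0.5 * erfc((x - mu) / (sigma * sqrt(2)))

# Hypothetical (mean, SD) pairs -- not the actual Study 1 estimates.
above_avg = (0.2, 1.0)   # above-average science comprehension
below_avg = (-0.2, 1.0)  # below-average science comprehension

cutoff = 1.28  # ~90th percentile of the general-population distribution

# LR for "scores in the top 10%": ratio of the two groups' chances of
# landing anywhere above the cutoff, not of the heights at the cutoff.
lr_tail = normal_sf(cutoff, *above_avg) / normal_sf(cutoff, *below_avg)
print(round(lr_tail, 2))
```

With these toy numbers the tail-based ratio comes out a bit larger than the density-height ratio at the cutoff, which is why the two calculations shouldn’t be conflated.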

So, then, what to say?

I think this way of presenting the data does indeed give more guidance to a reflective person to gauge the relative frequency of science curious individuals across different groups than does simply reporting the mean SCS scores of the group members along with some measure of the precision of the estimated means—whether a “p-value” or a standard error or a 0.95 CI.

It also equips a reflective person to draw his or her own inferences as to the practical import of such information.

I myself still think the differences in the science curiosity of members of the indicated groups, including those who do and don’t believe in evolution, are not particularly large and definitely not practically meaningful.

But actually, after looking at the data, I do feel that there’s a bigger disparity in science curiosity than there should be among citizens who do & don’t “believe in” evolution.  A bigger one than there should be among men & women too.  Those differences, even though small, make me anxious that there’s something in the environment–the science communication environment–that might well be stifling development of science curiosity across groups.

No one is obliged to experience the wonder and awe of what human beings have been able to learn through science!

But everyone in the Liberal Republic of Science deserves an equal chance to form and satisfy such a disposition in the free exercise of his or her reason.

Obliterating every obstacle that stands in the way of culturally diverse individuals achieving that good is the ultimate aim of the project of which ESFI is a part.
