Even *more* Q & A on "cultural cognition scales" -- measuring "latent dispositions" & the Dake alternative
Here is a set of reflections in response to an email inquiry from a thoughtful person who wanted to understand what it means to treat the cultural worldview scales as “latent” measures of cultural dispositions, and why we—my collaborators & I in the Cultural Cognition Project—thought it necessary to come up with alternatives to the scales that Karl Dake initially formulated to test hypotheses relating to Douglas & Wildavsky’s “cultural theory of risk.” For elaboration, see Kahan, Dan M. "Cultural Cognition as a Conception of the Cultural Theory of Risk." Chap. 28 in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk, edited by R. Hillerbrand, P. Sandin, S. Roeser and M. Peterson, 725-60. London: Springer, 2012.
Question: What do you mean when you say the "cultural cognition worldview scales" measure a "latent variable"? And that they "work better" than Dake's scales in this regard?
(A) Let's hypothesize that there is inside each member of a group an unobserved & unobservable thing -- which we'll call that group's cultural predisposition -- that interacts with the mental faculties and processes by which that person processes information in a way that tends to bring his or her perceptions of risk into alignment with those of every other member of the group. This would be an explanation (or part of one, at least) for "the science communication problem"-- the failure of valid, compelling, widely available scientific evidence to resolve political conflict over risks and other facts to which that evidence speaks.
(B) Although we can't observe cultural dispositions directly, we might still be able to make valid inferences about their existence & nature by identifying observable things that we would expect to correlate with them if the predispositions exist and if they have the nature that we might hypothesize they do. We had reason to believe that atoms existed long before they were "seen" under a scanning tunneling microscope because Einstein demonstrated that their existence would very precisely explain the observable (and until then very mysterious!) phenomenon of Brownian motion (in fact, we only "see" atoms with an ST microscope b/c we accept that the observable images they produce are best explained by atoms, which of course remain unobservable no matter what apparatus we use to "look" at them). Similarly, we might treat certain patterns of responses among a group's members as evidence that the predispositions exist and behave a certain way if such conclusions furnish a more likely explanation for those patterns than other potential causes and if we would not expect to see the patterns otherwise. Within psychology, this is known as a "latent variable" measurement strategy, in which "manifest" or observable "indicators"--here the patterns of responses -- are used to measure a posited "latent" or unobserved variable --"cultural predispositions" in our case.
(C) That's what the items in our cultural worldview scales are -- indicators of the latent cultural predispositions that we hypothesize explain the science communication problem. The scales reflect a theory that people would not be expected to respond to the statements that compose the items in patterns that sort individuals out along two continuous, cross-cutting dimensions unless people had "inside" of them group predispositions that correspond to "hierarchy individualism," "hierarchy communitarianism," "egalitarian individualism," and "egalitarian communitarianism." On this view, responses are understood to be "caused" by the predispositions. The causal influence is only crudely understood and thus only imprecisely measured by each item; the whole point of having multiple items is to aggregate responses to them, a process that will make the "noise" associated with their imprecision balance or cancel out & thus magnify the "signal" associated with them. The resulting scales can be viewed as "measuring" the intensity of the unobserved predispositions.
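The "aggregation magnifies the signal" point can be sketched with a small simulation. Everything here is invented for illustration (the latent variable, the six items, the noise levels are hypothetical, not CCP data): each simulated item is the unobservable disposition plus a lot of item-specific noise, and the mean of the items tracks the disposition much better than any single item does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent disposition for 5,000 simulated respondents.
# In a real study this quantity is, by definition, unobservable.
n = 5000
latent = rng.normal(size=n)

# Six noisy "indicator" items: each reflects the disposition only crudely.
k = 6
items = latent[:, None] + rng.normal(scale=2.0, size=(n, k))

# How well does one item track the disposition?
r_single = np.corrcoef(items[:, 0], latent)[0, 1]

# How well does the aggregate scale (mean of all items) track it?
scale = items.mean(axis=1)
r_scale = np.corrcoef(scale, latent)[0, 1]

print(f"single item r = {r_single:.2f}, aggregated scale r = {r_scale:.2f}")
```

Averaging works because the item-specific noise terms are independent and so partially cancel, while the common "signal" (the disposition) is shared by every item and survives the averaging.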
(D) For this strategy for "observing" or "measuring" cultural predispositions to be valid, various things must be true. The most basic one is that the items assigned to the scales must "perform" as the underlying theory posits. The responses to them must correlate with each other in ways that generate the pattern one would expect if they are indeed "measuring" the cultural predispositions. If the items correlate in some other pattern, the scales are not a "valid" measure of the posited dispositions. If they correlate in the expected pattern, but the correlations are very weak, then the scales can be viewed as "unreliable," which refers to the degree of precision with which an instrument measures whatever quantity it is supposed to be measuring (imagine that your bathroom scale had some sort of defect and as a result gave readings that erratically over- or underestimated people's weight; it wouldn't be very reliable in that case).
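One standard way to check whether items placed on the same scale correlate with one another strongly enough is Cronbach's alpha. The sketch below (again, simulated data, not any actual scale) contrasts a set of items that all reflect one common disposition with a set of items that share nothing in common: the first yields a high alpha, the second an alpha near zero.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
n, k = 2000, 6
latent = rng.normal(size=n)

# Items that all reflect the same latent disposition (plus noise): high alpha.
coherent = latent[:, None] + rng.normal(scale=1.0, size=(n, k))

# Items that are pure noise, measuring nothing in common: alpha near zero.
incoherent = rng.normal(size=(n, k))

a_hi = cronbach_alpha(coherent)
a_lo = cronbach_alpha(incoherent)
print(f"coherent items alpha = {a_hi:.2f}, unrelated items alpha = {a_lo:.2f}")
```

A low alpha is exactly the "unreliable bathroom scale" situation in the text: the instrument's readings are dominated by erratic noise rather than by the quantity it is supposed to measure.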
(E) The Dake scales did not perform well. They were not reliable; the items didn't correlate with *one another* as one would expect if the ones that were placed in the same scale were measuring the same thing. Moreover, to the extent that they seemed to be measuring things "inside" people, those things did not fit expectations one would form about their relationship under the theory posited by the "cultural theory of risk."
(F) Once one has valid & reliable scales, one does not yet have evidence that cultural predispositions explain the science communication problem. Rather, one has measures of what one is prepared to regard as cultural predispositions. At that point, one must devise studies geared to generating correlations between the predispositions, as measured by the valid and reliable scales, and risk perceptions, as measured in some appropriate way. Those correlations must be of a sort that one would expect to see if the predispositions are causing risk perceptions in the way one hypothesizes but would not expect to see otherwise.