Thursday, October 10, 2013

Mooney's revenge?! Is there "asymmetry" in Motivated Numeracy?

Just when I thought I finally had gotten the infernal "asymmetry thesis" (AT) out of my system once and for all, this hobgoblin of the science communication problem has re-emerged with all the subtlety and charm of a bad case of shingles.

AT, of course, refers to the claim that ideologically motivated reasoning (of which cultural cognition is one species or conception) is not "symmetric" across the ideological spectrum (or cultural spectra) but rather is concentrated in individuals of a right-leaning or conservative (or, in cultural cognition terms, "hierarchical") disposition.

It is most conspicuously associated with the work of the accomplished political psychologist John Jost, who finds support for it in the correlation between conservatism and various self-report measures of "dogmatic" thinking. It is also the animating theme of Chris Mooney's The Republican Brain, which presents an elegant and sophisticated synthesis of the social science evidence that supports it.

I don't buy AT. I've explained why 1,312 times in previous blogs, but basically AT doesn't cohere with the best theory of politically motivated reasoning and is not supported by -- indeed, is at odds with -- the best evidence of how this dynamic operates.

The best theory treats politically motivated reasoning as a form of identity-protective cognition.

People have a big stake--emotionally and materially--in their standing in affinity groups consisting of individuals of like-minded goals and outlooks. When positions on risks or other policy-relevant facts become symbolically identified with membership in and loyalty to those groups, individuals can thus be expected to engage all manner of information--from empirical data to the credibility of advocates to brute sense impressions--in a manner that aligns their beliefs with the ones that predominate in their group.

The kinds of affinity groups that have this sort of significance in people's lives, however, are not confined to "political parties."  People will engage information in a manner that reflects a "myside" bias in connection with their status as students of a particular university and myriad other groups important to their identities.

Because these groups aren't either "liberal" or "conservative"--indeed, aren't particularly political at all--it would be odd if this dynamic would manifest itself in an ideologically skewed way in settings in which the relevant groups are ones defined in part by commitment to common political or cultural outlooks.

The proof offered for AT, moreover, is not convincing. Jost's evidence, for example, doesn't consist in motivated-reasoning experiments, any number of which (like the excellent ones of Jarret Crawford and his collaborators)  have reported findings that display ideological symmetry.

Rather, it is based on correlations between political outlooks and self-report measures of "open-mindedness," "dogmatism" & the like.

These measures--ones that consist, literally, in people's willingness to agree or disagree with statements like "thinking is not my idea of fun" & "the notion of thinking abstractly is appealing to me"--are less predictive of the disposition to critically interrogate one's impressions based on available information than objective or performance-based measures like the Cognitive Reflection Test and Numeracy.  And these performance-based measures don't meaningfully correlate with political outlooks.

In addition, while there is plenty of evidence that the disposition to engage in reflective, critical reasoning predicts resistance to a wide array of cognitive bias, there is no evidence that these dispositions predict less vulnerability to politically motivated reasoning.

On the contrary, there is mounting evidence that such dispositions magnify politically motivated reasoning. If the source of this dynamic is the stake people have in forming beliefs that are protective of their status in groups, then we might expect people who know more and are more adept at making sense of complex evidence to use these capacities to promote the goal of forming identity-protective beliefs.

CCP studies showing that cultural polarization on climate change and other contested risk issues is greater among individuals who are higher in science comprehension, and that individuals who score higher on the Cognitive Reflection Test are more likely to construe evidence in an ideologically biased pattern, support this view.

The Motivated Numeracy experiment furnishes additional support for this hypothesis. In it, we instructed subjects to perform a reasoning task--covariance detection--that is known to be a highly discerning measure of the ability and disposition of individuals to draw valid causal inferences from data.

We found that when the problem was styled as one involving the results of an experimental test of the efficacy of a new skin-rash treatment, individuals who score highest in Numeracy--a measure of the ability to engage in critical reasoning on matters involving quantitative information--were much more likely to correctly interpret the data than those who had low or modest Numeracy scores.

But when the problem was styled as one involving the results of a gun-control ban, those subjects highest in Numeracy did better only when the data presented supported the result ("decreases crime" or "increases crime") that prevails among persons with their political outlooks (liberal Democrats and conservative Republicans, respectively). When the data, properly construed, threatened to trap them in a conclusion at odds with their political outlooks, the high-Numeracy subjects either succumbed to a tempting but logically specious response to the problem or worked extra hard to pry open some ad hoc, confabulatory escape hatch.

As a result, higher-Numeracy subjects ended up even more polarized when considering the same data -- data that in fact objectively supported one position more strongly than the other -- than subjects who were less adept at making sense of empirical information.
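The covariance-detection task and the "tempting but logically specious" response it invites can be illustrated with a toy 2x2 table. The figures below are illustrative only, not the study's actual stimulus:

```python
# Illustrative covariance-detection problem (hypothetical numbers):
#
#                        outcome A   outcome B
#   treatment group         223          75
#   control group           107          21

treated_a, treated_b = 223, 75
control_a, control_b = 107, 21

# Tempting but specious heuristic: compare raw counts in one column.
# 223 > 107 "suggests" the treatment produces outcome A more often.
specious_pick = "treatment" if treated_a > control_a else "control"

# Correct approach: compare the *rate* of outcome A within each row.
rate_treated = treated_a / (treated_a + treated_b)   # about 0.748
rate_control = control_a / (control_a + control_b)   # about 0.836

correct_inference = "treatment" if rate_treated > rate_control else "control"
print(specious_pick, correct_inference)  # the two answers disagree
```

The table is constructed so that the column-count heuristic and the row-rate comparison point in opposite directions, which is exactly what makes the task a discerning measure of critical engagement with data.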

But ... did this result show an ideological asymmetry?!

Lots of people have been telling me they see this in the results. Indeed, one place where they are likely to do so is in workshops (vettings of the paper, essentially, with scholars, students and other curious people), where someone will almost invariably say, "Hey, wait! Aren't conservative Republicans displaying a greater 'motivated numeracy' effect than liberal Democrats? Isn't that contrary to what you said you found in x paper? Have you called Chris Mooney and admitted you were wrong?"

At this point, I feel like I'm talking to a roomful of people with my fly open whenever I present the paper!

In fact, I did ask Mooney what he thought -- as soon as we finished our working paper.  I could see how people might view the data as displaying an asymmetry and wondered what he'd say.

His response was "enh."

He saw the asymmetry, he said, but told me he didn't think it was all that interesting in relation to what the study suggested was the extent of the vulnerability of all the subjects, regardless of their political outlooks, to a substantial degradation in reasoning when confronted with data that disappointed their political predispositions--a point he then developed in an interesting Mother Jones commentary.

That's actually something I've said in the past, too--that even if there were an "asymmetry" in politically motivated reasoning, it's clear that the problem is more than big enough for everyone to be a serious practical concern.

Well, the balanced, reflective person that he is, Mooney is apparently able to move on, but I, in my typical OCD-fashion, can't...

Is the asymmetry really there? Do others see it? And how would they propose that we test what they think they see so that they can be confident their eyes are not deceiving them?

The location of the most plausible sighting--and the one where most people point it out--is in Figure 6, which presents a lowess plot of the raw data from the gun-control condition of the experiment:

What this shows, essentially, is that the proportion of the subjects (about 800 of them total) who correctly interpreted the data was a function of both Numeracy and political outlook. As Numeracy increases, the proportion of subjects selecting the correct answer increases dramatically but only when the correct answer is politically congenial ("decreases crime" for liberal Democrats, and "increases crime" for conservative Republicans; subjects' political outlooks here are determined based on the location of their score in relation to the mean on a continuous measure that combined "liberal-conservative" ideology & party identification).

But is there a difference in the pattern for liberal Democrats, on the one hand, and conservative Republicans, on the other?

Those who see the asymmetry tend to point to the solid black circle. There, in the middling range of Numeracy, conservative Republicans display a difference in their likelihood of getting the correct answer based on which experiment condition ("crime increases" vs. "crime decreases"), but liberal Democrats don't.

A ha! Conservative Republicans are displaying more motivated reasoning!

But consider the dashed circle to the right.  Now we can see that conservative Republicans are becoming slightly more likely to interpret the data correctly in their ideologically uncongenial condition ("crime decreases") -- whereas liberal Democrats aren't budging in theirs ("crime increases").  

A ha^2! Liberal Democrats are showing more motivated Numeracy--the disposition to use quantitative reasoning skills in an ideologically selective way!

Or we are just looking at noise.  The effects of an experimental treatment will inevitably be spread out unevenly across subjects exposed to it.  If we split the sample up into parts & scrutinize the effect separately in each, we are likely to mistake random fluctuations in the effect for real differences in effect among the groups so specified.

For that reason, one fits to the entire dataset a statistical model that assumes the treatment has a particular effect--one that informed the experiment hypothesis.  If the model fits the real data well enough (as reflected in conventional standards like p < 0.05), then one can treat what one sees -- if it looks like what one expected -- as a corroboration of the study prediction.

We fit a multivariate regression model to the data that assumed the impact of politically motivated reasoning (reflected in the difference in likelihood of getting the answer correct conditional on its ideological congeniality) would increase as subjects' Numeracy increases. The model fit the data quite well, and thus, for us, corroborated the pattern we saw in Figure 6, which is one in which politically motivated reasoning and Numeracy interact in the manner hypothesized.
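A model of this general kind can be sketched as follows. This is not the paper's actual specification (the real model uses the continuous political-outlook measure and more terms); the data and variable names here are invented for illustration:

```python
# Sketch of a logit in which the effect of data congeniality on the odds
# of a correct answer grows with Numeracy. Simulated data, assumed names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "numeracy": rng.normal(0, 1, n),      # standardized Numeracy
    "gun": rng.integers(0, 2, n),         # 1 = gun-ban condition
    "congenial": rng.integers(0, 2, n),   # 1 = congenial result shown
})

# Build in the hypothesized (symmetric) interaction: in the gun conditions,
# Numeracy helps only when the correct answer is congenial.
eta = (-0.3 + 0.3 * df.numeracy
       + 1.2 * df.numeracy * df.gun * (2 * df.congenial - 1))
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

model = smf.logit("correct ~ numeracy * gun * congenial", data=df).fit(disp=0)
# The three-way interaction is the quantity of interest: it estimates how
# much the congeniality effect on accuracy grows per unit of Numeracy.
print(model.params["numeracy:gun:congenial"])
```

Because the simulation builds the interaction in, the fitted three-way coefficient comes back reliably positive; with real data, that coefficient (and its p-value) is what "the model fit the data quite well" refers to.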

The significance of the model is hard to extract from the face of the regression table that reports it, but here is a graphical representation of what the model predicts we should see among subjects of different political outlooks and varying levels of Numeracy in the various experimental conditions:

The "peaks" of the density distributions are, essentially, the point estimates of the model, and the slopes of the curves (their relative surface area, really) a measure of the precision of those estimates.

The results display Motivated Numeracy: assignment to the "gun control" conditions creates political differences in the likelihood of getting the right answer relative to the assignment to the "skin treatment" conditions; and the size of those differences increases as Numeracy increases.

Now you might think you see asymmetry here too!  As was so for the figure depicting the raw data, this Figure suggests that low-Numeracy conservative Republicans' performance is more sensitive to the experimental assignment. But unlike the raw-data lowess plot, the plotted regression estimates suggest that the congeniality of the data had a bigger impact on the performance of higher-Numeracy conservative Republicans, too!

But this is not a secure basis for inferring asymmetry in the data.  

As I indicated, the model that generated these predicted probabilities included parameters that corresponded to the prediction that political outlooks, Numeracy, and experimental condition would all interact in determining the probability of a correct response.  The form of the model assumed that the interaction of Numeracy and political outlooks would be uniform or symmetric.

The model did generate predictions in which the difference in the impact of politically motivated reasoning was different for conservative Republicans and liberal Democrats at low and high levels of Numeracy.

But that difference is attributable -- necessarily -- to other parameters in the model, including the point along the Numeracy scale at which the probability of the correct answer changes dramatically (the shape of the "sigmoid" function in a logit model), and the tendency of all subjects, controlling for ideology, to get the right answer more often in the "crime increases" condition.
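The point that a symmetric (linear-in-log-odds) model can still produce asymmetric-looking probability effects can be seen with a toy calculation. The coefficients below are made up purely for illustration:

```python
# The same log-odds shift moves predicted probabilities by different
# amounts depending on where the baseline sits on the sigmoid.
import math

def p(logit):
    """Logistic (sigmoid) function: log-odds -> probability."""
    return 1 / (1 + math.exp(-logit))

shift = 1.0   # identical log-odds effect of congeniality for everyone

# Different baselines, e.g. from a main effect of the "crime increases"
# condition that applies to all subjects regardless of ideology.
left_base, right_base = 2.0, 0.0

left_effect = p(left_base + shift) - p(left_base)     # about 0.072
right_effect = p(right_base + shift) - p(right_base)  # about 0.231
print(left_effect, right_effect)
```

So a threefold difference in the probability-scale "effect" between the two groups can fall out of a model whose congeniality effect is, in log-odds terms, exactly the same for both.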

I'm not saying that the data from the experiment don't support AT.  

I'm just saying that to support the inference that it does, one would have to specify a statistical model that reflected the hypothesized asymmetry and see whether it fits the data better than the one that we used, which assumes a uniform or symmetric effect.

I'm willing to fit such a model to the data and report the results.  But first, someone has to tell me what that model is!  That is, they have to say, in conceptual terms, what sort of asymmetry they "see" or "predict" in this experiment, and what sort of statistical model reflects that sort of pattern.

Then I'll apply it, and announce the answer! 

If it turns out there is asymmetry here, the pleasure of discovering that the world is different from what I thought will more than offset any embarrassment associated with my previously having announced a strong conviction that AT is not right.

So-- have at it!  

To help you out, I've attached a slide show that sketches out seven distinct possible forms of asymmetry.  So pick one of those, or, if you think there is another, describe it.  Then tell me what sort of adjustment to the regression model we used in Table 1 would capture an asymmetry of that sort (if you want to say exactly how the model should be specified, great, but also fine to give me a conceptual account of what you think the model would have to do to capture the specified relationship between Numeracy, political outlooks, and the experimental conditions).
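To fix ideas, here is a minimal sketch of what one such adjusted specification could look like: let the Numeracy x congeniality interaction take a different slope on each side of the political scale, and test the added terms against the symmetric model. This is not the Table 1 specification; the data and names are invented, and dichotomizing the continuous outlook measure is only the simplest (and, as noted in the comments, riskiest) way to express "different slopes on each side":

```python
# Hedged sketch: symmetric vs. asymmetric motivated-numeracy models,
# compared with a likelihood-ratio test. All data simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    "numeracy": rng.normal(0, 1, n),
    "conservrepub": rng.normal(0, 1, n),
    "congenial": rng.integers(0, 2, n),
})
df["right"] = (df.conservrepub > 0).astype(int)

# Simulate a genuinely asymmetric effect: stronger motivated numeracy
# among subjects right of center.
eta = (-0.2 + 0.3 * df.numeracy
       + (0.4 + 1.0 * df.right) * df.numeracy * (2 * df.congenial - 1))
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(int)

sym = smf.logit("correct ~ numeracy * congenial", data=df).fit(disp=0)
asym = smf.logit("correct ~ numeracy * congenial * right", data=df).fit(disp=0)

# LR statistic for the added asymmetry terms (chi-square under the
# symmetric null, df = difference in number of parameters).
lr_stat = 2 * (asym.llf - sym.llf)
print(round(lr_stat, 2))
```

A smoother alternative, closer to what the post's author suggests in the comment thread, would replace the `right` indicator with a curvilinear (e.g., quadratic) term in the continuous conservrepub measure.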

Of course, the winner(s) will get a great prize!  Winning, moreover, doesn't consist in confirming or refuting AT; it consists only in figuring out a way to examine this data that will deepen our insight.

In empirical inquiry, it's not whether your hypothesis is right or wrong that matters; it's how you extract a valid inference from observation that makes it possible to learn something.

Click on this -- and you too will go insane!


Reader Comments (28)

Do you also have to wander around in 'model space' to see which models, realistic or not, best fit the data and determine whether the differences in fit are statistically significant? For instance the data might be fit equally well by both a 'realistic' model and an 'unrealistic' model. If that happens there would need to be some rational reason for choosing between the models. For instance, in polynomial fits to data, a person can always increase the power of the polynomial and get a better 'fit', but often this better statistical fit does not reflect any known processes and can be discarded, at minimum by Ockham's razor. In your example, I am not smart enough to know what a 'realistic' or 'unrealistic' model might be nor do I know the basis for assigning credibility scores to each model.

October 10, 2013 | Unregistered CommenterEric Fairfield

If the asymmetry thesis is correct -- and I don't really think it is but acknowledge it could be -- I'd expect to see something like graph 3: the same basic mechanism is at work in both groups (a very sharp incongruity seems implausible), just more so for one than the other. And the analysis you've already performed does evince motivated numeracy, so that would be the mechanism.

A first pass at thinking about how to test this (or, essentially, how to replicate graph 3 if it's derivable from the data):

It might make sense to code a variable for "culturally conducive crime data," where CCCD = 1 iff ( (ZConserv_Repub > 0 & crime_increases == 1) | (ZConserv_Repub < 0 & crime_decreases == 1)). Do you think that variable is reasonable or objectionable?
Also, for convenience, rash = 1 iff rash_increases == 1 | rash_decreases == 1

Then set up a regression that includes:
rash
Znumeracy (^2?)
Conserv_Repub
crime_increases
Znumeracy(^2) x rash
Znumeracy(^2) x Conserv_Repub x rash
Znumeracy(^2) x CCCD
Znumeracy(^2) x CCCD x Conserv_Repub

If I'm not missing anything (I probably am), the coefficient on the last interaction variable should suggest the difference in slope between the two lines in graph 3, where you choose appropriate red and blue values for Conserv_Repub.

Since all of the information there is already in your model in some form, I'm a little unclear on how this could possibly "fit[] the data better" than the existing model. Also, I think I may misunderstand what you mean by the original model assuming symmetry. The proposed model doesn't differ much from what you have there, but I wouldn't say it assumes symmetry, so perhaps I'm unclear on that.

October 10, 2013 | Unregistered CommenterMW

@Eric:

One could easily create a better fitting model just by doing something like fractional polynomial regression or some other "overfitting" technique. W/ enough tinkering, one could have a model that basically plays "connect the dots" w/ the observations. The point here isn't to find the "best fitting" model; it's to specify a model that appropriately tests whether the data furnish *more* support for one hypothesis than a competing one.

If the effect of the experimental assignment in biasing the subjects' analysis of the results gets stronger as their outlooks move "left" or "right," then there should be a reasonably straightforward way to represent that mathematically. If that model explains significantly more of the experimental effect, then fine -- that would count in favor of asymmetry.

October 11, 2013 | Unregistered Commenterdmk38

@MW:

Conservrepub x [Experimental condition] or Conservrepub x Numeracy x [Experimental condition] necessarily models the impact of political outlooks as "linear" in "logit space." The "effect" is "symmetrical" -- the same -- whatever direction one is moving along the continuous variable Conservrepub.

Because the model is a logit, there can be nonlinear effects in the probability of the outcome being "1" or "0" as the values of predictors move the outcome toward or away from the sigmoid portion of the curve. But that sort of effect won't be attributable to "conservatives being more closed minded" than liberals. It will be a feature of the aspects of the model that are independent of Conservrepub.

I'll have to think about "CCCD."

October 11, 2013 | Unregistered Commenterdmk38

The most obvious alternative explanation is that the asymmetry is in the cultural baggage associated with the issue. Republicans and Democrats may not feel equally strongly about gun control, or not in the same ways. The influence on crime may not be an equally important justification for either side. There might be different levels of awareness on each side over what the right answer actually is. This stuff doesn't matter from the point of view of showing that there is an effect, but it's critical if you want to try to quantify it.

As I have said previously, the effect can potentially be explained not only by political motivation - the discomfort at a conclusion that conflicts with political beliefs - but also by differing prior knowledge/belief. As I said when you first presented these results, mathematics is hard and people are lazy. If people already know the answer (or don't trust their own ability to do mathematics) they will tend to pick the "obvious" answer. If the "obvious" answer looks wrong, either because they already know what the answer is, or because it would have some sort of crazy implication, then they'll examine their own thinking more closely. Some may simply substitute what they know to be the answer without thinking. Others may be motivated to do the calculation, or to take greater care over the calculation and logic. We can't tell which they're doing on the data presented.

So on gun control specifically, we know that preventing gun crime is the major motivator on the left, while constitutional rights, liberty, self-defence, the balance of power between state and citizen, and it simply being a fun and enjoyable hobby are all issues for the right. The cases are not symmetric. The left does not generally advocate gun control because they want to spoil the right-winger's fun, or because they think the citizen should be helpless before the state (although some do). For the left, gun crime is *the* argument, while for the right, it's only a matter of countering the left's argument. The left have heard the statistics proving that gun control is desperately needed to prevent crime. People on the right might not have. They might not care as much, seeing it as an acceptable risk. It's not inconceivable that this could lead to differences in response.

It seems to me like that would be impossible to disentangle. People both believe things are facts because of their political beliefs, and hold their political beliefs because of what they consider to be facts. And it's much more the latter than the former, or at least, so political partisans think. You don't believe gun control is good *because* you're a Democrat and they tell you to. You believe gun control is good because you want to prevent gun crime and think that would work, and *therefore* support the Democrats as the party most likely to enact your favoured policies. To test which came first, you would need to find issues where people didn't hold views on the *facts*, they only held the view because they knew their party did. Some of the more abstruse economics issues might fit the bill - a lot of people don't understand why, or even what a lot of the terms mean. But even so, that would be difficult to control.

As I said previously, I'd be more interested in a control experiment in which people were asked the same question on a topic where they already thought they knew the answer, but which was not politically controversial. Do people still tend to all pick the "obvious" answer when it conforms to their expectations, and only the most numerate switch to the unobvious but correct answer when it does not?
If so, the question of politically-divided cognitive asymmetry would be bypassed. It becomes a question of politically-divided information asymmetry instead.

It might also be useful to show some error bars on those lowess plots. I know you can calculate standard errors for loess regressions in R, I assume other statistics packages can do the same. It could help with discussion of whether variations are just 'noise'.

October 11, 2013 | Unregistered CommenterNiV

@NiV:

If prior knowledge influences the answer, that is confirmation bias. In Bayesian terms, the covariance is part of the likelihood ratio; the experiment result is a basis for updating priors -- priors are not a basis for determining the outcome of the experiment. Subjects aren't asked about posterior odds either, so if they have strong priors, they needn't change their mind based on a contrary piece of evidence. But failing to recognize *when* evidence is contrary to your priors is another matter entirely (and not a good one).
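The prior/likelihood distinction being drawn here can be put in a toy calculation (numbers invented purely for illustration):

```python
# Toy Bayesian update: the 2x2 table bears on the likelihood ratio, which
# multiplies prior odds; priors shouldn't dictate how the table is read.
prior_odds = 4.0         # strong prior that, say, the ban decreases crime
likelihood_ratio = 0.5   # the table, read correctly, cuts the other way

posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)  # 2.0: belief can survive contrary evidence --
# but only if the evidence is first read correctly.
```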

October 11, 2013 | Unregistered Commenterdmk38

@MW: Tell me how I could figure out from that model whether political outlooks interact w/ numeracy at all in gun conditions. Pretty sure you can't b/c there's no parameter that measures effect of numeracy when Conservrepub is at mean (0).

What is the intuition behind CCCD in any event? Are you trying to split the sample along conservrepub? Could just *do* that & do different models for subjects who are > 0 & those < 0 on conservrepub. But splitting a continuous variable risks spurious findings of significant difference due to chance concentration of effect in relation to the mean; if there is a curvilinear effect, then model the continuous variable as curvilinear.

there's got to be something that assesses whether conservrepub's interaction w/ either experimental treatment or numeracy or both is "linear"/"uniform" across values of the conservrepub or uneven/nonlinear.

October 11, 2013 | Registered CommenterDan Kahan

"So on gun control specifically, we know that preventing gun crime is the major motivator on the left, while constitutional rights, liberty, self-defence, the balance of power between state and citizen, and it simply being a fun and enjoyable hobby are all issues for the right. "

I can't speak for anyone on the left except myself, but I'd say that constitutional rights, liberty, self-defense, and the balance of power between state and citizen are all pretty important to me, even if the enjoyable hobby of shooting guns isn't. Just because I might have a different take on those issues than some folks on the right does not mean those issues are any less important to me.

October 12, 2013 | Unregistered CommenterJoshua

"Subjects aren't asked about posterior odds either, so if they have strong priors, they needn't change their mind based a contrary piece of evidence."

Possibly there is a problem with the wording of the paper, then. You say:

"Supplied that information once more in a 2x2 contingency table, subjects were instructed to indicate whether “cities that enacted a ban on carrying concealed handguns were more likely to have a decrease in crime” or instead “more likely to have an increase in crime than cities without bans.”"

Even assuming that like the skin cream case you preceded this with: "What result does the study support?" it's still a pretty ambiguous and subtle point. Saying "does support" implies the result is supported, which many will read as proved, which implies a posterior. It's made even more confusing by preceding that with "Please indicate whether the experiment shows that..." which is definitely talking about a posterior. I think that's a bit much to ask of the general public. Not even all scientists properly understand Bayesian reasoning and the evidence/posterior distinction.

However, you're missing my point. I'm not saying that the subjects were correctly applying Bayesian updating and reporting the posterior odds instead of the LR, I'm saying that if somebody was either lazy or innumerate, they would try to avoid calculating the LR at all, and would guess what the outcome was supposed to be by substituting their prior knowledge. They're cheating.

In the skin cream test they have no idea whether your skin cream really works, so they have no alternative but to calculate the LR, and only the most numerate can do so. (Or know that they're supposed to.) Even they struggle, given that the success rate is still only 75% even at the top of the range.

But in the gun control test they have an out, because they already know what the answer is supposed to be. If the "obvious" answer conforms to expectations, they skip the processing to check the LR entirely. Why work when you don't have to?

But if the "obvious" answer contradicts expectations, people experience conflict and confusion, and engage more processing power to resolve it. The innumerate stick with the "obvious" answer, even though it conflicts with their expectations and political beliefs - which is interesting in itself. Why don't they switch? But the numerate have another option - they can work out the LR as a supporting bit of evidence, or it may be that they have more familiarity with the ways statistics can mislead, and are guessing that it's a trick question. Interestingly, the conservatives score higher than they did on the skin cream test in the middle of the range, and the liberals peak higher than on the skin cream test at the end, with a 100% success rate instead of 75%.

On the skin cream test, these people were not able to do the calculation, but put politics into it and suddenly now they can?!

That's not impossible of course. It's been seen on the Wason selection task that people struggle with the logic when it is presented abstractly, but perform much better if the same logic problem is presented as a task to detect cheating. There may be some module that calculates LRs specially to support political beliefs.

But I expect it's just that the conflict activates more careful attention, and that people who are nearly there check for errors a little longer, or lean a little more in cases where they are unsure about the mathematics, or are more likely to make the effort. It might also, as I said, be that some simply substitute what they know to be the right answer without doing the calculation, which is how they score higher than their numeracy level would predict, but that wouldn't explain why the very low numeracy people don't do that.

But what I'm saying is their motivation for doing so might be because of the political discomfort the conclusion arouses, but it might also be because they think they already know the answer. They have prior knowledge. (And the two sides might differ in the prior knowledge they have.)

The easy way to test it is to repeat the question using a topic where people think they know the answer but don't actually care about it. Just pick something that most people would know but with no toxic cultural meanings. I would predict the same pattern as for gun control, you would presumably predict the same pattern as for skin cream. It's up to you whether you actually do so, of course. Maybe you think you already know the answer, so you don't feel you have to do the calculation? :-)

October 12, 2013 | Unregistered CommenterNiV

Dan,

I don't understand the purpose of this thread. I commented in one of your prior threads that the fictional skin cream treatment survey is flawed. You didn't respond. Perhaps you never read the comments? Here is a link to the thread.
http://www.culturalcognition.net/blog/2013/9/9/the-quality-of-the-science-communication-environment-and-the.html#comments

Using that survey as a comparison to anything doesn't seem appropriate until the flaws are removed and the survey redone. How do you construct a sturdy structure when the foundation is faulty?

October 12, 2013 | Unregistered CommenterBob Koss

Thanks for the link to that August post; it's helpful.

Ultimately, what I'm trying to do is assume (perhaps unjustifiably, but I think it's one reasonable interpretation of the asymmetry thesis) that motivated numeracy varies linearly with political outlook. In the CRT study, this would just consist in using a quadratic function of z_conservrepub (since the first derivative of the arithmetic difference between the curves would be linear).

I was trying to adapt that to this study, more complicated by virtue of its inclusion of numeracy, by creating a variable for the interaction between political outlook, numeracy, and cultural-consonance of the data in the gun condition. The derivative of that variable, with respect to z_conservrepub, would be the interaction between numeracy and cultural consonance in terms of predicting whether the person gets the data right. If that number is positive, it means that the more conservrepub a person gets, the more likely it is that a highly numerate person will get the culturally consonant question right. (Even if we decided to accept CCCD, we might also need CDCD, culturally dissonant crime data, to show that the more CR a person is, the more/less likely that a more numerate person will get the dissonant question right/wrong.) We'd determine whether those interaction variables significantly contribute to a prediction that a person will get the answer right. (Putting aside the validity of CCCD/CDCD, is that coherent?)

The "CCCD" variable, then, was intended to be an indicator variable for whether the subject was assigned to a culturally consonant gun data condition. But you're right that it's likely problematic to dichotomize the data this way. Not least, it doesn't make a whole lot of sense to say the data is culturally consonant or dissonant to someone with a zCR score of .03. (Off hand, loss of power seems more likely to me than spurious correlation, but neither's good.) So it might be better to just ditch the "culturally consonant" concept -- which was mostly in the service of visually replicating graph 3, not explaining the data -- and have an alternative variable that's the product of CR and the condition. Then we'd have a quadratic function by using the interaction variables Z_numeracy(^2) x [condition] x (Conserv_Repub^2), and we could see how well that explains the data.
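The interaction-and-derivative logic above can be sanity-checked numerically: for a three-way product of political outlook, numeracy, and condition, the partial derivative with respect to political outlook is just the numeracy-by-condition interaction. A small sketch with made-up values (all variable names and numbers here are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical values: cr = z_conservrepub, num = z_numeracy,
# cond = 1 for the culturally charged (gun) condition, 0 for the neutral one.
rng = np.random.default_rng(0)
n = 8
cr = rng.standard_normal(n)
num = rng.standard_normal(n)
cond = rng.integers(0, 2, n)

# Three-way interaction: political outlook x numeracy x condition.
three_way = cr * num * cond

# Its partial derivative w/r/t cr is num * cond -- the numeracy-x-condition
# interaction described above.
d_three_way_d_cr = num * cond

# Sanity check by finite differences.
eps = 1e-6
approx = ((cr + eps) * num * cond - cr * num * cond) / eps
assert np.allclose(approx, d_three_way_d_cr, atol=1e-4)
```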

Does that make some sense or have I overlooked something else? Sigh...Stubbs will make much better use of the synbio ipad than I would, anyway.

October 12, 2013 | Unregistered CommenterMW

@Bob: I didn't find the critique persuasive.

The covariance-detection problem has a long history in cognitive psychology & is a strong predictor of the likelihood to engage in flawed reasoning (a form of confirmatory hypothesis testing). The superior performance of high-numeracy subjects in the control condition of our study should enable you to see that it is validly measuring exactly that aptitude.

The reasons you gave for finding the information on the experiment results inconclusive are just fine, really; maybe such skepticism is a good idea.

But however you choose to think in such situations, the discrepancy between the probability that high numeracy subjects would get the "right" answer when it was either politically neutral or ideologically congenial and the probability that they would get the "right" answer when it was ideologically uncongenial remains evidence of the interaction of politically motivated reasoning and numeracy.

October 12, 2013 | Registered CommenterDan Kahan

@MW:

One possibility would be to keep the continuous measure but add a dummy variable for "identifies as a Republican" (or Democrat). There would have to be terms for the interaction of it w/ numeracy & w/ the continuous political outlook predictor in each experimental condition. That wouldn't "split the data" w/ regard to a continuous variable; it would just measure whether the strength of that variable varies among people who self-identify politically in a particular way. It would also reveal whether "Independents" are free of motivated reasoning or motivated numeracy when pushed. A decision would have to be made about how to treat "independents" who "lean" one way or the other. Probably they should be treated as partisans, since studies suggest that "leaners" have views consistent w/ the party they lean toward.
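The dummy coding described above -- leaners folded into the partisan groups, pure Independents as the omitted baseline -- might look like this (the response labels below are hypothetical, just for illustration):

```python
# Hypothetical 7-point party-ID responses collapsed into two dummy
# variables, treating "leaners" as partisans.
pid = [
    "strong democrat", "democrat", "ind lean democrat",
    "independent",
    "ind lean republican", "republican", "strong republican",
]

dem_levels = {"strong democrat", "democrat", "ind lean democrat"}
rep_levels = {"ind lean republican", "republican", "strong republican"}

dem = [int(p in dem_levels) for p in pid]
rep = [int(p in rep_levels) for p in pid]

# Pure Independents are the omitted baseline (both dummies 0).
assert dem == [1, 1, 1, 0, 0, 0, 0]
assert rep == [0, 0, 0, 0, 1, 1, 1]
```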

There would clearly be a power problem w/ this strategy, since it would be testing the impact of political outlooks and numeracy conditional on experimental assignment separately for 3 different groups of subjects.

That's why I think I am more inclined to fit a quadratic model. Adding terms for the effect of Conservrepub^2 creates a model that assumes the impact of political outlooks, either on their own or in conjunction w/ numeracy, is not uniform across values of that continuous predictor; if the slope "bends" & becomes steeper at one point along the right-left spectrum, then in theory it should show up in the relevant parameter estimates. (The quadratic model assumes a "curvilinear" rather than a "linear" effect, but this way of talking gets confusing b/c the logistic regression is itself a nonlinear model; the logistic equation assesses the "uniform" effect of predictors in "logit" space -- and polynomial renderings of the regression equation assume that those same predictors have a *nonuniform* effect in "logit" space -- in a manner equivalent to a polynomial OLS regression. See DeMaris, A. (1992). Logit Modeling: Practical Applications. Newbury Park: Sage Publications, pp. 49-51.)
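For concreteness, here is a minimal sketch -- not the study's actual estimation code -- of fitting a logit with a squared political-outlook term by Newton-Raphson, on synthetic data where the true effect really is curvilinear in logit space. All coefficients are invented:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Logistic regression via Newton-Raphson (IRLS); minimal sketch."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        w = p * (1.0 - p)                      # IRLS weights
        step = np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
        beta += step
    return beta

# Synthetic data whose true logit is *quadratic* in political outlook:
# logit = -0.5 + 0.2*cr + 0.8*cr^2 (hypothetical coefficients).
rng = np.random.default_rng(1)
n = 20000
cr = rng.standard_normal(n)
true_logit = -0.5 + 0.2 * cr + 0.8 * cr**2
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), cr, cr**2])
beta = fit_logit(X, y)

# The coefficient on cr^2 recovers the curvature in logit space.
assert abs(beta[2] - 0.8) < 0.1
```

The sign and size of the cr^2 coefficient is what a "nonuniform effect in logit space" would show up in.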

October 12, 2013 | Registered CommenterDan Kahan

Dan,

"But however you choose to think in such situations, the discrepancy between the probability that high numeracy subjects would get the "right" answer when it was either politically neulral or ideologically congenial and the probability that they would get the "right" answer when it was ideologically uncongenial remains evidence of the interaction of politically motivated reasoning and numeracy."

If I understand Bob's point correctly, it is that there are more differences between the cases than just political congeniality. Thus, you cannot tell if it is politics or one of these other factors causing the effect.

The issue Bob is pointing out is that in the first experiment you said that the reason for the different sample sizes was that subjects had dropped out of the experiment. I had missed that myself on first reading, but now that it has been pointed out, I think its implications are serious. It raises the obvious risk of a biased sample, if, as Bob suggests, people who get better are more likely to drop out than people who don't. (Or vice versa, of course, although that doesn't change the conclusion that way round.)

The most obvious reason that there are different numbers of cities in the gun control study is that those were the cities that instituted gun control. There are potential biases there too - governments bring in gun control policies when gun crime is rising, not when it is falling - but they are not as clear. It brings in a whole load of extra complication, depending on whether subjects noticed the explanation and what they thought of it. As Bob says, it could quite easily be that a lot of readers read it the same way he did, and concluded that if you counted the drop-outs as successes, the skin cream worked. They might not have done the same for the gun control study - the context is quite different.

There are arguments against this hypothesis, too, but I wouldn't dismiss the point so casually.

October 12, 2013 | Unregistered CommenterNiV

Ah, I'd forgotten that party affiliation wasn't a Likert scale. That would be a plausible way of working out the model without di/trichotomizing something continuous, but yes, it would have less power.

So I think I agree with you that the quadratic model is preferable...but that does assume we think motivated numeracy increases linearly with partisanship (on one side), right? (Although lower strength of convictions will mute the effects for moderate people.) I could easily see someone who believes in the asymmetry thesis thinking that motivated numeracy bottoms out for moderate liberals, so maybe someone who really wants to defend the thesis can propose what he believes to be a realistic model. A higher-order polynomial will replicate the data better, but I'd want someone to justify a quartic (or whatever) before trying it out.

(Only page 50 of the DeMaris book is available on Google Books preview, but I basically understand what you mean.)

October 13, 2013 | Unregistered CommenterMW

@MW:

party self-id amounts to a 7-point Likert scale w/ "Independents" who won't lean at 4. But it would be possible to have 2 dummy variables -- one for Democrat (for those who are 'strong democrat,' 'democrat,' & 'ind lean democrat') & one for Repub ('strong repub,' 'repub,' & 'ind lean repub'). So we are splitting the sample into 3 groups, yes, and lose power. But at least we aren't estimating the influence of conservrepub by splitting *it* -- which in addition to vitiating power also risks tricking us into thinking that noise is signal.

A "quadratic" model -- or in any case, one that adds terms for Conservrepub^2 -- assumes one kink somewhere in the slope of conservrepub. But it doesn't constrain the kink to be in any particular place or the slope of the curve on either side to be anything other than significantly different. So depending on what coefficient is for the relevant terms, we could end up w/ curve that shows that the asymmetry starts anywhere along the right-left spectrum & is either very dramatic or pretty mild etc. Could show left to be *more* motivated too, for that matter. (the effect wouldn't really be linear, though, on either side of the "kink"; slope will be changing in exponential manner on either side)

Now if you are positing someone who thinks there will be *more* than 1 kink -- if there are 2, then we need a "cubic" equation -- then I start to get suspicious. Or at least worried about model overfitting. For sure, we could add lots of polynomial terms -- including fractional polynomials! -- and get a line w/ nearly as many kinks as we have observations. And it would be bullshit.

I *get* asymmetry & think it is perfectly plausible as a surmise & thus would be willing to view the results from a quadratic model (one kink) as giving me more reason to believe in it. But a cubic isn't plausible or trustworthy to me. If things are as you posit, then a quadratic model will still fit better than linear. If a quadratic doesn't work, then you are just trying too hard when you go to cubic.
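The overfitting worry is easy to demonstrate: on a handful of noisy observations of a truly linear relationship, in-sample error necessarily shrinks as polynomial degree rises, which is exactly why a better raw fit can't by itself justify the extra kinks. A sketch with made-up data:

```python
import numpy as np

# Twelve noisy observations of a truly *linear* relationship
# (hypothetical data, for illustration only).
rng = np.random.default_rng(2)
x = np.linspace(-2, 2, 12)
y = 0.5 * x + 0.3 * rng.standard_normal(12)

def train_sse(deg):
    """In-sample sum of squared errors for a degree-`deg` polynomial fit."""
    p = np.polynomial.Polynomial.fit(x, y, deg)
    resid = y - p(x)
    return float(resid @ resid)

errs = [train_sse(d) for d in (1, 2, 3, 9)]

# In-sample fit can only improve as kinks are added -- that improvement
# alone says nothing about whether the extra terms are signal or noise.
assert all(a >= b - 1e-9 for a, b in zip(errs, errs[1:]))
assert errs[-1] < errs[0]
```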

On DeMaris-- just buy it! Or tell me when your birthday is.

Or try Pampel, F.C. Logistic Regression : A Primer. (Sage Publications, Thousand Oaks, Calif.; 2000), p. 20.

October 13, 2013 | Registered CommenterDan Kahan

NiV:

I'm sure that high-numeracy partisans in the gun-control conditions reasoned exactly as you did -- but only when they were assigned to the condition in which they didn't like the answer.

The skin-cream condition separated out low- & high-numeracy subjects -- as one would expect. If Bob had been a subject, his answer would have been noise, assuming he is high numeracy.

October 13, 2013 | Registered CommenterDan Kahan

DK:

Although a quadratic will have a "kink" -- a maximum or minimum -- I don't really think it fits a model where motivated reasoning has a maximum or minimum point along the variable ConservRepub. (If your reasoning is related to the logit transform and I'm missing that, just let me know.)

The outcome variable here is likelihood of getting the question right, which isn't itself a measure of motivated reasoning. Motivated reasoning -- and let me know if you think this is wrong -- is measured (assume high numeracy) by the interaction of ConservRepub and the condition (crime increase or crime decrease) in predicting likelihood of getting the question right. Using the example of the CRT experiment featured in the August post, and the graphs therein, here are a few ways this could play out:

(1) No motivated reasoning: if this is the case, the green and black lines are flat. The likelihood of finding CRT valid (or the difference between likelihood in the two conditions) is independent of z_conservrepub. Likelihood = constant. The derivative of a constant w/r/t z_conservrepub is zero, so motivated reasoning is zero.

(2) Constant motivated reasoning: if this is the case, the green and black lines have a constant slope. See the first graph in the August post. The likelihood of finding CRT valid (or the difference between likelihood in the two conditions) is linear with respect to z_conservrepub. Likelihood = a(z_conservrepub) + b. The derivative w/r/t z_conservrepub is a constant. That's the constant motivated reasoning.

(3) Motivated reasoning that varies linearly with political outlook: if this is the case, the graph will be quadratic. It will look something like this. The likelihood of finding CRT valid (or the difference in the two conditions) is quadratic with respect to z_conservrepub. Likelihood = a(z_conservrepub)^2 + b(z_conservrepub) + c. The derivative w/r/t z_conservrepub is linear. That's the linearly-varying motivated reasoning.

(4) Motivated reasoning that varies with political outlook but has a minimum in moderates of a specific political leaning: if this is the case, the graph will (or at least could) be cubic, since the derivative of that cubic equation w/r/t z_conservrepub will be quadratic, and quadratics can have a single local maximum or minimum.

I'm thinking that (4) may be more plausible than (3). That is my concern with just going ahead and using the quadratic model. If you think cubic motivated reasoning is baseless, then we should (thank goodness) not try to fit a quartic. You disagree?
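The cases above can be checked numerically with stylized curves. All coefficients below are hypothetical, and the "likelihoods" are in arbitrary units rather than true probabilities -- the point is only the shape of the derivative:

```python
import numpy as np

# z stands in for z_conservrepub.
z = np.linspace(-2, 2, 401)

lin  = 0.3 * z + 0.1          # case (2): linear likelihood
quad = 0.2 * z**2 + 0.3 * z   # case (3): quadratic likelihood
cub  = 0.1 * z**3 + 0.3 * z   # case (4): cubic likelihood

def d(f):
    """Numerical d/dz -- the 'motivated reasoning' measure."""
    return np.gradient(f, z)

# Case (2): the derivative is constant -> uniform motivated reasoning.
assert np.allclose(d(lin), 0.3)

# Case (4): the derivative 0.3*z^2 + 0.3 is quadratic with a minimum at
# z = 0 -- moderates show the least motivated reasoning.
assert abs(z[np.argmin(d(cub))]) < 0.02
```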

October 13, 2013 | Unregistered CommenterMW

Dan,

How do you know?

October 14, 2013 | Unregistered CommenterNiV

Dan,

In the first paragraph of section 3 of your paper you say: "We undertook a study to test SCT and ICT". You also say: "... this design effectively pitted SCT and ICT against one another". It appears those are competing hypotheses you are trying to evaluate.

Evidently people have spent time and effort in presenting reasoned opinions for each hypothesis. If I was a researcher I would be embarrassed to present that skin cream study as part of an evaluation of which hypothesis above is better. I would consider it disrespectful to their work, so I'll make one more effort to convince you how poor that experiment really is.

In addition to what I said about part (A) of the skin cream experiment in the other thread, in part (B) the idea that 107 of 298 people who were snubbed for treatment upon their first visit would then voluntarily return in two weeks just to say they were cured is, to put it mildly, ludicrous. If you snub me I'm not likely to even give you the time of day without a very good reason. If you don't present figures having some sense of reality, how can you expect people to give a serious answer? I can easily picture many people flipping a coin when answering both parts (A, B). Why else would 25% of the people at the 90th percentile or above on your numeracy scale answer incorrectly?

You may also be using a defective numeracy scale. My formal education ended 50 years ago when I attended a vocational high school (alternate weeks shop and class) instead of a standard one. Only basic algebra, but no trig or calculus or anything higher. I thank you for considering the possibility that I might be high numeracy, but if I am considered high numeracy this country is really in deep doodoo.

PS
I've noticed your blog takes a long time to load. I have a sort of mid-range 15 Mb connection and it usually takes more than 20 seconds to finish loading. It seems 25-30 megabytes has been a typical download lately. It must be hell for people with slower connections or those having to deal with charges for exceeding monthly download limits.

A big reason is all those gawd-awful bandwidth-hogging bitmap (.bmp) graphics you use. Most websites stick to using graphics in .png or .jpg format, which typically are only about 3-5% of the size. Bitmap images can easily be converted to those friendlier formats by loading them into almost any graphics program, even MS Paint, and simply saving in the more efficient format. You might even get a student to do the work for you. It really doesn't make much difference to me, as I have the time and no charges for exceeding download limits. It must discourage many people from visiting tho'.
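The size gap comes mostly from compression: an uncompressed bitmap stores every pixel's bytes verbatim, while PNG runs the pixel data through DEFLATE. A simplified stdlib-only sketch of the effect on a flat-color graphic (it ignores file headers and PNG filtering, and the exact 3-5% figure will vary with the image):

```python
import zlib

# A 200x200 solid-color image stored as raw 24-bit pixels -- roughly what
# an uncompressed .bmp holds.
raw = bytes([200, 150, 100]) * (200 * 200)

# The same bytes run through the DEFLATE compression that .png uses
# internally.
compressed = zlib.compress(raw, level=9)

# Flat-color graphics (charts, stats-program output) compress enormously.
assert len(compressed) / len(raw) < 0.05
```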

October 14, 2013 | Unregistered CommenterBob Koss

@Bob:

thanks for the graphics advice. I do find that a lot of graphics from stats programs come out horribly when saved at resolutions typical for .jpg. But you are right that there must be a better way.

I don't "use" students -- or anyone else -- to help w/ the website, though. The only thing I do w/ students is teach them things & give them the opportunity to collaborate in research if they are interested. Besides, I like to figure things out.

October 15, 2013 | Registered CommenterDan Kahan

@MW:

Adding a "Conservrepub^2" counterpart for every term in Model 3 would pick up a "curvilinear" effect in both motivated reasoning and motivated reasoning conditional on numeracy. Write it out -- I bet you'll be able to see which coefficients you'd need to examine to look for any of the 7 patterns identified in my slides.

As model 3 now is constructed, the coefficient for "Conservrepub" measures the influence that variance in political outlooks has in getting the "right answer" in the "crime decreases" condition when Znumeracy is at its mean ("0").

October 15, 2013 | Registered CommenterDan Kahan

.jpg is more efficient for photos, .gif and .png are more efficient for diagrams (where there are large blocks that are all the same colour).

October 15, 2013 | Unregistered CommenterNiV

@NiV:

Is there any sort of "metafile" format that works in .html? Anything that has to be "pixelated" tends to do a bad job w/ cool figures.

October 15, 2013 | Registered CommenterDan Kahan

DK: I still don't quite see why you say a model that's quadratic in CR would pick up a curvilinear effect in motivated reasoning and motivated reasoning conditional on numeracy. Model 3 is linear in CR, and you agree that it only picks up constant ("uniform" or "symmetric") -- not linear -- motivated reasoning across political outlooks, right? So why would adding another CR term kick it up not just to linear but all the way to curvilinear?

Also, the seven patterns would (or at least could) all look essentially the same with either a quadratic or cubic model, no? You've chosen CR/LD values for the two lines on each, so the graphs don't show in any detail how motivated numeracy depends on political outlook (whether linearly or curvilinearly).

October 15, 2013 | Unregistered CommenterMW

"I don't buy AT. I've explained why 1,312 times ..."
Sounds like you cling to your belief with religious fervor. When you believe in absolute truth (which most conservatives identify with the Bible), you are not afraid to base your reasoning on it.
Michael Faraday, when asked, "What are your speculations about death?" responded: "Speculation! I am resting on certainties. 'For I know whom I have believed, and am persuaded that he is able to keep that which I have committed unto him against that day.'" When you deny that absolute truth exists, your reasoning renders itself pointless. Scientists who believe the Bible are still "roaming the earth" today. If you joined them, you'd be in excellent company:
Newton, Faraday, Maxwell, Kelvin (Physics)
Boyle, Dalton, Pascal, Ramsay (Chemistry)
Ray, Linnaeus, Mendel, Pasteur (Biology)
Steno, Woodward, Brewster, Agassiz (Geology)
Kepler, Galileo, Herschel, Maunder (Astronomy)
List from "What Is Creation Science?" by Henry M Morris and Gary E Parker.
I would add George Washington Carver to the Biology section.
P.S. You do not have to be a creationist to be born again, all it takes is letting Jesus come into your heart. He will not rape the human will. " ... And whosoever will, let him take the water of life freely." It's up to you.

October 17, 2013 | Unregistered CommenterMike Kamrath

Dan, here's an experiment that, unfortunately, won't be tried in the US very soon (well, at least not peacefully):

Separate those who believe in collectivism (incl. socialism, etc.) and those who believe in individualism into two groups, each with a section of the country.

Allow the separate groups to govern as they see fit, with one rule: Neither group can coerce the other in any way.

Let the experiment run a few years.

Check which group is more satisfied, wealthier, happier, more fulfilled, etc. Use any metric you like.

Do you suppose this might be the FIRST time collectivism has ever worked? Doubt that!


Consider how close we are already to a natural fit: Coasts vs. heartland, already mostly along collectivist vs. individualist lines.

Recalling that the only rule was "no coercion", consider that for the experiment to be successful for both groups, all collectivists would have to do is leave individualists alone.

My point: For collectivists, leaving anyone alone is the hard part.

Never mind the inevitable collectivist hell... lesson re-learned.

October 17, 2013 | Unregistered CommenterThoughtExperimenter

I think that perhaps all discussion of asymmetry or even traditional definitions of what it means to be liberal, moderate or conservative may need to be tossed out in favor of a redefinition of political divides:
http://www.npr.org/blogs/itsallpolitics/2013/11/01/242314511/top-pollster-sees-evidence-of-political-shock-wave 

November 1, 2013 | Unregistered CommenterGaythia Weis
