Monday, December 28, 2015

Replicate "Climate-Science Communication Measurement Problem"? No sweat (despite hottest yr on record), thanks to Pew Research Center!

One of the great things about Pew Research Center is that it posts all (or nearly all!) the data from its public opinion studies.  That makes it possible for curious & reflective people to do their own analyses and augment the insight contained in Pew's own research reports. 

I've been playing around with the "public" portion of the "public vs. scientists" study, which was issued last January (Pew 2015). Actually Pew hasn't released the "scientist" (or more accurately, AAAS membership) portion of the data. I hope they do!

But one thing I thought would be interesting to do for now was to see whether I could replicate the essential finding from "The Climate-Science Communication Measurement Problem" (2015).

In that paper, I presented data suggesting, first, that neither "belief" in evolution nor "belief" in human-caused climate change is a measure of general science literacy. Rather, both are better understood as indicators of cultural identity--the former associated with religiosity, the latter with left-right political outlooks.

Second, and more importantly, I presented data suggesting that there is no relationship between "belief" in human-caused climate change and climate science comprehension in particular. On the contrary, the higher individuals scored on a valid climate science comprehension measure (one specifically designed to avoid the entanglement of identity and knowledge that confounds most "climate science literacy" measures), the more polarized the respondents were on "belief" in AGW--which, again, is best understood simply as an indicator of "who one is," culturally speaking.

Well, it turns out one can see the same patterns, very clearly, in the Pew data.

Patterned on the NSF Indicators "basic facts" science literacy test (indeed, "lasers" is an NSF item), the Pew battery consists of six items.

As I've explained before, I'm not a huge fan of the "basic facts" approach to measuring public science comprehension. In my view, items like these aren't well-suited for measuring what a public science comprehension assessment ought to be measuring: a basic capacity to recognize and give proper effect to valid scientific evidence relevant to the things that ordinary people do in their ordinary lives as consumers, workforce members, and citizens.

Certainly one would expect a person with that capacity to have become familiar with certain basic scientific insights (the earth goes round the sun, etc.). But certifying that she has stocked her "basic facts" inventory with any particular set of such propositions doesn't give us much reason to believe that she possesses the reasoning proficiencies and dispositions needed to augment her store of knowledge and to use what she learns appropriately in her everyday life.

For that, I believe, a public science comprehension battery needs at least a modest complement of scientific-thinking measures, ones that attest to a respondent's ability to tell the difference between valid and invalid forms of evidence and to draw sound inferences from the former. The "Ordinary Science Intelligence" battery, used in the Measurement Problem paper, includes "cognitive reflection" and "numeracy" modules for this purpose.

Indeed, Pew has presented a research report on a more comprehensive science comprehension battery that might be better in this regard, but it hasn't released the underlying data for that one.

But anyway, the new items that Pew included in its battery are more current and subtle than the familiar Indicators items, and the six items form a reasonably reliable (α = 0.67), one-dimensional scale--suggesting they are indeed measuring some sort of science-related aptitude.

But the fun stuff starts when one examines how the resulting Pew science literacy scale relates to items on evolution, climate change, political outlooks, and religiosity.

For evolution, Pew used its two-part question, which first asks whether the respondent believes (1) "Humans and other living things have evolved over time" or (2) "Humans and other living things have existed in their present form since the beginning of time."

Subjects who pick (1) then are asked whether (3) "Humans and other living things have evolved due to natural processes such as natural selection" or (4) "A supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today."

Basically, subjects who select (2) are "young earth" creationists. Subjects who select (4) are generally regarded as believing in "theistic evolution."  Intelligent design isn't the only variant of theistic evolution, but it is certainly one of the accounts that fits this description.

Subjects who select (3)--"humans and other living things have evolved due to natural processes such as natural selection"--are the only ones furnishing the response that reflects science's account of the natural history of humans.

So I created a variable, "evolution_c," that reflects this answer, which was in fact selected by only 35% of the subjects in Pew's U.S. general public sample.

On climate change, Pew assessed (using two items that tested for item order/structure effects that turned out not to matter) whether subjects believed (1) "the earth is getting warmer mostly because of natural patterns in the earth’s environment," (2) "the earth is getting warmer mostly because of human activity such as burning fossil fuels," or (3) "there is no solid evidence that the earth is getting warmer."

About 50% of the respondents selected (2).  I created a variable, gw_c, to reflect whether respondents selected that response or one of the other two.

For political orientations, I combined subjects' responses to a 5-point liberal-conservative ideology item and their responses to a 5-point partisan self-identification item (1 "Democrat"; 2 "Independent leans Democrat"; 3 "Independent"; 4 "Independent leans Republican"; and 5 "Republican").  The composite scale had modest reliability (α = 0.61).

For religiosity, I combined two items.  One was a standard Pew item on church attendance. The other was a dummy variable, "nonrelig," scored "1" for subjects who said they were either "atheists," "agnostics" or "nothing in particular" in response to a religious-denomination item (α = 0.66).
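
For anyone who wants to follow along in the posted dataset, here is a minimal sketch of how the recodes and composites described above might be built and their reliability checked. All of the column names and numeric codings (evo_response, warming_response, ideo, party, attend, nonrelig) are hypothetical stand-ins for the actual Pew variables, so check them against the codebook before using any of this.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def standardized_mean(items: pd.DataFrame) -> pd.Series:
    """Average of z-scored items -- the usual quick composite."""
    return items.apply(lambda s: (s - s.mean()) / s.std(ddof=1)).mean(axis=1)

df = pd.read_csv("pew_public_sample.csv")  # hypothetical filename

# Dichotomous "belief" items: 1 = the response reflecting science's position.
df["evolution_c"] = (df["evo_response"] == 3).astype(int)   # natural selection
df["gw_c"] = (df["warming_response"] == 2).astype(int)      # mostly human activity

# Right-left political outlook composite (5-point ideology + 5-point party ID).
print(cronbach_alpha(df[["ideo", "party"]]))                # ~0.61 in the post
df["conserv_repub"] = standardized_mean(df[["ideo", "party"]])

# Religiosity composite: church attendance plus a reverse-scored "nonreligious"
# dummy. Item directions need to be verified against the codebook first.
df["relig_affil"] = 1 - df["nonrelig"]
print(cronbach_alpha(df[["attend", "relig_affil"]]))        # ~0.66 in the post
df["religiosity"] = standardized_mean(df[["attend", "relig_affil"]])
```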

But the very first thing I did was toss all of these items--the six "science literacy" ones, belief in evolution (evolution_c), belief in human-caused climate change (gw_c), ideology, partisan self-identification, church attendance, and nonreligiosity--into a factor analysis (one based on a polychoric covariance matrix, which is appropriate for mixed dichotomous and multi-response Likert items).
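
A rough version of that analysis can be run with the factor_analyzer package. One caveat: this sketch fits the factors to ordinary Pearson correlations; reproducing the polychoric version used here would require computing a polychoric matrix separately. The six knowledge-item column names (k1 ... k6) are placeholders.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical column names: six Pew knowledge items scored 0/1 (k1-k6), the two
# "belief" items, and the political and religiosity indicators built above.
cols = ["k1", "k2", "k3", "k4", "k5", "k6",
        "evolution_c", "gw_c", "ideo", "party", "attend", "nonrelig"]
obs = df[cols].dropna()

# Three-factor solution with an oblique rotation (the factors may correlate).
fa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="minres")
fa.fit(obs)

loadings = pd.DataFrame(fa.loadings_, index=cols, columns=["f1", "f2", "f3"])
print(loadings.round(2))
# Which factor is "science literacy," which "religiosity," and which "right-left"
# has to be read off the loadings; the labels aren't assigned automatically.
```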


Not surprisingly, the covariance structure was best accounted for by three latent factors: one for science literacy, one for political orientations, and one for religiosity.

But the most important result was that neither belief in evolution nor belief in human-caused climate change loaded on the "science literacy" factor.  Instead they loaded on the religiosity and right-left political orientation factors, respectively.

This analysis, which replicated results from a paper dedicated solely to examining the properties of the Ordinary Science Intelligence test (Kahan 2014), supports the inference that belief in evolution and belief in climate change are not indicators of "science comprehension" but rather indicators of cultural identity, as manifested respectively by religiosity and political outlooks.

To test this inference further, I used "differential item function" or "DIF" analysis (Osterlind & Everson, 2009).

Based on item response theory, DIF examines whether a test item is "culturally biased"--not in an animus sense but in a measurement one: the question is whether the item measures the "same" latent proficiency (here, science literacy) in diverse groups.  If it doesn't--if members of two groups with equivalent science literacy scores differ in the probability of answering it "correctly"--then administering that question to members of both groups will yield a biased measurement of their respective levels of that proficiency.
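
In regression terms, the simplest version of that check is a logistic model with a group × proficiency cross-product: a non-negligible interaction means the item relates to the underlying proficiency differently in the two groups. A minimal sketch (scilit would be the IRT-scored battery and relig_hi a median split on the religiosity composite--both hypothetical names, not the models linked below):

```python
import statsmodels.formula.api as smf

# Does the evolution item track science literacy the same way for more and less
# religious respondents? The cross-product term carries the DIF signal.
df["relig_hi"] = (df["religiosity"] > df["religiosity"].median()).astype(int)

dif = smf.logit("evolution_c ~ scilit * relig_hi", data=df).fit()
print(dif.summary())

# Running the same model on an ordinary knowledge item (e.g., the nanotechnology
# question) should show an interaction close to zero.
```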

In Measurement Problem, I used DIF analysis to show that belief in evolution is "biased" against individuals who are high in religiosity.

Using the Pew data (regression models here), one can see the same bias:

Relatively nonreligious respondents, but not relatively religious ones, become more likely to give the response reflecting science's account of the natural history of humans as their science literacy scores increase. This isn't so for the other items in the Pew science literacy battery (which here is scored using an item response theory model; the mean is 0, and the units are standard deviations).

The obvious conclusion is that the evolution item isn't measuring the same thing in subjects who are relatively religious and nonreligious as are the other items in the Pew science literacy battery. 

In Measurement Problem, I also used DIF to show that belief in climate change is a biased (and hence invalid) measure of climate science literacy.  That analysis, though, assessed responses to a "belief in climate change" item (one identical to Pew's) in relation to scores on a general climate-science literacy assessment, the "Ordinary Climate Science Intelligence" (OCSI) assessment.  Pew's scientist-AAAS study didn't include a climate-science literacy battery.

Its general science literacy battery, however, did have one climate-science item, a question of theirs that in fact I had included in OCSI: "What gas do most scientists believe causes temperatures in the atmosphere to rise? Is it Carbon dioxide, Hydrogen, Helium, or Radon?" (CO2).

Below are the DIF item profiles for CO2 and gw_c (regression models here). Regardless of their political outlooks, subjects become more likely to answer CO2 correctly as their science literacy scores increase--that makes perfect sense!

But as their science literacy scores increase, individuals of diverse political outlooks don't converge on "belief in human-caused climate change"; they become more polarized.  That question is measuring who the subjects are, not what they know about climate science.
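
To reproduce figures like these, one can plot predicted probabilities from the same kind of interaction model across the science literacy scale--here for left- and right-leaning respondents, using the hypothetical conserv_repub composite and a hypothetical co2 item column from the sketches above:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

def item_profile(item: str, ax) -> None:
    """Predicted Pr(response = 1) across science literacy for left vs. right."""
    m = smf.logit(f"{item} ~ scilit * conserv_repub", data=df).fit(disp=0)
    grid = np.linspace(df["scilit"].min(), df["scilit"].max(), 100)
    for label, val in [("left-leaning", df["conserv_repub"].quantile(0.16)),
                       ("right-leaning", df["conserv_repub"].quantile(0.84))]:
        pred = m.predict(pd.DataFrame({"scilit": grid, "conserv_repub": val}))
        ax.plot(grid, pred, label=label)
    ax.set_xlabel("science literacy (IRT score, SD units)")
    ax.set_ylabel(f"Pr({item} = 1)")
    ax.set_title(item)
    ax.legend()

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
item_profile("co2", axes[0])   # curves converge as literacy rises
item_profile("gw_c", axes[1])  # curves diverge: polarization increases
plt.tight_layout()
plt.show()
```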

So there you go!

I probably will tinker a bit more with these data and will tell you if I find anything else of note.

But in the meantime, I recommend you do the same! The data are out there & free, thanks to Pew.  So reciprocate Pew's contribution to knowledge by analyzing them & reporting what you find out!

References

Kahan, D.M. (2015). Climate-Science Communication and the Measurement Problem. Advances in Political Psychology, 36, 1-43.

Kahan, D.M. (2014). "Ordinary Science Intelligence": A Science Comprehension Measure for Use in the Study of Risk Perception and Science Communication. Cultural Cognition Project Working Paper No. 112.

Osterlind, S.J., & Everson, H.T. (2009). Differential Item Functioning. Thousand Oaks, CA: Sage.

Pew Research Center (2015). Public and Scientists' Views on Science and Society.


Reader Comments (4)

I'm puzzled by some of your graphs. In the example "Does nanotechnology deal with things that are small, large, cold, or hot?" there is an axis labelled "probability of correct response" which starts at zero for scientific literacy below -1.5. Is this real? I would have expected that given a choice of four options, a genuine illiterate picking at random would score 25%. Is this cumulative probability, or is it an artifact of the regression? Why do the error bars narrow towards the ends, when surely sample size of those categories at the extremes would be expected to be smaller? What happened to your determination to plot raw data along with any calculated trends?

On both the religiosity/evolution and warming/politics, both categories start in the same place at the left hand end of the graph. Why are the scientifically illiterate apparently unaffected by the conflict with their identity? What's your theory?

And by the way, the gas I think most climate scientists believe causes temperatures to rise is water vapour. It accounts for 60-90% of the greenhouse effect (depending on how you count it, since its effect is not additive) and the majority of the effect of rising CO2 is due to the water vapour feedback, in which rising temperature causes more water to evaporate which acts as a greenhouse gas causing temperature to rise. The models claim 70-90% or more of the expected warming is due to water vapour, and thus warming due to water vapour is the only reason AGW might be politically relevant. Or so it could be argued.

So how would your test respond to someone who said "none of the above"? :-)

And incidentally, what survey did you use to determine the 'correct' answer to "what most scientists believe..."? ;-)

December 29, 2015 | Unregistered CommenterNiV

@NiV--

1. You are right: I should have created some links to raw data, since it's always appropriate to wonder about such matters. Here you go.

2. As I'm sure we've discussed, a *good* standardized test question is one in which the probability of getting the wrong answer if one doesn't actually know the right one is *higher* than the probability of getting the correct answer by "guessing." In other words, those who don't know tend to think the *wrong* answer is correct. Standardized testing is all about designing questions that have that property & Item Response Theory all about measuring the relative difficulty of the questions based on estimating Pr(Knows the answer|gave correct answer) vs. Pr(Doesn't know|gave correct answer).
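
To make that concrete with a toy calculation (a sketch only, not the IRT machinery itself): suppose a respondent knows the answer with probability p and otherwise picks the keyed option with probability g. Then

\[
\Pr(\text{correct}) = p + (1-p)\,g,
\qquad
\Pr(\text{knows}\mid\text{correct}) = \frac{p}{p + (1-p)\,g}.
\]

With four options and pure guessing, g = 1/4; a question whose distractors attract those who don't know pushes g below 1/4, making a correct answer more diagnostic of genuine knowledge.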

3. The shape of the Nano_c sigmoid is not an artifact of regression-- except to the extent that all regressions smooth the actual effect in the data by imposing a model on them (something that is nicely illustrated by comparing the non-model "model" of the lowess regression line imposed over the scatterplot).

Going to "0" at lowest value of predictor wouldn't be a characteristic of fitting a logistic model; such a model would have a constant for those at lowest level of predictor if those observations don't in fact have 0 as mean (as is case for various other itmes featured in the figures for this post).

Of the 33 individuals who scored 0 on science literacy in the sample, 0 got the nanotechnology question correct--obviously! Obviously, too, those who got all 6 right got 100% correct on that question. Error bars will be very narrow at the extremes *if* in fact there is zero variance at those levels (that won't be true for "bad" questions -- ones that don't have strong correlation w/ the latent disposition being measured; there, the "profile" will be flat & CIs will flare out at the ends).

Somewhat more informatively, among individuals who got 1 science literacy question correct, the pct who got nano_c as their 1 correct answer was 5% (8 of 158).

For those who got 0-2 correct, 11% (37/329) got nano_c right.

When test score here is calculated using IRT -- weighting correct responses based on the relative difficulty & discrimination -- there are 5 dozen or so rather than 7 scoring levels.

On the IRT-scored version of the scale, the probability of getting Nano_c correct doesn't reach 25% until one gets to about the 30th percentile of estimated general population score.

You could say that below that level people weren't smart enough even to guess. But given how sharply the probability of getting the answer right goes up at that stage, I don't think there was much guessing going on.

Frankly, the question is *great* if one wants to distinguish reliably among people who are pretty low in science literacy. It is *easy*-- 67% of the subjects got it correct--but it has really great discrimination: if you got it right, you are very very likely to be above 40th percentile; if you got it wrong, you are very very very likely below.

No idea why it has these nice properties. I'm curious now to figure out what the distribution was on *wrong* answers--presumably it wasn't evenly distributed among the incorrect responses. One can build into an IRT model a parameter to discount for "guessing" that is based on how evenly distributed "wrong" answers are, in fact; that's the "3PL" model-- I used 2PL here.
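
For reference, the item response functions behind those labels, in standard notation (θ is the latent trait, a the discrimination, b the difficulty, and c the guessing floor):

\[
P_{\text{2PL}}(\theta) = \frac{1}{1 + e^{-a(\theta - b)}},
\qquad
P_{\text{3PL}}(\theta) = c + \frac{1 - c}{1 + e^{-a(\theta - b)}}.
\]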

4. But I can say that the Pew battery as a whole lacks almost any discrimination above mean in population; it is too easy. About 30% of the sample get all the answers right. I should actually graph the data in a way that makes *that* clearer; my graphing obscures the large negative skew-- definitely something I should have avoided. The scatterplot I just posted makes it clear, for sure.

5. As for individuals of opposing identities "starting at same place" (one above 0%) for evolution & climate change, my main answer would be: clueless people have a hard time figuring out not only what science knows but also what positions they should cultivate belief in in order to fit in well in their affinity groups. They suffer for that, I'm sure.

Secondary answer would be the measurement error in the "cultural identity" scales-- political outlooks & religiosity as constructed are both very crude. They are getting enough of the latent (unobservable) disposition to show us that identity is at work but surely one could do better. The cultural worldview scales would capture much more variance than the left-right measures on climate, e.g., and would be especially good at ferreting out variance among individuals who are "low" in political sophistication.

6. On "water vapour"-- you are just being tendentious. The question doesn't ask which gas traps most heat; it asks which of the listed ones is a greenhouse gas. The "which gas traps most heat -- CO2, water vapor, etc" is a nice *climate science literacy* question b/c those who *don't know* are much more likely than chance to select C02.

7. I'd send you an "I Popper" t-shirt for such a nice set of questions but I don't know your address (no one does). So here's your consolation prize-- a sneak preview of upcoming blog that continues interrogation of these data.

December 29, 2015 | Registered CommenterDan Kahan

" Here you go."

Thanks.

"As I'm sure we've discussed, a *good* standardized test question is one in which the probability of getting the wrong answer if one doesn't actually know the right one is *higher* than the probability of getting the correct answer by "guessing.""

Well, that depends on the reasons for their mistaken belief.

One obvious reason why people might be more likely to give a wrong answer than guessing would be that they do in fact 'know' the answer, but have been taught the wrong one. There are many common misconceptions and myths in scientific folklore, such as is presented to the general public. Somebody who reads a lot of popular science is far more likely to pick one of these up than someone with no interest in or knowledge of the subject.

Another obvious possibility is that a large proportion are misunderstanding the question - when questions are worded ambiguously or unclearly then some will interpret it one way and some another. Those who parse it differently to the question-setter get marked as 'wrong'. This can be correlated to the variable under study if it is a matter of vocabulary, or cultural assumptions.

In both cases, it is an indication that there is a potential issue with the question - it's not measuring people's general level of knowledge or literacy, it's measuring which groups have been taught particular myths or ways of interpreting English language.

And these in turn could be correlated to the political beliefs you're studying without being directly caused by them. For example, a question about the causes of stomach ulcers would likely leave a lot of older people saying 'stress' while younger people have since been taught the answer is 'bacteria', and age may be correlated to conservatism. There's a risk, if you don't know why the question response behaves as it does, of polluting the results with spurious correlations.

"Of the 33 individuals who scored 0 on science literacy in the sample, 0 got the nanotechnology qurestion correct--obviously!"

Are you saying you used the nanotechnology question both to measure scientific literacy and the response to it? Interesting methodology! It would certainly explain the result, though!

Thanks. Question answered.

" clueless people have a hard time figuring out not only what science knows but also what positions they should cultivate belief in in order to fit in well in their affinity groups."

I don't follow this. Surely the hypothesis is that people are conforming belief to identity because of the major negative consequences for them personally of stepping out of line. (Like the guy at the NRA convention who shouts "Let's ban guns!") How can they not be aware of them? Surely everyone would yell at them the moment they opened their mouths?

Surely what *defines* an affinity group is shared opinions?

And if they're genuinely clueless, surely they should be guessing? A 'less accurate than guessing' result means they *have* obtained definite opinions from somewhere - they're just the wrong ones.

If you ask a question like "How many Gods are there?" and offer options A) Zero, B) One, C) Between two and twenty (inclusive), and D) more than twenty, then I don't think the group systematically getting less than 25% right can rightly be described as "clueless". They've been given a big clue - it's just the wrong one. If I had asked the same question about four-digit Wieferich Primes, I'm pretty sure the "clueless" really would have got 25%.

"On "water vapour"-- you are just being tendentious."

Of course - but making a serious point.

It was intended as an illustration showing how certain questions can reflect culturally-correlated knowledge of scientific myths and interpretations. My point was that the belief that CO2 is the only greenhouse gas is a commonly-believed scientific myth, resulting from thousands of over-simplified pop-science explanations. The question implicitly makes this assumption (it asks "What gas..." not "Which of these gases..."). If I saw it in a scientific paper, I'd mark it wrong. The principal greenhouse gas is H2O. SF6, N2O, and CH4 are more potent. CO2 is only distinguished because it is an external driver, not purely a feedback, and that's not what the question asks.

There are a number of other ways to 'creatively misinterpret' the question. For example, people who hadn't heard the fuss about greenhouse gases might think that the sun is what causes temperatures in the atmosphere to rise (every morning, and every summer), and will remember that the sun is made of hydrogen and helium, with the hydrogen providing the energy.

Or people with a knowledge of radiation physics may be aware that radon is a radioactive gas emitted naturally by certain sorts of rocks, and will know that radioactive materials get hot. Obviously radon added to the atmosphere will cause it to warm slightly, as it decays.

Those with an even more detailed knowledge may be aware that it decays by alpha-emission, the fast alpha particles being what actually heats the atmosphere, and of course an alpha particle is the nucleus of a helium atom.

Someone with more scientific knowledge can find more ways to interpret the question to obtain a desired answer. I can construct plausible and scientifically correct lines of reasoning to support all four options offered, all of them at once, or none of them. It doesn't mean that I don't know or don't believe that CO2 is a greenhouse gas.

The question isn't testing scientific understanding or belief. It's testing whether somebody knows the socially 'approved' answer to the question based on its presentation in the relentless political media campaigns, and is willing to give it. You're supposed to automatically recognise talk about rising temperatures in the atmosphere as a reference to AGW (as opposed to any other atmospheric temperature rise - e.g. a sunny day) and remember that it was blamed on CO2, as well as the line "most scientists say...". It's like testing whether people recognise advertising slogans for common household products. (Q: "Where's the beef?" A: "Wendy's") It is all highly culturally specific.

"The question doesn't ask which gas traps most heat; it asks which of the listed ones is a greenhouse gas."

Nope. It asks which causes atmospheric temperature to rise. (Scientists say.)

That's exactly my point about culturally-determined misinterpretation - you read "causes rising atmospheric temperature" and you see the words "greenhouse gas". It's like some sort of amazing optical illusion! That's a cultural effect - not a measurement of your scientific understanding/belief or reasoning.

Brains are amazing, aren't they?

December 30, 2015 | Unregistered CommenterNiV

@NiV--

Yes, in Item Response Theory, the relative difficulty & discrimination of an item are determined by regressing correct answer to item on the score of the test (which is not itself computed by summing correct answers but rather by weighting the questions in relation to the probability that they'll be answered correctly conditional on a level of the trait being measured).

That doesn't guarantee, however, that any particular item will have mean probability of "0" for getting correct answer at the lowest level of measured latent science literacy level -- or a mean probability of 1.0 of getting the answer correct at the highest level. It's pretty unlikely that the mean will be 0 even at low levels of some reasoning proficiency for "easy" questions. Consider profiles for "Electrons" in the first inset; or CO2 in the Pew battery. For very hard questions, too, it isn't unusual for the mean to be less than 1.0 at the highest level. Consider "Conditional" in the inset.

Actually, here are the item profiles for all 6 (all 6 of the ones in the battery; the profiles for evolution_c & gw_c when they are included in the battery are reflected in the post). You can see they are all easy. Nano_c has highest discrimination; indeed, CO2 & lasers are pretty marginal (sigmoid too gradual to contribute much to estimating Pr(knowledge level 1|correct) vs. Pr(knowledge level 2|correct) along the continuum of the latent trait).

It's true that there is an endogenous relationship between the predictor and the outcome variable in this method, of course. But the point of it is to generate information on the relative difficulty & discrimination of the items being used or being considered for use in the test-- something that can be done just fine despite endogeneity.

Another thing one can do just fine is see whether getting the items right has the same connection to the level of the trait among different groups. Just regress getting the answer right on group membership, the IRT test score & a cross-product interaction term. If the cross-product interaction term is "significant" (practically speaking as much as statistically), then obviously the response to the item doesn't have the same relationship to overall proficiency for the two groups. That's the technique I used (the one used in standardized test assessment construction) to identify cultural bias in the "evolution" & "climate change" items.

Most of the other concerns you mentioned can also be addressed with IRT. If the correct answer is unclear, then the analysis I described will result in a tell-tale profile-- one in which the response profile is flat, meaning that getting the correct answer isn't correlated with the skill being measured by the test.

You should like this. If one validates items with IRT, then one doesn't just "take the word" of the test designer that the answer he or she believes is "correct" is a valid measure of the trait or proficiency in question. Those who demonstrate the proficiency vouch for the correctness of the response (or at least the validity of the question).

You also worry about questions that have answers the test-takers know are "wrong" but will be scored as "correct."

This is an external validity issue. The test is still validly measuring *something* in that case -- namely, knowledge of what the *test designer* thinks is true. If the test designer's understanding of some subject is wrong, then obviously, the test is not measuring genuine knowledge of (or proficiency or aptitude or what have you w/ regard to) that subject.

IRT can't help you there, it's true.

(I've done a post already on why a "good" question for a general public science comprehension test would almost certainly be a disaster for a subject-matter expert)

On the CO2 question. My shorthand "greenhouse gas" is of no significance! It's still the case that the question asked which of the identified gases causes the temperature of the atmosphere to rise-- not which gas causes temperatures to rise the most--so who cares about water vapor? & for all I know -- not much about this topic-- there is some gas one could add that would increase it more than water vapor, too; the question doesn't advert to greenhouse gases, as you noted--you were the one who focused on gases deemed to be responsible for "climate change"!

Nothing surprising in your doing that or in my responding in the same way, given that the only practical reason for answering the question is climate change. But I don't think the question itself evinces any sort of political orientation etc.

December 30, 2015 | Registered CommenterDan Kahan
