Monday, January 28, 2013

Measuring "Ordinary Science Intelligence" (Science of Science Communication Course, Session 2)

This semester I'm teaching a course entitled the Science of Science Communication. I have posted general information on the course and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the first such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general.

In Session 2 (i.e., our 2nd class meeting) we started the topic of “science literacy and public attitudes.” We (more or less) got through “science literacy”; “public attitudes” will be our focus in Session 3.

As I conceptualize it, this topic is in the nature of foundation laying. The aim of the course is to form an understanding of the dynamics of science communication distinctive of a variety of discrete domains. In every one of them, however, effective communication will presumably need to be informed by what people know about science, how they come to know it, and what value they attach to science’s distinctive way of knowing. So we start with those.

By way of synthesis of the readings and the “live course” (as opposed not to “dead” but to “online”) discussion of them, I will address these points: (1) measuring “ordinary science intelligence”—what & why; (2) “ordinary science intelligence” & civic competence; (3) “ordinary science intelligence” & evolution; and (4) “ordinary science intelligence” as an intrinsic good.

1. “Ordinary science intelligence” (OSI): what is being measured & why?

There are many strategies that could be, and are, used to measure what people know about science and whether their reasoning conforms to scientific modes of attaining knowledge. To my mind at least, “science literacy” seems to conjure up a picture of only one such strategy—more or less an inventory check against a stock of specified items of factual and conceptual information. To avoid permitting terminology to short circuit reflection about what the best measurement strategy is, I am going to talk instead of ways of measuring ordinary science intelligence (“OSI”), which I will use to signify a nonexpert competence in, and facility with, scientific knowledge.

I anticipate that a thoughtful person (like you; why else would you have read even this much of a post on a topic like this?) will find this formulation question-begging. “A nonexpert competence in, and facility with, scientific knowledge? What do you mean by that?”

Exactly. The question-begging nature of it is another thing I like about OSI. The picture that “science literacy” conjures up not only tends to crowd out consideration of alternative strategies of measurement; it also risks stifling reflection on what it is that we want to measure and why. If we just start off assuming that we are supposed to be taking an inventory, then it seems natural to focus on being sure we start with a complete list of essential facts and methods.  But if we do that without really having formed a clear understanding of what we are measuring and why, then we’ll have no confident basis for evaluating the quality of such a list—because in fact we’ll have no confident basis for believing that any list of essential items can validly measure what we are interested in.

If you are asking “what in the world do you mean by ordinary science intelligence?” then you are in fact putting first things first. Am I--are we--trying to figure out whether someone will engage scientific knowledge in a way that assures the decisions she makes about her personal welfare will be informed by the best available evidence? Or that she’ll be able competently to perform various professional tasks (designing computer software, practicing medicine or law, etc.)? Or maybe to perform civic ones—such as voting in democratic elections? If so, what sort of science intelligence does each of those things really require? What’s the evidence for believing that? And what sort of evidence can we use to be sure that the disposition being measured really is the one we think is necessary?

If those issues are not first resolved, then constructing and assessing measures of ordinary science intelligence will be aimless and unmotivated. They will also, in these circumstances, be vulnerable to entanglement in unspecified normative objectives that really ought to be made explicit, so that their merits and their relationship to science intelligence can be reflectively addressed.

2. Ordinary science intelligence and civic competence

Jon Miller has done the most outstanding work in this area, so we used his self-proclaimed “what and why” to help shape our assessment of alternative measures of OSI.  Miller’s interest is civic competence. The “number and importance of public policy issues involving science or technology,” he forecasts, “will increase, and increase markedly” in coming decades as society confronts the “biotechnology revolution,” the “transition from fossil-based energy systems to renewable energy sources,” and the “continuing deterioration of the Earth’s environment.” The “long-term health of democracy,” he maintains, thus depends on “the proportion of citizens who are sufficiently scientifically literate to participate in the resolution of” such issues.

We appraised two strategies for measuring OSI with regard to this objective. One was Miller’s “civic science literacy” measure. In the style of an inventory, Miller’s measure consists of two scales, the first consisting largely of key fact items (“Antibiotics kill viruses as well as bacteria [true-false]”; “Does the Earth go around the Sun, or the Sun go around the Earth?”), and the second aimed at recognition of signature scientific methods, such as controlled experimentation (he treats the two as separate dimensions, but they are strongly correlated: r = 0.86). Miller’s fact items form the core of the National Science Foundation’s “Science Indicators,” a measure of “science literacy” that is standard among scholars in this field. Based on rough-and-ready cutoffs, Miller estimates that only 12% of U.S. citizens qualify as fully “scientifically literate” and that 63% are “scientifically illiterate”; Europeans do even worse (5% and 73%, respectively).

The second strategy for measuring OSI evaluates what might be called “scientific habits of mind.” The reason to call it that is that it draws inspiration from John Dewey, who famously opposed a style of science education that consists in the “accumulation of ready-made material,” in the form of canonical facts and standard “physical manipulations.” In its place, he proposed a conception of science education that imparts “a mode of intelligent practice, an habitual disposition of mind” that conforms to science’s distinctive understanding of the “ways by which anything is entitled to be called knowledge.”

There is no standard test (as far as I know!) for measuring this disposition. But there are various “reflective reasoning” measures--the "Cognitive Reflection Test" (Frederick), "Numeracy" (Lipkus; Peters), "Actively Open-Minded Thinking" (Baron; Stanovich & West), and "Lawson's Classroom Test of Scientific Reasoning"--that are understood to assess how readily people credit, and how reliably they make active use of, the styles of empirical observation, measurement, and inference (deductive and inductive) that are viewed as scientifically valid.

The measures used for "science literacy" and "scientific habits of mind" strike me as obviously useful for many things. But it’s not obvious to me that either of them is especially suited for assessing civic competence. 

Miller’s superb work is focused on internally validating the “civic scientific literacy” measures, not externally validating them. Neither he nor others (as far as I know; anyone who knows otherwise, please speak up!) has collected any data to determine whether his “cut offs” for classifying people as “literate” or “illiterate” predict how well or poorly they’ll function in any tasks that relate to democratic citizenship, much less whether they do so better than more familiar benchmarks of educational attainment (high-school diplomas and college degrees, standardized test scores, etc.). Here's a nice project for someone to carry out, then.

The various “reflective reasoning” measures that one might view as candidates for Dewey’s “habit of mind” conception of OSI have all been thoroughly vetted, but only as predictors of educational aptitude and reasoning quality generally. They, too, have not been studied in any systematic way as markers of civic aptitude.

Indeed, there is at least one study that suggests that neither Miller’s “civic science literacy” measures nor the ones associated with the “scientific habits of mind” conception of OSI predicts quality of civic engagement with what is arguably the most important science-informed policy issue now confronting our democracy: climate change. Performed by CCP, the study in question examined science comprehension and climate-change risk perceptions. It found that public conflict over the risks posed by climate change does not abate as science literacy, measured with the “NSF science indicator” items at the core of Miller’s “civic science literacy” index, and reflective reasoning skill, as measured with numeracy, increase. On the contrary, such controversy intensifies: cultural polarization among those with the highest OSI measured in this way is significantly greater than polarization among those with the lowest OSI.
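The statistical shape of that finding is an interaction, not a main effect: the slope of risk perception on OSI differs across cultural groups, so the gap between groups widens as OSI rises. The simulation below illustrates only the pattern; the numbers, group labels, and model are invented for this sketch and are not the CCP data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
osi = rng.uniform(0.0, 1.0, n)        # science comprehension score, scaled 0-1
group = rng.integers(0, 2, n)         # two hypothetical cultural groups

# Stylized model: OSI pushes risk perception in opposite directions by group.
slope = np.where(group == 0, 2.0, -2.0)
risk = 5.0 + slope * osi + rng.normal(0.0, 0.5, n)

def group_gap(mask):
    """Mean difference in risk perception between the groups within a subset."""
    return risk[mask & (group == 0)].mean() - risk[mask & (group == 1)].mean()

low, high = osi < 0.33, osi > 0.67
print(f"gap among low-OSI respondents:  {group_gap(low):.2f}")
print(f"gap among high-OSI respondents: {group_gap(high):.2f}")
```

In data of this shape the between-group gap is several times larger in the high-OSI tercile than in the low one, which is the signature of "polarization increasing with science comprehension"; note too that the sample-wide average effect of OSI would depend entirely on how many respondents of each group the sample happens to contain.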

We also discussed one more conception of OSI: call it the “science recognition faculty.”  If they want to live good lives—or even just live—people, including scientists, must accept as known by science many more things than they can possibly comprehend in a meaningful way. Their well-being will thus depend on their capacity to recognize what is known to science independently of being able to verify that, or understand how, science knows what it does. “Science recognition faculty” refers to that capacity.

There are no measures of it, as far as I know. It would be fun to develop some.

But my guess is that it’s unlikely any generalized deficiency in citizens’ science recognition faculty explains political conflicts over climate change, or other policy issues that turn on science, either.  The reason is that most people most of the time recognize without difficulty what is known to science on billions & billions of things of consequence to their life (e.g., “who knows how to make me better if I’m ill?”; “will flying on an airplane get me where I want to go? How about following a GPS?”; “should parents be required to get their children vaccinated against polio?”).

There is, then, something peculiar about the class of conflicts over policy-relevant science that interferes with people’s science recognition faculty. We should figure out what that thing is & protect ourselves—protect our science communication environment—from it. 

Or at least that is how it appears to me now, based on my assessment of the best available evidence.

3. Ordinary science intelligence and “belief” in evolution

Perhaps one thinks that what should be measured is a disposition to assent to the best scientific understanding of evolution—i.e., the modern synthesis, which consists in the mechanisms of genetic variance, random mutation, and natural selection. If so, then none of the measures of OSI seems to be getting at the right thing either.

The NSF’s “science indicators” battery includes the question “Human beings, as we know them today, developed from earlier species of animals (true or false).” Typically, around 50% select the correct answer (“true,” for those of you playing along at home).

In 2010, a huge controversy erupted when the NSF decided to remove this question and another—“The universe began with a huge explosion”; only around 40% tend to answer this question correctly—from its science literacy scale.  The decision was derided as a “political” cave-in to the “religious right.”

But in fact, whether to include the “evolution” and “big bang” questions in the NSF scale depends on an important conceptual and normative judgment. One can design an OSI scale to be either an “essential knowledge” quiz or a valid and reliable measurement of some unobservable disposition or aptitude. In the former case, all one cares about is including the right questions and determining how many a respondent answered correctly. But in the latter case, correct responses must be highly correlated across the various items; items the responses to which don’t cohere with one another necessarily aren’t measuring the same thing.

If one wants to test hypotheses about how OSI affects individuals’ decisions—whether as citizens, consumers, parents, or what have you—then a scale that is merely a quiz and not a valid and reliable latent-variable measure will be of no use: if responses to the items are uncorrelated with one another, then necessarily the aggregate “score” will be randomly connected to anything else respondents do or say.  It is to avoid this result that scholars like Jon Miller have (very appropriately, and with tremendous skill) focused attention on the psychometric properties of the scales formed by varying combinations of science-knowledge items.
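The distinction between a quiz and a latent-variable measure can be made concrete with a small simulation. Everything below is invented for illustration (item names included): five binary items driven by one latent disposition, plus one item driven by an unrelated trait. Two standard psychometric diagnostics, item-rest correlations and Cronbach's alpha, flag the item that doesn't cohere with the rest of the scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
ability = rng.normal(size=n)       # the latent disposition the scale targets
other_trait = rng.normal(size=n)   # an unrelated trait (e.g., religiosity)

def answer(trait, difficulty):
    """Probability of a correct answer rises with the trait (logistic link)."""
    p = 1.0 / (1.0 + np.exp(-(trait - difficulty)))
    return (rng.random(n) < p).astype(int)

# Five items that all tap the same disposition, at varying difficulty...
items = {f"fact_{i}": answer(ability, d) for i, d in enumerate([-1.0, -0.5, 0.0, 0.5, 1.0])}
# ...and one item driven by something else entirely.
items["odd_item"] = answer(-other_trait, 0.0)

X = np.column_stack(list(items.values()))
total = X.sum(axis=1)

def cronbach_alpha(mat):
    """Classic internal-consistency coefficient for a set of scale items."""
    k = mat.shape[1]
    return k / (k - 1) * (1.0 - mat.var(axis=0, ddof=1).sum() / mat.sum(axis=1).var(ddof=1))

item_rest = {}
for j, name in enumerate(items):
    rest = total - X[:, j]                       # score on the remaining items
    item_rest[name] = np.corrcoef(X[:, j], rest)[0, 1]
    print(f"{name:>8}: item-rest r = {item_rest[name]:+.2f}")

print(f"alpha, all six items:     {cronbach_alpha(X):.2f}")
print(f"alpha, dropping odd_item: {cronbach_alpha(X[:, :5]):.2f}")
```

A quiz happily totals all six items; a latent-variable measure drops (or never admits) the item whose responses are uncorrelated with the rest, since whatever it measures, it isn't the same disposition.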

Well, if one is trying to form a valid and reliable measure of OSI, the “evolution” and “big bang” questions just don’t belong in the NSF scale. The NSF keeps track of how the top-tier of test-takers—those who score in the top 25% overall—have done on each question. Those top-scoring test takers have answered correctly 97% of the time when responding to “All radioactivity is man-made (true-false)”; 92% of the time when assessing whether “Electrons are smaller than atoms (true-false)”; 90% of the time when assessing whether “Lasers work by focusing sound waves (true-false)”; and 98% of the time when assessing whether “The center of the Earth is very hot (true-false).” But on “evolution” and “big bang,” those same respondents have selected the correct response only 55% and 62% of the time. 

That discrepancy is strong evidence that the latter two questions simply aren’t measuring the same thing as the others. Indeed, scholars who have used the appropriate psychometric tools have concluded that “evolution” and “big bang” are measuring respondents’ religiosity. Moreover, insofar as the respondents who tend to answer the remaining items correctly a very high percentage of the time are highly divided on “evolution” and “big bang,” it can be inferred that OSI, as measured by the remaining items in the NSF scale, just doesn’t predict a disposition to accept the standard scientific accounts of the formation of the universe and the history of life on Earth.

The same is true, apparently, for valid measures of the “habit of mind” conception of OSI.  In general, there is no correlation between “believing” in the best scientific account of evolution and understanding it at even a very basic level. That is, those who say they “believe” in evolution are no more likely than those who say they believe in divine “creation” to know what genetic variance, random mutation, and natural selection mean and how they work within the modern synthesis framework.  How well one scores on a “scientific habit of mind” OSI scale—one that measures one’s disposition to form logical and valid inferences on the basis of observation and measurement—does predict both one’s understanding of the modern synthesis and one’s aptitude for being able to learn it when it is presented in a science course.  But even when they use their highly developed “scientific habits of mind” disposition to gain a correct comprehension of evolution, individuals who commence such a course “believing” in divine creation don’t “change their mind” or abandon their belief.

It is commonplace to cite the relatively high percentage of Americans who say they believe in divine creation as evidence of “low” science literacy or poor science education in the U.S. But ironically, this criticism reflects a poor scientific understanding of the relationship between various measures of science comprehension and beliefs in evolution.

4. Ordinary science intelligence as an intrinsic good

Does all this mean that OSI—or at least the “science literacy” and “habits of mind” strategies for measuring it—is unimportant? It could only conceivably mean that if one thought that the sole point of promoting OSI was to make citizens form a particular view on issues like climate change or to make them assent to, and not merely comprehend, scientific propositions that offend their religious convictions.

To me, it is inconceivable that the value of promoting the capacity to comprehend and participate in scientific knowledge and thought depends on the contribution doing so makes to those goals. It is far from inconceivable that enhancing the public’s OSI (as defensibly defined and appropriately measured) would improve individual and collective decisionmaking.  But I don’t accept that OSI must attain that or any other goal to be worthy of being promoted. It is intrinsically valuable. Its propagation in citizens of a liberal society is self-justifying.

This is the position, I think, that actually motivated Dewey to articulate his “habits of mind” conception of OSI.  True, he dramatically asserted that the “future of our civilization depends upon the widening spread and deepening hold of the scientific habit of mind,” a claim that could (particularly in light of Dewey's admitted attention to the role of liberal education in democracy) reasonably be taken as evidence that he believed this disposition to be instrumental to civic competence. 

But there’s a better reading, I think. “Scientific method,” Dewey wrote, “is not just a method which it has been found profitable to pursue in this or that abstruse subject for purely technical reasons.”

It represents the only method of thinking that has proved fruitful in any subject—that is what we mean when we call it scientific. It is not a peculiar development of thinking for highly specialized ends; it is thinking so far as thought has become conscious of its proper ends and of the equipment indispensable for success in their pursuit.

The advent of science’s way of knowing marks the perfection of a human capacity of singular value.  The habits of mind integral to science enable a person “[a]ctively to participate in the making of knowledge,” which Dewey identifies as “the highest prerogative of man and the only warrant of his freedom.”

What in Dewey’s view makes the propagation of scientific habits of mind essential to the “future of our civilization,” then, is that only a life informed by this disposition counts as one “governed by intelligence.”  “Mankind,” he writes “so far has been ruled by things and by words, not by thought, for till the last few moments of history, humanity has not been in possession of the conditions of secure and effective thinking.” “And if this consummation” of human rationality and freedom is to be “achieved, the transformation must occur through education, by bringing home to men’s habitual inclination and attitude the significance of genuine knowledge and the full import of the conditions requisite for its attainment.”

To believe that we must learn to measure the attainment of scientific habits of mind in order to perfect our ability to propagate them honors Dewey’s inspiring vision.  To insist that the value of what we would then be measuring depends on the contribution that cultivating scientific habits of mind would make to resolution of particular political disputes, or to the erasure of every last sentimental vestige of the ways of knowing that science has replaced, does not.

Reading list.

 


Reader Comments (9)

“… Indeed, there is at least one study that suggests that neither Miller’s “civic science literacy” measures nor the ones associated with the “scientific habits of mind” conception of OSI predict quality of civic engagement with what is arguably the most important science-informed policy issue now confronting our democracy: climate change. … cultural polarization among those with the highest OSI measured in this way is significantly greater than polarization among those with the lowest OSI …”

So… the more scientifically literate one is, the more one may tend not to accept climate change as a serious issue.

I work in engineering and I have yet to talk to a practicing engineer or engineering tech who does not believe the case for anthro-caused climate change is overstated. This does not apply to academia, though, as I do know several engineering professors who do believe anthro-caused climate change may be a serious issue. Granted, I have not talked to practicing engineers directly involved in the sale, design, and installation of renewable projects. They may have a different view on the importance of acting now to reduce CO2 to reduce climate change.

As an aside, change the topic from global warming to GMOs, nuclear power, or vaccinations and see how the different groups change sides on their view of who “denies the science.”

January 28, 2013 | Unregistered CommenterEd Forbes

@Ed: The finding was that the more science literate & numerate one was, the more one either credited or discounted climate change risks conditional on one's cultural outlooks. I.e., the impact of science literacy depends on who you are. So it is actually a misreading of the study results to say that as one becomes more science literate one doubts climate change risks more -- the "main effect" in the sample will depend entirely (and thus arbitrarily) on how many of each type of person are in the sample!

In general, I think no cultural group is more or less pro-science (or science literate) than any other. In fact, they are all very pro-science, which is what makes disputes over policy-relevant science quite intriguing—and super sad...

--Dan Kahan

January 28, 2013 | Registered CommenterDan Kahan

I note the paper on climate change you linked has risk perceptions. There is an aspect of this that needs consideration: conflating uncertainty and assumptions, and the impact this has on risk perception, especially when communicating in a non-adversarial manner.

From Miller: The “number and importance of public policy issues involving science or technology,” he forecasts, “will increase, and increase markedly” in coming decades as society confronts the “biotechnology revolution,” the “transition from fossil-based energy systems to renewable energy sources,” and the “continuing deterioration of the Earth’s environment.” I have to ask, is it always an assumed output that there is a requirement to do something, as implied here? Confronts, transition, continuing deterioration of Earth's environment. Why confront, when we might welcome? Why believe that we will transition to renewables, since we have yet to develop the technology or the subsystems to do this transition? Why look at the deteriorating part when we have shown by measurement that in the US air, soil, and water in general are getting better? I find that not only can this be seen as confrontational, it also indicates the solutions are determined by one's risk perceptions. From the CC paper, this looks to be "picking a fight."

I see the problem you face consists of measurement. Not just what you want to measure, but what measurement means with respect to assumptions and uncertainties. Each of the Miller items has assumptions and uncertainties that would affect someone's conclusions about the measurement or facts being presented. Take the transition to renewables: how would you word this question? One could choose a system as the CC paper did, but other than telling you a tendency of how certain individuals are predisposed, that will not tell you whether you are right or not; as I point out, we may never transition to renewables, thus it cannot be the right answer if that is what happens. Another is the supposition that citizens are not taking CC as seriously as scientists think they should. Looking at it from risk perception and the individualist and communitarian, I would conclude that the scientists fit in the communitarian, and the public in the general sense sees this, and correctly concludes that they should lean towards the individualists. Should this measurement you are trying to produce not account for the fact that persons will look at the scientists as "ivory tower" and find themselves more inclined to support the risk perceptions of the individualists due not to right or wrong per se, but to what they believe of the risk stance of the parties giving the information?

This does not get us to the next level of measurement. As one learns more of a science subject, in general, one gets a more nuanced understanding until one develops, internally, estimates of how certain the information is. With a subject like CC, the level of uncertainty is high. For risk, what assumptions are known, or unknown, can affect one's perceptions. An example, where the number of assumptions is high: the individual who goes not to AR4 for knowledge and nuance, but to a blog where "realists" state 1C for doubling is impossible but bang on about 10C for doubling. They are both low-probability events. But culturally, they find agreement or disagreement. Compare this to a person who does read and understand the uncertainty and assumptions in AR4, who reads this and then reads what some of the activists say in the news media, where it is "alarmist." They may see as I do that the science has not been communicated well in the first place, and see both blogs and the scientists, if not the science itself, as skewed.

So, the ability to do science will not mean that one will conclude that we should do something now. One may conclude that a good policy cannot be formed at this time. This presents a cultural problem in that, for a tester or even a professor studying behavior, they conclude that there must be some reason the other cannot see the seriousness of climate change; but the correct answer is that the wrong question is being asked, or the wrong policy is being supported. Such as the example of transition to renewables: different assumptions about capabilities can give different answers, and to conclude this is a lack of science in some manner on the part of the responder will be wrong. So, how will you measure your questions, or tests for this, on this secondary level of understanding?

January 28, 2013 | Unregistered CommenterJohn F. Pittman

@John:
Let's separate 3 things:

a. political dispute over what policy to adopt on climate change or any other of the issues Miller alludes to;

b. a state of persistent public polarization not over policy outcomes but over the state of the evidence relevant to any policy anyone might support on these issues; and

c. the attainment of a high degree of "ordinary science intelligence."

I think (a) is perfectly normal and nothing to worry about. The healthiest, most science literate liberal democracy will have (a) b/c "what to do" is a matter of value, and people in a liberal society inevitably will value different things. Figuring out what to do when people disagree about which states of affairs are best is what democracy is for.

I think (b) is a disaster. Diverse values do not imply diversity of facts; indeed, in a state where there is not a confident basis for saying what the facts are, people will not be able to make intelligent decisions about which policies best reflect their values. If (b) exists even when in fact the state of the evidence admits of clear articulation (and even if what is clear is "things are uncertain"; it's possible for people to agree about that!), then enlightened democracy is impeded, and for everyone, regardless of their values.

I'm sure (c) is good. I'm also sure it won't dispel (a) and shouldn't be expected to.

It is possible that w/o (c) there will be more frequent conditions of (b). But I don't think (c) by any means protects a democracy from (b). Indeed, I don't think the most conspicuous instances we have of (b) today are caused by failure to attain (c).

Agree? Disagree? Some of each?

January 28, 2013 | Registered CommenterDan Kahan

(a) I agree with. (c) I would say is a goal of an enlightened democracy, and that I also agree with.

(b) Reading it makes me wonder if you are committing the 20-20 hindsight error in reverse. My reasoning: facts are not knowledge. Knowledge, especially scientific knowledge, is a construct, not just facts. Otherwise the iterative and contemplative sequence you speak of, which yields science, would be reduced to simple fact gathering. This is the thrust of high-assumption and high-uncertainty science. It is furthest from simple fact gathering.

I would disagree with you in this way: facts are a necessary condition, but not sufficient, for scientific knowledge. This can and should be said for science whose construct has many assumptions and high uncertainties. It is still science and may well be excellent science, with good scientists coming to what appear to be 180-degree different conclusions. One would expect a state of persistent public polarization, not over policy outcomes, but over the state of the evidence in such instances. The reason is that the "real" answer is unknown, especially if you stick to the motif of "facts." An example: TCS is 1.4, not 2.0. We establish this as fact. Where it becomes knowledge in this example would be whether a TCS of 1.4 meant a goal of 450 ppm in year 2150 versus 350 ppm in year 2050 for the same policy implementation with respect to the same temperature increase of 2C. The answer would depend on which assumptions were shown to be true, not on the fact that both yield a 2C rise.

I would say that humans use diverse potential facts in the values frame because the facts are not actually known.

You state: "indeed, in a state where there is not a confident basis for saying what the facts are, people will not be able to make intelligent decisions about which policies best reflect their values." People do not always do this, and may be unable to do this. What they do in cases of high assumptions and uncertainty is make decisions that best reflect their values, because the actual answer is not known. And often this is from their risk perspective. That the result is not always intelligent, I do not disagree. In fact, my problem is that I agree with you so much about not being able to make an intelligent decision, which is why I tend to be a CC obstructionist. The conclusion I get from studying is that we should have no more confidence about our ability to measure and predict CC at this point than to have a policy to go for the "low hanging fruit." Any other policy is likely to be a disaster. This conclusion is not in opposition to the fact that CO2 should be expected to cause some warming, since that is a simple fact. The problem is our inability to directly measure and our inability to predict effects. The ability to measure and predict are typically accepted products of science. In fact, if one cannot do these two, how does one come up with an intelligent policy based on science? High assumptions mean we cannot be sure of our measurement; though one can have a floor-to-ceiling CI, such cannot be used for supporting knowledge or policy that needs a discriminant answer. High uncertainty means we cannot be sure of our predictions, in which case the Santer quote that the recent pause is not inconsistent with models should tell the reader that the models are literally from floor to ceiling and thus not useful. An intelligent decision in science should have a discernible answer and a useful construct (model) in which to frame it.

You said: "Indeed, I don't think the most conspicuous instances we have of (b) today are caused by failure to attain (c)." I have to ask, in the case of (b), what does (c) have to do with it? But for (c), cannot and should not a high degree of OSI yield more polarization, or more of an impediment, in an enlightened democracy in the case of (b)? Otherwise this would be a failure of OSI to discriminate actual knowledge from a litany of facts.

Your thoughts?

January 28, 2013 | Unregistered Commenter John F. Pittman

It will take more time (I haven't had much lately) to digest this...

After a quick read, some quick questions come to mind (perhaps after I read more closely, I won't have the same questions?).

I would imagine that you must have read Frames of Mind. I wonder if you think it is useful to reconcile the notion of OSI with the notion of multiple intelligences?

If you accept the basic framework of multiple intelligences, do you see some sort of hierarchy amongst different types of intelligence (that might correlate with proficiency in evaluating civic competence, say)?

What attributes distinguish scientific intelligence from other forms of intelligence?

Given that the validity of a test is the extent to which it measures what it is intended to measure, do you have a definitional problem? If the goal is to determine what attributes underlie ability to contribute to good civic policy, it seems to me you'd need to have a working definition of what comprises good civic policy. How do you objectively determine what good civic policy looks like?

January 28, 2013 | Unregistered Commenter Joshua

Another test of scientific literacy. (I can't find the test itself, but the description of it looks promising, except for the excessive effort to develop a consensus view.) (I could not find this in your syllabus.)


http://www.lifescied.org/content/11/4/364

P.S. on the last comment. I have read "Frames of Mind" and I think it is designed to make everyone feel good: everyone can be good at something. It accomplishes this by totally ignoring (what is now) 109 years of research on the g factor in intelligence. I'm not saying that this research is absolute truth. Rather, I think that Gardner (and others in his tradition) had a responsibility to say why it isn't; pretending it doesn't exist is not the way to do that.

January 29, 2013 | Unregistered Commenter Jon Baron

Yes.

I was thinking of a 3-D box, or multiple 3-D boxes, similar to the 2-D boxes in the Kahan et al. Nature article.

I do think there are at least potential definitional problems that relate to our ability to measure. In the comment you have in the post above ("What would I advise..."), you mention stakeholder dialogue and the fact that, in general, science communication has a structure built not just of words but of groups as well.

To go forward, I think we would need to discuss that comment, because I agree with your observation but would also point out an answer yet to be determined. What constitutes a scientific expert opinion? Within your observed framework and the citizens' capacity to determine who knows what about what, where would the following lie: citizens who have determined who knows what about what, yet disagree with the "consensus"?

The reason I ask is that to make a good (useful) measurement, one often needs to know the limits of what can be measured, for whatever reason, and how precise and accurate that measurement is. If the science of science communication is going to be more than a pass/fail exercise, discernment will be needed, IMO.

January 29, 2013 | Unregistered Commenter John F. Pittman

JB -

I have some major issues with Frames of Mind also (not quite the same as those that you have); I certainly don't take it as gospel. But I can't agree that it "ignores" research on the g factor in intelligence, and I believe that it does discuss some considerations of intelligence that, while perhaps not as groundbreaking as Gardner seemed to believe, are fundamentally true and important to consider. An argument for different forms of intelligence does not negate the more traditional notion of logico-mathematical intelligence, along the lines of what we typically consider intelligence tests to measure.

This means a lot to me because, as an educator, I have seen how a chauvinistic approach to what comprises "intelligence" has a deleterious impact on many students in our educational system. It is not a coincidence that, as a society, the functional implications of how we define intelligence dovetail nicely with the types of skills and abilities that perpetuate the existing socio-economic status quo. I have worked with many students who excel in tasks typically considered to be evidence of "intelligence," but who fail at woodshop, or lack the kind of intelligence it takes to have positive interactions with their peers (or adults), or who can't come to school with matching socks. I have worked with students who fail at the tasks typically considered to be essential measures of intelligence, but who could easily take all your money at Three Card Monte before you even knew what hit you. My guess is that Larry Bird might not do particularly well on an IQ test, but I admire his genius on the basketball court (such as being able to tell you at any given split-second where every player is on the court and how best to leverage their positioning to his advantage). A sailor from a traditional society may not do well on an IQ test, but I'd trust his genius over that of Stephen Hawking to assimilate myriad variable conditions, such as the feel of the current, or the color of the sky, or the smell of the air, to navigate our way home at night in an open canoe.

Attempt to make everyone feel good? I will postulate that you are making huge assumptions in reaching such a conclusion. No doubt there is "motivated reasoning" that impacts Gardner's analysis, but such a simplistic interpretation of his work as "designed to make everyone feel good" seems just as likely, to me, to be a reflection of your motivated reasoning as of his. My guess is that you have a pre-existing ideological bent associated with a "bigotry of soft expectations," or objections to multi-culturalism, or concern about a loss of societal standards, or distrust of "feel-good libruls" who just don't accept that some people are less capable than others, or a supposed growth in narcissism among youth, etc. Might I be right in my speculation?

January 29, 2013 | Unregistered Commenter Joshua
