Sunday, January 18, 2015

What does & doesn't count as valid evidence of "ideologically motivated reasoning" & "symmetry" of the same

A friend wrote to me posing a question, implicit in which are a bunch of issues about what counts as valid evidence of motivated reasoning & the symmetry of it across the left-right "spectrum" -- or cultural worldview spectra.  

I've addressed many of these issues, most more than once, in various posts.

Indeed, I have on many occasions noted that almost everything I say is something I've said before, including that I've already noted on many occasions that everything I have to say is something I seem already to have said before. I think this is just sort of normal, actually, when one is engaged in a sort of "rolling conversation" w/ a fuzzy set of participants whose aim is the reciprocal exchange of ideas for mutual enlightenment, & I should just stop talking about this.

That's the first time I've said that, but I'm sure not the last....  

But in any case, I thought I'd share this particular exchange. Maybe it will be clearer or more accessible than some of the others, or will simply increase the likelihood that someone who can get value from my views (very possibly by coming to see more clearly why he or she thinks I've made an error!) will find these reflections on what sorts of study designs support inferences about "ideologically motivated reasoning" and the asymmetry of it.

My friend:

I want to say that your research has found that more numerate people are more biased on both ends of the political spectrum, but my recollection is that what you find is actually that more numerate people do not believe more in the reality of climate change.  My question is: Have you looked at the interaction – e.g., done a median split on numeracy and then compared the polarization graphs between the numerate and innumerate?

Me:

I don't think what you've asked me to show you can support the inference that any particular form of reasoning proficiency ("science literacy," "numeracy," "cognitive reflection" etc.)  magnifies ideologically motivated reasoning "symmetrically" (let's say) with respect to ideology. 

But I'll show you what you asked to see first, before explaining why.

A. "Looking at" the magnification of polarization conditional on science comprehension

There are a variety of ways to graphically display what you are asking for--a view, essentially, of the differential in polarization at different levels of reasoning proficiency.

I think what I've done below -- splitting the "ideology" predictor & doing a lowess regression for each 1/2 of the sample separately in relation to science comprehension -- is the best approach, or better in any case than splitting both continuous measures and giving you two pairs of percentages (for left-leaning & right-leaning at "below" & "above" average); this way you get the information benefit of the continuous science-comprehension measure.
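
Here, for concreteness, is a minimal sketch in Python of that split-plus-lowess approach. It is not the code behind the figure; the column names (left_right, osi, believes_climate_change) and the data file are hypothetical stand-ins.

```python
# Minimal sketch of the split-plus-lowess approach described above.
# NOT the CCP analysis code; column names and the data file are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

df = pd.read_csv("survey.csv")  # hypothetical dataset

# Split only the ideology predictor at its midpoint; keep OSI_2.0 continuous.
cut = df["left_right"].median()
halves = {"left-leaning": df[df["left_right"] <= cut],
          "right-leaning": df[df["left_right"] > cut]}

for label, sub in halves.items():
    # lowess() returns sorted (x, fitted) pairs: belief as a smooth function of OSI.
    fit = lowess(sub["believes_climate_change"], sub["osi"], frac=0.7)
    plt.plot(fit[:, 0], fit[:, 1], label=label)

plt.xlabel("OSI_2.0 (science comprehension)")
plt.ylabel("Pr(belief in human-caused climate change)")
plt.legend()
plt.show()
```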

 

These are the same data from your slide 7.

With the lowess, one can see pretty readily that the gap between "left-" & "right-leaning" respondents gets progressively larger from -1 SD (16th percentile) to +1 SD (84th) on OSI_2.0 & then pretty much levels out.

(As you know, "OSI_2.0" is a 1-dimensional science-comprehension measure that consists of "science literacy," numeracy & cognitive reflection items. It was formed using a 2PL Item response theory model. For details, see 'Ordinary Science Intelligence': A Science Comprehension Measure for Use in the Study of Science Communication, with Notes on 'Belief in' Evolution and Climate Change.). 

B. But what are we looking at? 

So if one is trying to get the practical point across, this justifies saying things like, "polarization tends to increase, not abate, as individuals with opposing political or cultural outlooks become more proficient in making sense of scientific information," etc.

But one can't on this basis infer that motivated reasoning is being magnified conditional on reasoning proficiency-- or as you put it, that "more numerate people are more biased on both ends of the political spectrum."

The question-- is human-caused climate change occurring?-- is a factual one that presumably has a correct answer. Thus, one "side" -- "liberals" or "conservatives" -- is presumably becoming more likely to get the correct answer as reasoning proficiency increases.

It is thus possible that motivated reasoning is increasing conditional on reasoning proficiency for the side that is becoming more likely to get the "wrong answer" but dissipating conditional on reasoning proficiency for the side that is becoming more likely to get the right answer!

That inference isn't logically compelled, of course.  If one is predisposed to believe something that happens to be true, then motivated reasoning will increase the likelihood of "getting the right answer."

But in that case, your getting the right answer won't prove you are smart; it will show only that you were lucky, at least on that particular issue.

The point, though, is that the evidence we are looking at above is equally consistent with the inference that motivated reasoning is being magnified symmetrically by enhanced reasoning proficiency and with the inference that ideologically motivated reasoning is "asymmetric" with respect to ideology. 

C. Observing what we really are trying to figure out

There is, I think, only one way to determine whether greater polarization conditional on greater reasoning proficiency is being caused by an ideologically symmetric (more or less) magnification of motivated reasoning: by looking at how people reason independently of whether they are getting the right answer.

What we need to see is how biased or unbiased the reasoning of those on both "sides" is as each side's members display greater reasoning proficiency.

I'll show you results from two studies that bear on this.

1. Motivated system 2 reasoning 

In the first (Kahan 2013), subjects evaluated "evidence" of the validity of the cognitive reflection test as a measure of "reflective & open-minded" reasoning. The experimental manipulation was the representation that those who score higher are more likely or instead less likely to accept evidence of human-caused global warming.

One might like to use a valid test of reflective reasoning (particularly an objective, performance-based one like the CRT, say, as opposed to self-report measures like "need for cognition," which a fair number of researchers persist in using despite their dubious validity) to test the oft-asserted claim that "right-leaning" individuals are more "dogmatic," "closed-minded," etc. than "left-leaning" ones.

But if one is moved to selectively credit or discredit evidence of the validity of an open-mindedness test based on whether it supports the conclusion that those who share one's own ideology are "more open-minded and reflective" than are one's ideological opposites, one's own ideologically motivated reasoning will prevent one from being able to carry out such a test.

Indeed, if both "left-leaning" and "right-leaning" individuals display this effect, then we can conclude that both sides are disposed to be unreflective & closed-minded in their thinking on this very issue.

That's what we see, of course:


How likely subjects were to credit the evidence that the CRT was "valid" in this study was conditional on the interaction of experimental treatment and ideology: that is, the more conservative subjects were, the more likely they were to conclude the CRT is valid in the "believers score lower" condition and invalid in the "skeptics score lower" one; and vice versa as subjects became more liberal.
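
A minimal sketch of the kind of model that corresponds to this claim -- not the study's actual specification; the variable names (valid, cond, conserv, crt) and the data file are hypothetical:

```python
# Hypothetical sketch of the condition x ideology interaction described above,
# plus the three-way interaction with CRT discussed next. Not the study's code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("crt_validity_experiment.csv")  # hypothetical data file
# valid:   1 = subject credited the evidence that the CRT is valid
# cond:    experimental condition ("believers_lower" vs. "skeptics_lower")
# conserv: continuous right-left political outlook measure
# crt:     the subject's own CRT score

m1 = smf.logit("valid ~ C(cond) * conserv", data=df).fit()
print(m1.summary())  # the condition x ideology interaction is the key term

m2 = smf.logit("valid ~ C(cond) * conserv * crt", data=df).fit()
print(m2.summary())  # a reliable three-way term is what "magnification" means here
```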

This inference doesn't depend on the CRT actually being an "open-mindedness" measure: the design would have worked just as well if the measure whose validity was being assessed had been a fictional "open-mindedness" one.

Nevertheless, the effect observed is conditional on the subjects' CRT scores: the higher their score, the more likely they were to display this bias.

Accordingly, we can see that the form of critical-reasoning proficiency measured by CRT is indeed magnifying ideologically biased information processing.

2. Motivated numeracy

In the second study (Kahan et al. 2013), subjects performed a standard "covariance detection" problem.

The problem is diagnostic of a tendency to over-rely on heuristic reasoning: trying to infer causation by examining only the difference between the number who "got better" & the number who "got worse" in the fictional study's "treatment" condition. That approach generates the wrong answer.
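
To make the heuristic's failure concrete, here is a toy version with illustrative numbers (of the kind such 2x2 problems use; not necessarily the study's exact figures):

```python
# Toy 2x2 covariance-detection problem (illustrative numbers only).
#                      got better   got worse
# used the treatment        223          75
# did not use it            107          21

# Heuristic answer: look only at the treatment row.
heuristic_says_it_works = 223 > 75  # True -- "more improved than worsened"

# Correct answer: compare the better/worse *ratios* across the two rows.
treated_ratio = 223 / 75    # about 3.0 to 1
untreated_ratio = 107 / 21  # about 5.1 to 1

# The untreated group improved at a higher rate, so the data actually cut
# against the treatment -- the opposite of what the heuristic suggests.
print(heuristic_says_it_works, round(treated_ratio, 2), round(untreated_ratio, 2))
```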

As expected, in the control condition of our study (effectiveness of a new skin-rash treatment), there was a very strong correlation between correctly answering the problem and numeracy (indeed, one has to be at about the 90th percentile before there is even a 70% chance of getting it right).

But in the treatment condition of our study -- in which the outcome of a "gun control ban" experiment was manipulated-- subjects higher in numeracy were more likely to get the correct answer only if the data, properly interpreted, supported the position consistent with their own ideology.

So again, we see that critical reasoning is being used opportunistically: evidence is being assessed correctly depending on whether it supports an ideologically congenial outcome.

We see, too, that critical reasoning is magnifying this effect; subjects who are higher in numeracy are reliably summoning their greater proficiency to resist heuristic reasoning only when heuristic reasoning generates an uncongenial answer.

And finally, we are seeing that the effect is symmetric. Or in any case, we see it on both the left and the right. (If you want to discuss whether it is "bigger" on the right, I'm happy to go into that; the answer is actually "no" -- although many people perceive that it is!)

3. Lesson: Manipulate motivating stake, measure "likelihood ratio"

Note that in both of these experiments, the "high proficiency" subjects are "more polarized" than their "low proficiency" counterparts. They are "better" -- more reliable -- at fitting the evidence to their ideological predispositions.

If this is how people process information in the world, then we will see, in an observational study, that those higher in proficiency are more polarized. We'll be able to infer, moreover, that this is the result of the magnification of biased information processing & not a result of one "side" getting the "right answer."

Indeed, the whole point of the experimental design was to unconfound quality of reasoning from getting the "right answer." This was done, in effect, by manipulating the subjects' motivational stake in crediting one and the same piece of evidence.

In Bayesian terms, we are demonstrating that subjects opportunistically adjust the likelihood ratio in response to an identity-protective motivation unrelated to assessing the evidentiary weight of the new information (Kahan 2015).

This is a more general point: studies that purport to show "motivated reasoning" or "biased assimilation" by looking at the equivalent of "posterior probabilities" are almost always invalid. They are consistent with the inference that "one side is biased," or even that "neither side is," because differences in opinion after review of evidence are consistent with different priors or with pre-treatment effects (consideration of the evidence in question, or its equivalent, before the study). One should instead manipulate the stake the subjects have in the outcome & assess how that affects the likelihood ratio assigned to one and the same piece of evidence -- conceptually speaking (Kahan 2013).
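
A minimal numerical sketch of that Bayesian point, with made-up numbers:

```python
# Illustrative numbers only: why posteriors alone can't establish bias.
def posterior_odds(prior_odds, likelihood_ratio):
    # Bayes' theorem in odds form: posterior odds = prior odds x likelihood ratio
    return prior_odds * likelihood_ratio

lr = 2.0  # two unbiased subjects assign the SAME likelihood ratio to the evidence

print(posterior_odds(4.00, lr))  # 8.0 -- started out believing, still believes
print(posterior_odds(0.25, lr))  # 0.5 -- started out doubting, still doubts

# Both updated identically, yet their opinions remain far apart: divergent
# posteriors are consistent with unbiased reasoning plus different priors.
# The experimental signature of identity-protective cognition is that the
# likelihood ratio itself shifts when the motivating stake is manipulated.
```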

References


Reader Comments (12)

So what does this mean, that cultural bias is even worse amongst the more intelligent than the less intelligent?

It all sounds frighteningly similar to what researchers in human psychology are now finding:

Greene used fMRI to show that emotional responses in the brain, not abstract principles of philosophy, explain why people think various forms of the "trolley problem" (in which you have to choose between killing one person or letting five die) are morally different....

Wilson predicted in Sociobiology: that the old approaches to morality, including Kohlberg's, would be swept away or merged into a new approach that focused on the emotive centers of the brain as biological adaptations. Wilson even said that these emotive centers give us moral intuitions, which the moral philosophers then justify while pretending that they are intuiting truths that are independent of the contingencies of our evolved minds....

Studies of everyday reasoning show that we usually use reason to search for evidence to support our initial judgment, which was made in milliseconds.

http://edge.org/conversation/moral-psychology-and-the-misunderstanding-of-religion

January 20, 2015 | Unregistered CommenterGlenn Stehle

@Glenn:

To figure out if a form of information processing reflects bias or some other shortcoming in rationality, one has to have a defensible account of what someone should be understood to be attempting to accomplish by engaging the information in question.

If, in particular, culturally motivated reasoning is *magnified* by every exercise of every reasoning proficiency we have managed to measure and form a reasonable understanding of, then we ought to consider that forming culturally congenial beliefs *is* one of the things people *do* with information.

Tell me why exactly it makes more sense for someone whose own personal behavior -- as consumer or voter or conversant in public conversation -- will necessarily have zero impact on, say, the risks posed by climate change, or by poorly considered responses to the same, to devote a tremendous amount of time and energy to forming beliefs that are consistent with the best available science. No mistake she makes about the best evidence in any of those capacities will affect the risk she or *anyone else* faces.

Now tell me why you'd imagine someone whose social competence among his closest peers -- the people in his everyday life whose confidence in him will determine their willingness to extend him material and other vital forms of social support -- wouldn't develop a style of information processing that brings his beliefs into alignment with theirs on issues where position signifies membership in & loyalty to their group?

Seems perfectly rational to me in that situation that individuals will use their reason to form identity-protective beliefs. And that those who are higher in one or another reasoning proficiency will do this all the more successfully.

It's also perfectly clear that in a world like that, the collective interest of all will suffer, precisely b/c of the misalignment between the private interest in forming identity-protective beliefs and the public interest in having democratic citizens converge on the best evidence on issues that affect common welfare.

The problem in that situation, though, is "not stupid people; it is a polluted communication environment, stupid!"

January 21, 2015 | Registered CommenterDan Kahan

@ Dan Kahan

When I think of "science," I normally think of a transcendental quest for objective truth, and not the parochial group-think of groups. Here's how the philosopher of science and avowed "objectivist" in the "Science Wars," Paul Boghossian, put it in Fear of Knowledge:

The intuitive view is that there is a way things are that is independent of human opinion, and that we are capable of arriving at belief about how things are that is objectively reasonable, binding on anyone capable of appreciating the relevant evidence regardless of their social or cultural perspective.

As Boghossian explains, however, there is a long tradition of "anti-objectivist conceptions of truth and rationality" that run through Western philosophy like a thread, from Kant and Hume, to Nietzsche, and right on down to the twentieth century with "Ludwig Wittgenstein, Rudolf Carnap, Richard Rorty, Thomas Kuhn, Hilary Putnam and Nelson Goodman, just for example."

One of the hurdles which the objectivists face is that entirely too many scientists, all in the name of objectivism of course, have used science to craft what the philosopher of science Stephen Toulmin calls "cosmopolitical arguments." As Toulmin explains in Cosmopolis:

The function of cosmopolitical arguments is to show members of the lower orders that their dreams of democracy are against nature; or conversely to reassure the upper class that they are superior by nature.... [T]he Newtonian view [was] of a stable system "kept in order" by universal and unchanging central forces. In the social realm, the Newtonian view called for stable institutions, unambiguous class structure, centralized power, and defense of the state's sovereign autonomy from external interference.

For a great case study of how cosmopolitical arguments work in the real world, I recommend Maria Elena Martinez's Genealogical Fictions. As she demonstrates, the arrival of The Enlightenment and The Age of Reason to Mexico, brought by the enlightened Bourbon monarchs beginning in the 18th century, proved devastating to the well-being of the indigenous populations of Mexico, perhaps even more so than the ideologies of the Spanish conquistadors. As she explains:

As scientific explanations to sexual and racial differences gained ground over religious ones, colonial Mexico's population became subject, like the animals and plants in natural histories, to increasingly elaborate and visual taxonomic exercises that made the gendering of race and racing of gender as well as social hierarchies seem to be ordained by nature.

And just imagine, Ayn Rand, an absolutist secular stealth religionist par excellence, was deluded enough to call her stealth religion "objectivism."

But Rand was far from being unique. The bright red line in the sand that supposedly separates the modern Age of Reason from the ancient world of religion and superstition, the conviction that we can "scrap our inherited systems of concepts and start again from scratch -- with a clean slate -- using 'rationally validated' methods," as Toulmin explains in Cosmopolis, is nothing but a Modernist myth. Toulmin relates the following story to illustrate his point:

Once rationalism raised the intellectual stakes, Catholics could not go on playing by older, more relaxed rules: if formal rigor were the order of the day in physics and ethics, theology must follow suit. Confronting the Protestant heretics on the one side, and skeptical deists on the other, the theologians decided: "If we can't join them, let us beat them at their own game."

In the Library of the Convent of Ste. Genevieve, near the Pantheon in Paris, is a manuscript entitled Traité sur l'autorité et de la réception du Concile de Trente en France. It describes the struggle, after the Council of Trent, to uproot the "pernicious heresies and errors" of Protestantism, and paints a revealing picture of the intellectual position of the Catholic Church in early 18th-century France. The whole argument is an example of history written retrospectively: It begins, "The Council of Trent was summoned to root out the errors of Luther"; and its final pages show how far the demand for "undeniable foundations" had made its way into Catholic theology by 1725. Looking back, the author credits the Council with anachronistic motives, which are intelligible only if already, in the 1570s, it could invoke the principles of a philosophical rationalism that was invented in the 1630s. The ambition of the Counter-Reformation, it tells us, was "to prove invincibly our most fundamental belief."

One of Descartes' ideas was to the deists' tastes, Toulmin concludes, and that was "his insistence on the need for certainty."

January 22, 2015 | Unregistered CommenterGlenn Stehle

Glenn,

I don't understand your comment. Are you arguing for or against objectivism? And do you mean the scientific aim to be 'objective', or do you mean the Randian philosophy Objectivism which is something quite different and shares only a name with it? This argument sounds a bit like equivocation.

That there have been philosophers who argued against objective reality is not evidence that their position has any merit. Nor is it evidence that some have so argued for political purposes.

And the issue with the theologians' attempt to use rigorous logic in defence of their beliefs (which dates back to long before the Enlightenment and modernism, as my Thomist friends would confirm) is not that they were anachronistic, but that they didn't understand logic.

They used Cargo Cult logic - which has the appearance and general flavour of logic, but misses out what one might call the essential ingredient. A lot of intellectuals and academics today do the same thing.

January 22, 2015 | Unregistered CommenterNiV

@NiV

What I'm arguing is that objectivism, and not groupism, is what science should aspire to.

As Rogers Brubaker asserts, "Invoking groups," "evoking groups," "summoning them," "calling them into being" and "reifying them" is what political and economic entrepreneurs do. It is not what scientists ought to do.

As Brubaker goes on to explain:

Ethnic common sense -- the tendency to partition the social world into putatively deeply constituted, quasi-natural intrinsic kinds (Hirschfeld 1996) -- is a key part of what we want to explain, not what we want to explain things with; it belongs to our empirical data, not to our analytical toolkit. Cognitive anthropologists and social psychologists have accumulated a good deal of evidence about common-sense ways of carving up the social world -- about what Lawrence Hirschfeld (1996) has called 'folk sociologies'. The evidence suggests that some common sense social categories -- and notably common sense ethnic and racial categories -- tend to be essentializing and naturalizing (Rothbart and Taylor 1992; Hirschfeld 1996; Gil-White 1999). They are the vehicles of what has been called a 'participants' primordialism' (Smith 1998: 158) or a 'psychological essentialism' (Medin 1989). We obviously cannot ignore such common sense primordialism. But that does not mean we should simply replicate it in our scholarly analyses or policy assessments. As 'analysts of naturalizers', we need not be 'analytic naturalizers' (Gil-White 1999: 803).

January 23, 2015 | Unregistered CommenterGlenn Stehle

Ah, I see. I think what you're arguing against is the use of argumentum ad populum - as exemplified by 'consensus' arguments, and "Everybody knows..." arguments. (The reference to Rand was what confused me - I guess that was just a brief diversion off topic and not relevant to the point.)

I agree.

January 23, 2015 | Unregistered CommenterNiV

@ NiV

I figured any disparaging remarks about Saint Rand would get your dander up, and you certainly didn't disappoint.

Ayn Rand was a fundamentalist religious fanatic who completely lost touch with factual reality, all in the name of "objectivism" of course, as David Sloan Wilson explains in this lecture at The Science Network.

January 23, 2015 | Unregistered CommenterGlenn Stehle

I wasn't bothered by what you said about Ayn Rand - I was confused by it. You seemed to be talking about 'objectivism' in the sense of 'scientific objectivity' - I think the philosophy is more commonly called Scientific Realism but I could sort of figure out what you meant - and speaking in support of it. Then you suddenly switched to Rand's Objectivism, which is a completely unrelated political/moral philosophy, and in a way that clearly indicated you were against it. So were you arguing for or against, and for or against what? Was there supposed to be some connection I wasn't seeing, or was it as it appeared a complete non sequitur?

That you'd not like Ayn Rand is pretty much a given - and I expect she'd have regarded your opinion of her philosophy as that of just another looter and had a similar opinion of yours. Your spite is understandable, given what she said about your sort. There's no novelty in that. But it's sometimes worth listening to people who disagree, as they're more likely to know of the best arguments against a position.

"...as David Sloan Wilson explains in this lecture at The Science Network."

Do you have a briefer summary of his argument? I don't intend to waste half an hour of my life watching your video if it's just the usual nonsense.

January 24, 2015 | Unregistered CommenterNiV

@ NiV

So let me get this straight.

First you tell us that "it's sometimes worth listening to people who disagree, as they're more likely to know of the best arguments against a position."

And then, no sooner than having said that, you turn right around and say: "Do you have a briefer summary of his argument? I don't intend to waste half an hour of my life watching your video if it's just the usual nonsense."

I gotta give it to you, you take cognitive dissonance to an entirely new level.

And furthermore, Wilson's talk is key, because in it he describes in detail why Ayn Rand made such a mockery of the word objectivism.

And by the way, if you will read my comments carefully you will see it is Paul Boghossian who uses the term objectivism and gives it a very specific definition. When I use the word objectivism, I use it to mean the same thing Boghossian did. But what Ayn Rand called "objectivism" is actually the antithesis of what Boghossian calls objectivism, as it in no way, form, or fashion resembles what Boghossian meant when he used the word objectivism.

January 24, 2015 | Unregistered CommenterGlenn Stehle

"I gotta give it to you, you take cognitive dissonance to an entirely new level."

It's sometimes worth listening, it's often not. There's a balance between costs and benefits to be struck. The odds of the lecture being worth my time are very slim, but I'd be willing to spend a few minutes skimming such an argument just in case.

Unfortunately, when people give me links to thousand page documents or hours-long videos saying "my argument/evidence is somewhere in there", it usually turns out to be some brief and unsupported assertion buried in reams of irrelevant ranting. Life's too short.

If you've got a good argument, then you can give a brief summary setting out the essential points. If you've not, don't expect me to go hunting in your haystack for it.

"But what Ayn Rand called "objectivism" is actually the antithesis of what Boghossian calls objectivism, as it in no way, form, or fashion resembles what Boghossian meant when he used the word objectivism."

Ah! Marvellous! So he redefined 'objectivism' to mean something completely different, and then criticised Rand's philosophy for not fitting his definition! The chutzpah needed to do that with a straight face is impressive!

No, I don't think I'll bother watching.

January 24, 2015 | Unregistered CommenterNiV

@ NiV

"When people give me links to thousand page documents or hours-long videos"?

Exaggerate much?

The talk by David Sloan Wilson is 30 minutes long.

Something, however, tells me it wouldn't make any difference how long it is. Your defense mechanisms are so well fortified that they appear capable of filtering out any evidence which doesn't conform to your ideology, whether presenting that evidence takes 30 seconds or 30 hours.

January 24, 2015 | Unregistered CommenterGlenn Stehle

The 30-second version was what I was asking for.

And I've already changed my mind about something on the basis of something you said. I found your earlier discussion of group-level selection interesting, and while I didn't think the argument as presented was valid, I could see ways to extend it to make it so. I think I said so, too, and thanked you.

My 'defense mechanisms' are strong because I've been having these discussions for many years, I've seen most of the arguments before, and have either already changed my mind on them or know the answers. I still make a conscious effort to be open-minded, but it will take a strong, coherent, and logical argument to get me to do so - I'm not going to be swayed by post-modern Marxist obscurantism, empty but intellectual-looking quotations from 'authorities' I don't recognize as such, or invitations to view the same at length in video lectures.

My objection is not really about the time required to watch the video, (although I don't discount that), it's actually a check to see if there's a genuine argument here, or obfuscation. If there's a genuine argument, it's usually possible to condense it to a one-paragraph summary that makes the logical necessity clear. If that seems plausible, then you can go and look at all the details. But people who don't have a real argument can sometimes give the impression that they do by burying it in complexities and convolutions. Or as Locke put it:

Nevertheless, this artificial ignorance, and learned gibberish, prevailed mightily in these last ages, by the interest and artifice of those who found no easier way to that pitch of authority and dominion they have attained, than by amusing the men of business, and ignorant, with hard words, or employing the ingenious and idle in intricate disputes about unintelligible terms, and holding them perpetually entangled in that endless labyrinth. Besides, there is no such way to gain admittance, or give defence to strange and absurd doctrines, as to guard them round about with legions of obscure, doubtful, and undefined words. Which yet make these retreats more like the dens of robbers, or holes of foxes, than the fortresses of fair warriors: which, if it be hard to get them out of, it is not for the strength that is in them, but the briars and thorns, and the obscurity of the thickets they are beset with. For untruth being unacceptable to the mind of man, there is no other defence left for absurdity but obscurity.

If somebody is unable to give a short and clear summary of their argument, then they're either citing something they don't themselves understand, or are trying some tactic such as the above and are well aware that their argument cannot withstand such scrutiny. It's a great time-saver.

For what it's worth, I was never very impressed with Rand's attempt to derive her 'Objectivist' philosophy axiomatically - it was a distinctly amateur effort. (As were the dozens of previous attempts by virtually every other classical philosopher.) However, that's not the part that her fans are interested in - it was her political and moral theories that people liked. So if all you're objecting to is her claim to have based it all on 'objective reality', save your breath. I already agree, and it doesn't affect the parts that her fans regard as her significant work.

January 25, 2015 | Unregistered CommenterNiV
