Sunday, November 18, 2012

The Liberal Republic of Science, part 1: the concept of “political regime”  

I sometimes refer to the Liberal Republic of Science, and a thoughtful person has asked me to explain just what it is I’m talking about.  So I will.

But I want to preface my account—which actually will unfold over the course of several posts—with a brief discussion of the sort of explanation I will give.

One of the useful analytical devices one can find in classical political philosophy is the concept of “political regimes.” “Political regimes” as used there doesn’t refer to identifiable ruling groups within particular nations (“the Ceausescu regime,” etc.)—the contemporary connotation of this phrase—but rather to distinctive forms of government.

Moreover, unlike classification schemes used in contemporary political science, the classical notion of  “political regimes” doesn’t simply map such forms of government onto signature institutions (“democracy = majority rule”; “communism = state ownership of property,” etc.). Instead, it explicates such forms with respect to foundational ideas and commitments, which are understood to animate social and political life—determining, certainly, how sovereign power is allocated across institutions, but also deeply pervading all manner of political and even social and private life.

If one uses this classification strategy, then, one doesn’t try to define forms of government with reference to some set of necessary and sufficient characteristics. Rather, one interprets them by elaborating how their most conspicuous features manifest their animating principle, and how that principle makes sense of elements that seem peripheral and disparate, or that are salient and connected but otherwise puzzling.

In addition, while one can classify political regimes in seemingly general, ahistorical terms—as, say, Aristotle did in discussing the moderate vs. the immoderate species of “democracy,” “aristocracy” vs. “oligarchy,” and “monarchy” vs. “tyranny”—the concept can also be used to explicate the way of political life distinctive of a particular historical or contemporary society. Tocqueville, I’d say, furnished these sorts of accounts of the American political regime in Democracy in America and of the French one prior to the French Revolution in L’ancien Régime, although he admittedly saw both as instances of general types (“democracy” in the former case, “aristocracy” in the latter).

For another, complementary account of the “American political regime,” I’d highly recommend Harry Jaffa’s Crisis of the House Divided: An Interpretation of the Lincoln-Douglas Debates (1959). Jaffa was joining issue with other historians, who at the time were converging on a view of Lincoln as a zealot for opposing the pragmatic Stephen Douglas, who these historians believed could have steered the U.S. clear of the Civil War.  Jaffa depicts Lincoln as motivated to preserve the Union as a political regime defined by an imperfectly realized principle of equality. Because Lincoln saw any extension of slavery into the Northwest Territories as incompatible with the American political regime's animating principle, he viewed Douglas’s compromise of  “popular sovereignty” as itself destructive of the Union.

So what is the Liberal Republic of Science?  It’s a political regime, the animating principle of which is the mutually supportive relationship of  political liberalism and scientific inquiry, or of the Open Society and the Logic of Scientific Discovery.

Elaboration of that idea will be the focus of part 2 of this series.

The distinctive challenge that the Liberal Republic of Science faces—one that stems from a paradox intrinsic to its animating principle—will be the subject of part 3.

And the necessary role that the science of science communication plays in negotiating that challenge will be the theme of part 4.

So long!

References

Aristotle (1958). The Politics of Aristotle (E. Barker, Trans.). New York: Oxford University Press.

Jaffa, H. V. (1959). Crisis of the House Divided: An Interpretation of the Issues in the Lincoln-Douglas Debates (1st ed.). Garden City, NY: Doubleday.

Tocqueville, A. de (1969). Democracy in America (G. Lawrence, Trans.; J. P. Mayer, Ed.). Garden City, NY: Doubleday.

Tocqueville, A. de (2011). The Ancien Régime and the French Revolution (J. Elster & A. Goldhammer, Trans.). New York: Cambridge University Press.

Nos. Two, Three & Four in this series.

Reader Comments (15)

Your "complimentary" = "complementary," yes? I know this point is picayune, but ... sometimes I can't help myself.

N.B. I sense Dan Kahan is enamored of paradox, of (real or imagined) internal contradictions of thought, as it were. (I am not prescient: I confess I have read Installment No. 3. There Popper eats himself, he self-destructs, he falls on his own sword, the internal contradictions of his thought lay him & his ideal society low.) If so, I wonder how long Dan K can play his game before he drives himself into a corner and feels forced to fall into a Wittgenstein-like silence.

November 21, 2012 | Unregistered CommenterPeter Tillers

@Peter: Popper cannot be exploded. He is indestructible. Notwithstanding how Popper feels about Hegel, there's a dialectical happy ending to the story. Or better, there's no historical necessity that can't be avoided w/ some good science.
Thanks for the spelling correction!

November 21, 2012 | Registered CommenterDan Kahan

Can I be presumptuous enough to suggest that the beginning of an escape from paradox - either for science or for an ideal society - is to recognize that not everything is really believed to be or should be believed to be open to refutation or serious question? For example, no serious physicist really believes that F = MA is open to refutation -- though he, she, or it believes that F = MA can be (and has been) put within a deeper, or broader, theoretical framework (e.g., special relativity, I think). Scientists do have to work with some (relatively) fixed points, they need (and have) some (many) strong theoretical anchors. (So the notion in Daubert that science is only a methodology is quite wrong.) Probably some analogous principle -- the need for some belief-anchors -- applies in the social and political realm as well.

I strongly suspect that you've already said (somewhere) what I've just said above. What I've said here is practically a cliche [some sort of accent over the "e," I think].

November 21, 2012 | Unregistered CommenterPeter Tillers

@Peter: okay but how would you disprove what you just said?

November 21, 2012 | Unregistered Commenterdmk38

I would look for a serious physicist who thinks F = MA is invalid.

Yes, I know: I used the weasel word "serious." I suppose this shows I think some judgments are now beyond proof or disproof. Just as I believe F = MA is beyond question.

November 21, 2012 | Unregistered CommenterPeter Tillers

Actually, every serious physicist does believe F=ma is open to refutation, although it has so far not been refuted. Although in this case part of the reason for that is a subtle ambiguity about what the equation actually means - there is a sense in which it is circularly defined, since F=ma only applies in an inertial coordinate system, and inertial coordinate systems are defined (roughly) as those in which F=ma applies.

Consider what the equation means. Force equals mass times acceleration. And acceleration is the rate of change of velocity. But we all know that velocity depends on your reference frame. If you are sat in a moving train carriage, and you throw a ball from one hand to the other, do you define the velocity of the ball with respect to the train carriage, or with respect to the platform? Do we need to remember that the Earth is spinning at up to a thousand miles an hour? The coordinate system we happen to pick isn't something real; it's only a mental construct in our heads. In the equation F=ma, what is the acceleration measured with respect to?

Now if the train is moving at constant velocity, the acceleration works out the same either way (although it's still a bit weird being the rate of change of an undefined quantity). But if the train speeds up or turns a corner, then the acceleration is affected too and in the train carriage reference frame the force your hand applies is suddenly not the mass times the acceleration. There is an essential difference between the train and the platform, where we say that the train carriage is not an inertial frame of reference. But what does this mean? What is Newton's acceleration being defined with respect to? Empty space? Some sort of weird 'aether' that permeates it? The distant stars? Do we even understand what this quantity 'acceleration' actually is?

We can fix up the problem by introducing 'fictitious' forces, like centrifugal force. But if we do that, then do we understand what the concept 'force' really means? Is centrifugal force a 'force' or isn't it?
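
To put toy numbers on that point, here is a minimal Python sketch (the mass and the carriage's acceleration are made-up illustrative values, not anything measured): a ball resting on the platform feels no horizontal force, so F = ma holds in the platform frame, fails in the accelerating carriage frame, and is rescued there only by adding a fictitious force equal to minus the mass times the frame's acceleration.

    # Toy illustration: F = m*a in an inertial vs. a non-inertial frame.
    # All numbers are assumed for illustration only.
    m = 0.5                # mass of the ball, kg
    a_frame = 2.0          # acceleration of the train carriage, m/s^2

    F_applied = 0.0        # no horizontal force acts on the ball
    a_platform = 0.0       # platform (inertial) frame: the ball stays put
    a_carriage = -a_frame  # carriage frame: the ball appears to accelerate backwards

    print(F_applied == m * a_platform)   # True:  F = m*a holds in the inertial frame
    print(F_applied == m * a_carriage)   # False: F = m*a fails in the carriage frame

    F_fictitious = -m * a_frame          # the 'fictitious' correction term
    print(F_applied + F_fictitious == m * a_carriage)  # True: balance restored by hand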

I'll give you another example, involving Newton's third law: consider the gravitational forces between the Earth and the sun, which are 8 light minutes apart. The Earth is pulled towards the sun and the sun is pulled towards the Earth, but is it pulled towards where the Earth is now, or towards where it was 8 minutes ago?

If it is pulled towards where the Earth was 8 minutes ago, the forces don't balance. They point in slightly different directions, causing an unbalanced torque or 'twisting' to be applied to the Earth-sun system. The error is about one part in 65,000 (8 minutes divided by one year), and it means that on timescales on the order of 100,000 years the Earth's orbit would be unstable.
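
(A quick back-of-the-envelope check of that figure, in Python, using round assumed numbers for the light delay and the length of a year:)

    # Rough check of the 'one part in 65,000' figure: the sun-Earth light
    # delay divided by one orbital period, with round assumed numbers.
    light_delay_minutes = 8.0
    minutes_per_year = 365.25 * 24 * 60   # about 525,960 minutes
    ratio = light_delay_minutes / minutes_per_year
    print(ratio)                          # ~1.5e-05
    print(1.0 / ratio)                    # ~65,700 -- i.e. roughly one part in 65,000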

However, the alternative is that the sun is pulled towards where the Earth is now, rather than where it was 8 minutes ago. That is what Newton deduced and used. But how does it know, if signals cannot travel faster than light? Suppose you altered the Earth's orbit radically in less than a minute, would the sun still be pulled towards where it thought the Earth ought to be? How could all the forces possibly balance if it does?

The very best physicists can and do reconsider even the fundamentals, and by doing so have obtained some of our deepest insights into physics. Consideration of the reference frame problem led to Lagrangian and Hamiltonian dynamics, and also to general relativity. Consideration of the balance of forces in relativity implies the existence of gravitational radiation, and leads on to the gauge-symmetry picture of physics.

Physical theories are models and as the statistician George Box said "All models are wrong, but some are useful." But to be useful, you need to know the limits of the model's validity, you need to know where the problems and inconsistencies and uncertainties are, because only then can you apply the models safely. Only then can you hope to discover what lies beyond.

But even for the layman, it is essential to know that even the fundamentals are still open to challenge. They've survived so far (with adjustments), but there's no guarantee they'll survive the next challenge. This is fundamental to understanding what science is. It's not a set of facts and methods to be memorised, that scientists have discovered and proved to be true, and that you have to regurgitate for the exam. Science is a way of looking at the world that questions and tests everything. It's a mental toolbox of methods for making it harder to fool yourself. And you don't have to be a professional scientist to make use of it.

And as a big example, science teaches us that one of the easiest ways to fool ourselves is to accept traditional authority and common knowledge without careful consideration. People think it's "obvious", or "well-established". They figure it must have been checked already, and so don't bother to check themselves. Usually it has, but sometimes it hasn't, and the problem isn't picked up because everybody assumes that somebody else must have checked it, and sometimes there are subtleties and limitations that the expert has glossed over, as we saw above. As Feynman said: "Science is the belief in the ignorance of experts".

It's a paradox: scientists can trust textbook science without checking only because they know scientists don't trust textbook science without checking. They can skip the checks themselves only because they are confident that many other scientists didn't. Trusting scientists sort of works, but you can't make it a general rule and be consistent. It's practically necessary sometimes, but it's fundamentally not scientific.

December 2, 2012 | Unregistered CommenterNiV

@NiV: Compare what you are saying, particularly in the last paragraph, to discussions we've had about nullius in verba. I detect movement -- on your part, toward my view. But there's still a good way to go!

December 2, 2012 | Registered CommenterDan Kahan

I thought it was pretty much exactly what I had said before? To quote:

"It's quite true that from a practical point of view, the requirements of science cannot be achieved."

and

"It's a paradox - it's trustworthy only because it is not trusted. Scientists can trust authorities precisely because they know that scientists don't. Essentially, they trust other scientist to do what they have chosen not to. They base their conclusions on the belief that others will have done what they consider to be impractical. It violates Kant's formula of the categorical imperative: "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." Authority works for those who use it only because of the actions of those who don't."

It's not very original of me, but I liked the parallel with the mention of paradox in the first comment above, and it's a nice point from the earlier discussion that I thought was worth reviving. I didn't think either of us had moved the other, but thought that it might stimulate a discussion of the differences.

I don't think our positions are at all far apart regarding the effect. I agree that non-scientists do have various means to certify what is currently fashionable scientific opinion, that there is a lot of ad verecundiam regarding scientists in society, and that this gets infected with external partisan influences that bias thinking and polarise. I agree that both sides approve of science, and think they're the ones being scientific. My diagnosis is slightly different - I think it is a consequence of teaching science-by-authority that people are made vulnerable in their opinions to other sources of authority - and my solution would have a somewhat different emphasis - I agree on the need to improve science communication, though not as a way to make the existing scientific authorities more credible, but as a way of giving people the mental tools to actually do some science, and to better understand its limits and capabilities.

We developed the scientific method precisely to do this sort of thing, as our best available method of finding out what's right. We ought to teach people how to use it. It's a challenge! But there really is no alternative to science that has the same capabilities.

I think one other difference might be that I don't see the current lack of a single authoritative institution or system for certifying what is known as a flaw in need of perfection. I think that's exactly how it needs to be. Science works like evolution by natural selection, with all contenders competing chaotically, trying to defeat one another. The strongest survive. It's not so well done by a central committee picking winners and losers - what you might call the 'intelligent design' approach.

The lions keep trying to eat the gazelles, and never stop. No committee comes along and tells the lions that the debate is over, the science is settled, they've decided the gazelles are the faster and more agile so they might as well give up and go home. And it's precisely because they don't that the gazelles stay fast and graceful. The system requires continual conflict to work. The alternative is like a protected island paradise with no predators - which sounds ideal, until you realise you've taken the path of the Dodo.

Things have gone somewhat wrong in the current situation, in that the external partisan influences are keeping certain hypotheses alive long after they've been 'killed off' according to the scientific process. We might disagree about which ones those are, but I think we both agree it's happening.

However, I think the concept of 'truth certifying institutions' is playing to those influences' strengths. Such a system would be a target for takeover by the partisans. Unless you have some ideas for how to stop that happening?

December 2, 2012 | Unregistered CommenterNiV

NiV,

Your comment is very interesting. But I tried to anticipate your line of reasoning by using the clause "though he, she, or it believes that F = MA can be (and has been) put within a deeper, or broader, theoretical framework (e.g., special relativity, I think)." I still do not believe any "serious" physicist believes that further research etc., will invalidate F = MA. This, I think, is true even though in our everyday world matters such as centrifugal force must be taken into account, even though special calculations have to be made and "deeper" (or broader) principles have to be applied in domains where computations go awry if the principles of special relativity (or quantum theory etc.) are not taken into account, even though it is possible there are (or were) other universes in which "our" laws of nature do not hold, and so on. Serious physicists since Newton have used F = MA as a point of departure for further research and analysis, and not as a hypothesis open to refutation.

At its most general level, my point (to Dan Kahan) is that "free inquiry" requires some anchors for inquiry and argument. The idea that one must or can suspend judgment about anything and everything is wrong.

Your friendly non-physicist,

Peter

December 2, 2012 | Unregistered CommenterPeter Tillers

@NiV: Yes, well, the change in your position is very subtle, so I'm not surprised it doesn't leap out at you. The shift will continue in very small, almost indiscernible (actually, indiscernible; except to the trained eye of an expert) increments until in about, oh, 1 yr's time, you'll be where I am.

I don't believe there should be any single official system for certification, either. My point -- also for sure not original to me -- is that science wouldn't be possible but for the ability of scientists to recognize, and their disposition to accept, the word of those who know what they are talking about. In this sense, no difference between scientists and laypeople. Only difference is that scientists are guided by professional conventions of recognition, & laypeople by a diverse set of social ones. That doesn't really matter, though--except when it does. What's distinctive of science's way of knowing is not that it dispenses w/ the need for a currency of authority to enable the exchange of knowledge; it's that the knowledge being exchanged is of the form that reflects science's way of figuring things out -- empirically. So the sensibilities of laypeople & of scientists are both tuned to recognizing those who know what they are talking about in that way.

But in any case, another 18 mos., at most, & this is where you'll be, I'm sure.

December 3, 2012 | Registered CommenterDan Kahan

Dan,

Ah. I clearly don't have the trained eye of an expert.

But I do like scientists who are prepared to make testable predictions. We'll check back in 18 months. :-)

Peter,

What distinction are you making between "invalidate", and "putting into a broader, deeper framework"?

While I don't think any physicist would think it will be irreparably invalidated, it's a matter of principle that it always could be invalidated. How would you prove that it couldn't? Especially given the number of 'near misses' we've seen already with this very rule?

But even if every physicist was totally certain that it was true, they would still act as if it wasn't, for a number of reasons of principle. First, because our only justification for believing in a conclusion relies on us being open to any refutation. If we won't consider counter-arguments, there's no way to tell if the lack of counter-arguments is because there are none or merely because none are allowed. Second, because rehearsing the evidence and arguments strengthens our own understanding, roots out misunderstandings, and leads to deeper insights into the fundamental reasons. And third, because we might be wrong.

It's like having a faulty pocket calculator that sometimes gives wrong answers. How can you tell, if the calculator itself is your only means to check it? The checks you do might be flawed as well. Brains are calculators made of meat, and they never truly believe any of their own beliefs individually are wrong. If they did, they wouldn't be beliefs. But we only have to look at other people, and reflect on the fact that we're human too, to know that many of them are. Thus, as a matter of scientific principle, we never, ever, allow belief, no matter how firm and confident, to overrule evidence. We are always open to changing our minds if sufficient new evidence should come along.

Every generation of scientists thinks most of current scientific knowledge is correct, but if you go back and read what scientists believed a hundred, two hundred, five hundred years ago, it's amazing how much of it turned out to be wrong, often amusingly so. Do we truly believe that only this generation of scientists has at last got it all totally right?

December 3, 2012 | Unregistered CommenterNiV

Dear NIV,

Thanks for your additional comment. I apologize for the repetition of my first reply to you. I must learn to wait, to be patient, before concluding that something went wrong with my attempt to post.

I cannot reply to all the points you make. But I can and will make a couple of general points.

You are plainly a fan of a radical form of free inquiry. You believe in a form of inquiry in which all propositions -- at least "in principle" -- are up for grabs, open to question.

You also believe that this is the way science has to be done. And to prove :-) that that's so you appeal to history: you say we should look at how often our views of the world have been turned topsy-turvy. And you say that -- at least "in principle" -- all scientific hypotheses are uncertain and must be treated as such.

Ah, you and the real author of _Daubert_, who I think was probably really Charles Fried.

You exaggerate. It is true that many propositions about the world have been turned upside down or have been shattered. But it is not true that all of our beliefs about the world have been turned upside down. For example, if Euclidean geometry can be viewed as a set of hypotheses about the (some) world, that set of hypotheses still looks pretty sound (in certain "domains"; see further discussion below). Furthermore, the ancient Greeks probably asserted that smoke emanating from a campfire usually rises. If such a statement counts as a scientific hypothesis, it still holds pretty good water. And, yes, I repeat, Newton's laws are still valid in an important, if circumscribed, sense.

You suggest that Newton's laws are "in principle" uncertain. But a large part of your argument for that conclusion rests on the hypothesis that no hypothesis (except itself?) can possibly be certain. But this line of reasoning proves too much: it assumes that we can take something as certain only if we can construct a deductively conclusive proof. But this line of argument, I repeat, proves too much. (What it perhaps proves is that we need a better notion of the grounds on which we can take something as certain.)

Yes, F = MA is still valid even though (as I recall) physicists today view F = MA as a special case within, say, relativistic mechanics. But if F = MA is viewed as a "special case," it is (I believe) still generally viewed as a valid special case.
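
For what it's worth, here is a tiny numeric illustration (with an assumed, everyday speed -- the exact value doesn't matter) of why the Newtonian form survives as a valid special case: at such speeds the relativistic correction factor differs from 1 by a practically invisible amount.

    # Illustration (assumed everyday speed) of why F = ma works as a
    # low-speed special case of relativistic mechanics.
    c = 299_792_458.0   # speed of light, m/s
    v = 30.0            # an ordinary highway speed, m/s (assumed)
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    print(gamma - 1.0)  # ~5e-15: the correction to Newton's prediction is negligible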

Wittgenstein long ago discussed how specific branches of mathematics (e.g., Euclidean geometry) get taken up into or become part of more comprehensive mathematical theories. W overstated the case, but there was plainly something to what he said. And F = MA (and Newton's other laws) take an analogous position with regard to broader or deeper theories (such as relativistic mechanics) that have come and will come to be accepted as valid. That Newton's laws now occupy such a "subordinate" role -- that they are seen as a special case -- does not mean they are now considered "invalid" or "untrue."

I must rush off to do an important chore. But before I go let me signal the proposition I may try to address next: the fact that we can and should recognize that our seemingly-firmest hypotheses may someday be invalidated does not actually mean that we do or should treat all of our beliefs as provisional or uncertain. If we actually adopted this stance, we could not reason well about the world or do real science. (I will perhaps elaborate this point later.)

Best wishes,

Peter T

December 5, 2012 | Unregistered CommenterPeter Tillers

Peter,

Apologies for the delay in replying. I hadn't checked this thread the last few days.

I agree that I am a fan of a form of inquiry in which all propositions *in principle* are open to question. And part of the argument for that is indeed the number of times progress has been made by overturning propositions previously thought certain and unchallengeable. Euclid is, of course, another classic example - where it was long assumed by many to be a sort of intellectual bedrock, until those dissidents sceptical of the parallel axiom discovered non-Euclidean geometry. Ironically, the word 'geometry' means measurement of the Earth, and even the Greeks knew the Earth required spherical geometry, which is non-Euclidean.

But my argument wasn't simply about the utilitarian advantages of scientific revolutions. I was arguing that even if a proposition *really is* certain and true, that as a matter of epistemological principle you have to *treat it* as open to question in order that you can truly *know* it to be true.

Our confidence in these venerable principles you cite is so strong *precisely because* we know each generation of scientists *has* questioned them and they have survived every time. If we knew that nobody was questioning them, if we knew they were no better tested than the day after Euclid or Newton first said it, we wouldn't have that confidence. We wouldn't know if they had survived so long because they were right, or merely because the right questions had never been asked.

And the other, no less important, reason for checking is to avoid errors and misunderstandings creeping in to what was previously correct. Humans are fallible, and scientists are human. There are typos and misprints. Sometimes a principle may be misunderstood. Some principles are only applicable in certain circumstances, or involve some subtleties, which will be left out where they are not relevant or to simplify the explanation. Scientific education is incremental, with the first version taught being simple but wrong, and then corrections and elaborations are added to move the student gradually to the complicated but more 'correct' model. And as students drop out of the education system at various points they will come out holding models of varying completeness and accuracy. (Unfortunately, our education system is such that at every stage they are taught to believe them to be true and inviolable, but that's another argument.)

There are many influences tending to corrupt our body of knowledge, and the constant checking and questioning is like Science's 'immune system' for rooting out and cleaning up this damage.

And this protection is most essential around the fundamentals on which everything else rests. We are constantly on guard.

All that said, that does *not* mean that we have to question everything every single time we use it. It's that paradox I mentioned: scientists can trust textbook science without checking only because they know scientists don't trust textbook science without checking. You *sometimes* check. You check if something odd turns up. You check if some new discovery allows you to ask better questions. You may check out of curiosity, or as an exercise. You check to get a better understanding. You check if you get the impression that it hasn't been sufficiently checked previously. But if you're sufficiently confident that all the guards against error are visibly in place, you can often proceed more quickly and with less caution.

You will still sometimes get mugged by reality, but in the safer areas of science it is now mercifully rare.

Scientific theories and principles are models of reality - and we remember that all models are wrong, but some are useful. We pick a model to use, and for the duration of the calculation we assume that it is true. It's the same sort of willing suspension of disbelief we use when reading a fiction novel, or watching a film. You 'get into' the imagined universe and treat it as real. But somewhere in the back of your mind you have to be aware that that's what you're doing. There are no point masses, frictionless planes, ideal gases, rigid bodies, instantaneous reactions, smooth surfaces, monochromatic waves, inviscid fluids, etc. in reality. They're all as fictional as the characters in Aesop's fables. But there's *enough* truth in them to be useful.

December 10, 2012 | Unregistered CommenterNiV

Dear NiV,

We must stop quarreling like this in public. :-)

Newton did not arrive at his "laws" by dogmatic slumbers. Neither did Einstein conjure up special relativity that way. Etc. You have a much too Kuhn-like image of the progress of human knowledge. We actually do know more (about most things) than did, e.g., the ancient Greeks. Human knowledge has in fact actually advanced (about most things).

There are elements of truth in a Whig theory of the history of science and of branches of knowledge such as medicine. It is, I think, almost meaningless to say that everything is "in principle" open to question. We will advance human knowledge (rather than destroy it) often by taking as valid many hard-won knowledge discoveries. Scientists will in fact take many things for granted, as established, when they do serious science -- even if they might say, if asked, that "of course" all hypotheses are "in principle" open to question. (I repeat: to say that we accept certain principles -- and we stake our careers, and lives, etc., on many of them -- does not mean we abandon or should abandon the attempt to view them in some broader or deeper theoretical perspective.)

It might of course turn out that everything we believe is false. But if I were you, I wouldn't bet on it.

That's my last (undogmatic) word.

December 10, 2012 | Unregistered CommenterPeter Tillers

:-) It reminds me of that famous quote.

"The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote. Nevertheless, it has been found that there are apparent exceptions to most of these laws, and this is particularly true when the observations are pushed to a limit, i.e., whenever the circumstances of experiment are such that extreme cases can be examined. Such examination almost surely leads, not to the overthrow of the law, but to the discovery of other facts and laws whose action produces the apparent exceptions. As instances of such discoveries, which are in most cases due to the increasing order of accuracy made possible by improvements in measuring instruments, may be mentioned: first, the departure of actual gases from the simple laws of the so-called perfect gas, one of the practical results being the liquefaction of air and all known gases; second, the discovery of the velocity of light by astronomical means, depending on the accuracy of telescopes and of astronomical clocks; third, the determination of distances of stars and the orbits of double stars, which depend on measurements of the order of accuracy of one-tenth of a second-an angle which may be represented as that which a pin's head subtends at a distance of a mile. But perhaps the most striking of such instances are the discovery of a new planet or observations of the small irregularities noticed by Leverrier in the motions of the planet Uranus, and the more recent brilliant discovery by Lord Rayleigh of a new element in the atmosphere through the minute but unexplained anomalies found in weighing a given volume of nitrogen. Many other instances might be cited, but these will suffice to justify the statement that 'our future discoveries must be looked for in the sixth place of decimals'"

A A Michelson, Light Waves and Their Uses (1903).

December 10, 2012 | Unregistered CommenterNiV
