DonDueed says
Very good and thought-provoking article, Mano. “Preponderance of evidence” seems like a good standard, reminiscent of the legal standard for civil cases. Of course that involves some subjective judgment concerning the point at which “lots of evidence” qualifies as a “preponderance”.
brucegee1962 says
Congratulations! Very cool!
However, the article was short, and I’m afraid you lost me here:
But the field known as science studies (comprising the history, philosophy and sociology of science) has shown that falsification cannot work even in principle.
Surely science textbooks are filled with examples of scientific theories that have been disproved and discarded when contrary (or just a lack of confirming) evidence is found. Phlogiston comes immediately to mind, or Lamarckism.
After this sentence you pivot to defending science studies, where you lay down some convincing arguments. But I may just be too dense to pick up on your evidence for the titular thesis.
Mano Singham says
DonDueed @#1,
It was very perceptive of you to note the connection with the legal system. In fact, in my book I spend quite a bit of time discussing the similarities between how scientific consensus verdicts are reached and how the legal system operates. There is always subjective judgment involved. That is why one needs to create systems that minimize its influence, though one can never completely eliminate it.
Rob Grigjanis says
brucegee1962 @2: There was no ‘eureka’ moment when phlogiston was dumped as a theory. Its proponents just kept making it more complicated until it eventually lost all credibility. So, yeah, some theories end up on the trash heap, but it’s not because of some magical ‘falsifiability’ principle.
Mano Singham says
brucegee1962 @#2,
You are quite right that textbooks are full of examples of falsification. Because this is such an important issue, my book discusses in great detail how this folklore arose and why it has so strongly grabbed the imagination of scientists and the general public that it seems to be obviously true when it is not.
Rob Grigjanis’s comment #4 is on point.
grahamjones says
Congratulations on the article, also on your book, which I have just finished.
I was frustrated by your book; it wasn’t the book I wanted it to be. My philosophy of science in a nutshell is “All models are wrong, but some are useful” (due to the statistician George Box). Of course, if all models are wrong, it is impossible for science to progress via falsification. See https://en.wikipedia.org/wiki/All_models_are_wrong, from which I quote:
“All models are wrong” is a common aphorism in statistics; it is often expanded as “All models are wrong, but some are useful”. It is usually considered to be applicable to not only statistical models, but to scientific models generally. The aphorism recognizes that statistical/scientific models always fall short of the complexities of reality but can still be of use.
Now this is going to be a bit unfair, but to express my frustration briefly, your book seemed like a long-winded argument to get to where I wanted it to start.
Note the aphorism dates from the 1970s. Why are scientists 40 years too slow? Or are they? Maybe it is only a problem for physicists, and then perhaps really only some fundamental physicists? I’ve worked in the areas of machine learning and mathematical biology, and I’m pretty sure the scientists I know would not regard their work as a quest for the truth, or that progress in their field proceeds via falsification.
Rob Grigjanis says
grahamjones @6: I think the “problem” is for the public, and some philosophers. Anyone who actually works with models (in cosmology, particle physics, etc) knows perfectly well that those models are ‘wrong’ in the sense of being approximations with varying domains of validity. Not only that, but most calculations are themselves approximations within those models.
Mano Singham says
grahamjones @#6,
Thanks for reading the book and the comment!
I am familiar with the aphorism “All models are wrong, but some are useful” and my book is somewhat congruent with that view, but not entirely. The aphorism seems to elide an important question. Saying that all models are ‘wrong’ implies that there is some standard to which a model can be compared (the ‘right’ one) and found wanting. What is that standard?
The question of truth arises because science seems to show inexorable progress, which prompts the question of what (if anything) it is progressing towards. The idea that the goal is truth was openly challenged by the philosophers of science way back in the 1970s (and there were subtle hints even earlier), but that view has not permeated the general scientific community.
As for falsification, as brucegee1962 said @#2, science textbooks are full of purported examples that seemingly show falsification. During the controversy over the inclusion of intelligent design creationism in science curricula, one of the most common arguments against ID was that it was not falsifiable and hence not science. This was a key element of the Dover case.
Mano Singham says
Rob @#7,
My comment above to grahamjones also applies here. When you say that something is an ‘approximation’, it implies that there is a standard of comparison that it is approximate to. What is that standard?
Rob Grigjanis says
Mano @9: ‘approximation’ in this sense does not allude to a standard. It is simply a recognition that models are built on limited data, and that any predictions they make become unreliable outside the boundaries defined by the data.
For example, the Standard Model has at least two built-in limitations; it doesn’t include gravity, and it can only be assumed valid (i.e. give accurate transition amplitudes, etc) up to the energies accessible to us.
DonDueed says
A further thought: the notion of “preponderance of evidence” seems to be closely related to the concept of “paradigm shift” as described in “The Structure of Scientific Revolutions”. Specifically, it seems that when two significantly different theories (lay definition) are in play, the paradigm shift only occurs when that preponderance of evidence is reached.
How would you characterize the connection between your ideas and Kuhn’s?
Mano Singham says
DonDueed @#11,
I am heavily indebted to Kuhn, especially for his book The Structure of Scientific Revolutions, which I used as a text in a seminar course. There is not much daylight between Kuhn and me, as will be clearly seen in my book.
Mano Singham says
Rob @#10,
It is not clear to me why the boundaries of the data are a meaningful marker of when the predictions of theories become reliable. Whenever we make a prediction, we are going beyond the data. If we do not have confidence when we extrapolate beyond the extremities of the data, why should we have confidence when we interpolate between data points within that range? In both cases, we are going into regions we have not yet seen.
Rob Grigjanis says
Mano @13:
If we do not have confidence when we extrapolate beyond the extremities of the data, why should we have confidence when we interpolate between data points within that range?
Our confidence regarding interpolation increases as we do more experiments, increasing the number of data points within the range accessible to us. As far as extrapolation goes, it depends on the theory.
If you know the mass and radius of the Earth, you can deduce the distance to the Moon from Newton’s law of universal gravitation by timing the fall of an apple.
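(A minimal numerical sketch of that deduction, assuming in addition that the Moon’s sidereal orbital period is known and that the orbit is roughly circular; the drop height, fall time, and Earth radius below are illustrative textbook values, not figures from this thread.)

```python
import math

# Hedged sketch: estimate the Earth-Moon distance from local gravity plus
# Newton's law of universal gravitation, assuming a circular lunar orbit
# and a known orbital period (an extra input beyond the apple drop).

# "Timing the fall of an apple": a drop from height h in time t gives g = 2h/t^2.
h = 4.9                      # metres (hypothetical drop height)
t = 1.0                      # seconds (hypothetical measured fall time)
g = 2 * h / t**2             # ~9.8 m/s^2

R_earth = 6.371e6            # metres, radius of the Earth (assumed known)
GM = g * R_earth**2          # from g = GM / R_earth^2

T_moon = 27.32 * 86400       # sidereal month in seconds (assumed known)
# Circular orbit: GM = (2*pi/T)^2 * r^3  =>  r = (GM * T^2 / (4*pi^2))^(1/3)
r_moon = (GM * T_moon**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"Estimated Earth-Moon distance: {r_moon / 1e3:.0f} km")  # ~384,000 km
```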
Extrapolating in the Standard Model is more dodgy. We can make tentative predictions (via the renormalization group equation) beyond energies accessible to us, if we assume there are no new particles to be encountered at those energies. Currently we can do about 10 TeV collision energy. Does the Standard Model tell us what we’ll see at 20 TeV, or 100 TeV?
Mano Singham says
Rob @#14,
There are always other assumptions (I prefer to use the term auxiliary hypotheses) involved, whatever energy range you are at. In the low energy range, you say that we can make predictions “if we assume there are no new particles to be encountered at those energies”. But we can also make predictions at the higher energy range if we similarly assume that no new phenomena appear.
Whenever we make any prediction, we are always going into uncharted territory and are assuming that no effect that we do not already know about will play a significant role.
The idea that confidence increases with the number of data points is a probabilistic argument that people like Karl Popper and Rudolf Carnap tried to quantify and failed; they came to the conclusion that probabilistic notions were of no help in evaluating the relative reliability of theories. Again, these are issues that I discuss in some detail in my book.
friedfish2718 says
The piece is in the opinion section. And opinions are a dime a dozen. Opinions are not peer-reviewed.
.
“… for him to lose faith in the theory of evolution …”
.
Ah! Issue of faith in SCIENCE. Is SCIENCE the anti-religion?
.
Evolution has not been proven by classical means: the transition from 1 species to another has not been replicated and has not been observed directly. Evolution has only been inferred by the fossil records. Strictly speaking, one should call it the Evolution Hypothesis. As a bettor, I place my money that Evolution Hypothesis will become the Evolution Theorem.
.
” When a “theoretical” prediction disagrees with “experimental” data, what this tells us is that there is a disagreement between two sets of theories, so we cannot say that any particular theory is falsified.”
.
Here, a quote of Wolfgang Pauli is appropriate: “That’s not right. That’s not even wrong.”
.
Mr Singham is not right; he is not even wrong.
.
Newtonian physics was not falsified; it was shown -- by experiment -- to be Not Quite True.
Special Relativity was not falsified; it was shown -- by experiment -- to be Not Quite True.
.
Mr Singham is using strawman argumentation.
.
A proper use and presentation of a theory is within the set of appropriate approximate assumptions. Outside said set of assumptions, the theory is either False/Not Quite True/Irrelevant; the theory may be relevant and Not Quite True.
.
“… What they need to do is produce a preponderance of evidence in support of their case, and they have not done so.”
.
Incorrect. Is the glass half-full? Is the glass half-empty? Same evidence, same observations, different conclusions. The non-CACA (Catastrophic Anthropogenic Climate Alteration) advocates are looking at the same data as the CACA advocates. And the various groups that Mr Singham opposed have presented evidence to support their theories. Mr Singham does not have the courage to examine their evidence. Not willing to see is worse than being actually blind. Either Mr Singham is a coward or is just a lazy and exhausted guy.
.
According to Mr Singham, theories cannot be falsified so how can he falsify the theories of those he opposed (primarily on political and ideological grounds)? He is in a logical bind.
.
If we make contact with extraterrestrials, would their Science be identical to ours? Stephen Wolfram said: “Probably not”.
John Morales says
FFe:
The piece is in the opinion section. And opinions are a dime a dozen. Opinions are not peer-reviewed.
Sure.
But an opinion by an electrician about electrical stuff is worth more than that of a non-electrician.
And an opinion by a carpenter about woodworking stuff, similarly so.
And, in this case, an opinion by an emeritus physics professor is similarly relatively more worthy.
Mr Singham is using strawman argumentation.
Either it’s an opinion or it’s an argument — can’t have it both ways.
(heh)
Sunday Afternoon says
@Mano, #9:
When you say that something is an ‘approximation’, it implies that there is a standard of comparison that it is approximate to. What is that standard?
Is this a trick question? Surely reality is what we’re trying to approximate? Understanding reality is of course somewhat tricky.
Personally, I really dislike the “all models are wrong, but some are useful” aphorism. I worry that it gives ammunition to the anti-science mendacious malevolence that has far too much influence in American life, viz: “So you admit that science is wrong!”
I have glibly suggested in other correspondence what I consider an improvement: “all models are wrong, but some are pretty fucking amazing!” Examples include, but are not limited to (physics background showing): the Standard Model, General Relativity, electromagnetism, micromagnetics, evolutionary theory, meteorology, etc.
I really should buy Mano’s book…
Mano Singham says
Sunday Afternoon @#18,
No, it was not a trick question. As you say, ‘reality’ is a tricky concept and there are deep discussions as to whether there is such a thing as an objective reality.
While that is an interesting question, it would have taken me too far afield to discuss in my book so I took it as a given that there is an objective reality. But when we talk about ‘approximations’, what we are referring to is not a comparison to the reality but to a representation of that reality. What I do question is whether there is a unique representation of that reality. It is only then that we can suggest that our theories are ‘approximations’ to it.
Please do buy the book! I think you will enjoy it.
consciousness razor says
But when we talk about ‘approximations’, what we are referring to is not a comparison to the reality but to a representation of that reality. What I do question is whether there is a unique representation of that reality. It is only then that we can suggest that our theories are ‘approximations’ to it.
Only then? I don’t get that. (Also, I can never make any sense of the “no objective reality” business, but I’m not touching that for now.)
What’s supposed to be the problem with multiple such representations (of a single reality)? I don’t think there’s a need for uniqueness. You’d just need to pick one out of however many there may be … or out of a subset of convenient choices that human beings may be able to comprehend, for instance. Presumably, it would be in mathematical/physical terms. Then you can talk about approximations of the one you picked (but not the others). It doesn’t seem like there’s anything contradictory about any of that.
Rob Grigjanis says
Mano @15:
In the low energy range, you say that we can make predictions “if we assume there are no new particles to be encountered at those energies”.
No, I was referring there to energies higher than we can currently access -- “beyond energies accessible to us”.
But the main issue seems to be your dislike of the word ‘approximation’. I don’t get that. The Standard Model is an effective low energy theory which is not expected to give accurate predictions at arbitrarily large energies. Just because we don’t have a higher-energy theory to compare to the SM, or even a solid upper limit for the SM’s validity, doesn’t make it any less an approximation.
@9:
When you say that something is an ‘approximation’, it implies that there is a standard of comparison that it is approximate to.
Right. In the case of the SM, the standard is an as-yet unknown higher-energy theory.
Mano Singham says
consciousness razor @#20,
If there can be multiple representations of reality and there is no ‘correct’ one, then what you create with your theories is just a representation and it does not make sense to call it an approximation. What is it an approximation to?
As soon as you start talking about right and wrong and approximations, you are, at least implicitly, saying that there is a standard to which you can compare and make judgments.
As an example, say I decide to draw an animal. The problem is that there are many different animals. The closer my drawing approximates a dog (say) the farther it gets from being a snake. So it would not make sense to call my drawing an approximation of a generic animal. But at least in that case we have some knowledge of animals in general.
Now take it farther and assume that I am drawing a picture of an extraterrestrial being. In what sense can I claim that it is an approximation of an actual extraterrestrial? I can make my drawing more and more detailed but would it be an approximation in the absence of any idea of what an extraterrestrial looks like?
Mano Singham says
Rob @#21,
(This response builds on my response to consciousness razor @#22.)
When you write that “the standard is an as-yet unknown higher-energy theory”, you seem to be acknowledging that there is a single, correct higher-energy theory and that other theories can be compared to it. My claim is that we do not know whether such a unique theory exists (it may or may not), and in the absence of that knowledge, it does not make sense to make such a comparison.
There is nothing special about the energy frontier. New phenomena that upend conventional wisdom can occur anywhere, as with, for example, the ones that led to the oxygen theory of combustion.
What we have are theories that make predictions. Whenever we make a prediction, we are entering uncharted territory that may cross the energy frontier or some other frontier. We may encounter phenomena that require us to invent new theories. These new theories will be different from the old but what I am challenging is the claim that the new theory is a closer approximation to some unknown and unknowable standard.
consciousness razor says
If there can be multiple representations of reality and there is no ‘correct’ one, then what you create with your theories is just a representation and it does not make sense to call it an approximation. What is it an approximation to?
No, I was talking about multiple possible representations, and all of them accurately/correctly/truthfully represent the world. It’s not that they’re all wrong, that no such things exist, or what have you.
Being different as representations doesn’t imply that they would contradict one another or invalidate one another. Anything we might come up with (different forms of math, languages, images, etc.) would be sufficient, as long as it can give an accurate representation of the world and not something false or inaccurate. There doesn’t have to be something like “the” accurate one, because there’s nothing impossible about there being many such things.
Imagine that there’s an alien species which comes up with something different from what humans would. Their sensory/cognitive abilities may be different, as well as their history and culture. Lots of accidents along the way may have shaped their approach to the types of math, sciences, artforms, and so forth that they will use, which means they’d represent this same world in very different terms. That’s okay. Such differences don’t imply that either or both of us are wrong about it.
And indeed, it shouldn’t surprise us at all if there are some pretty major differences like this between humans and an alien species. The really surprising thing would be if they did all look, act, think, and represent things just like we do.
As soon as you start talking about right and wrong and approximations, you are, at least implicitly, saying that there is a standard to which you can compare and make judgments.
Yes, and that standard should be reality itself. It’s not like we’re lacking a reality to speak of, so we have no problem there.
We represent things with numbers, among other things. We can’t write down the precise value of pi, so we use things like 3.14 instead. The latter is an example of an approximation. It’s not pi, which is also a number and thus (as a number) a representation. Unlike pi, it is a rational number, 314/100, which is an approximation of the actual value. We have lots more choices than the two I already wrote, of course: 157/50 or 942/300, for example. Or you could use something more precise like 3.14159, use various other formulas for your calculations, you may opt to represent things in terms of tau (2pi), and so forth.
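(To make that concrete with numbers not in the original comment, here is a tiny sketch comparing a few such representations of pi by their error; 22/7 and 355/113 are extra textbook approximations added for comparison.)

```python
import math

# Hedged illustration: several different representations of pi, each a
# perfectly usable approximation with its own error, none of them "the" one.
candidates = {
    "3.14":     3.14,
    "157/50":   157 / 50,      # same value as 3.14
    "22/7":     22 / 7,
    "3.14159":  3.14159,
    "355/113":  355 / 113,
}

for label, value in candidates.items():
    print(f"{label:>8}: error = {abs(value - math.pi):.2e}")
```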
Options like that are always on the table, and the existence of more than one doesn’t invalidate them all or lead to a contradiction of some sort. To get a little less abstract, the topic here is about natural laws, and I’m pretty happy with the basic Humean view. We make laws to describe and explain the world in a way which will be useful and comprehensible for us (not necessarily anyone else). And those are not something extra in the world, in addition to the set of all of the particular facts (this atom here, that atom there, moving this way and that way, etc., ad nauseam).
They just give us a picture of the world, which we need in order to do the things we’d like to do with them: describe things accurately, make predictions, explain stuff so that we’re satisfied with our understanding of it, make new technologies, develop more and better theories than what we currently have, etc. If there are others which would also do the job (if not for us, then perhaps for somebody else), I think that’s perfectly okay.
njd3603 says
I understand the difficulties with falsifying a theory, but do you feel differently about hypotheses? Hypotheses get tested, implying that the test can be failed and the hypothesis rejected. A hypothesis that can’t be tested, even in principle, might not be so useful.
grahamjones says
Mano at 8:
I am familiar with the aphorism “All models are wrong, but some are useful” and my book is somewhat congruent with that view, but not entirely. The aphorism seems to elide an important question. Saying that all models are ‘wrong’ implies that there is some standard to which a model can be compared (the ‘right’ one) and found wanting. What is that standard?
I feel baffled by this. Reading other replies you have given (drawing extraterrestrials??) made it worse, not better.
OK, now I have slept on it, perhaps things are clearer. I am no philosopher, but loosely, I subscribe to instrumentalism:
a pragmatic philosophical approach which regards an activity (such as science, law, or education) chiefly as an instrument or tool for some practical purpose, rather than in more absolute or ideal terms.
There is no standard for rightness or wrongness. You are looking at the wrong word in “All models are wrong, but some are useful”. You should be looking at “useful”.
An example. Two models A and B for the atmosphere and oceans lead to somewhat different weather forecasts. Model A is more accurate for wind velocity at sea, model B is better at precipitation over land. Fishermen and farmers disagree about which is most useful. The government will fund a fancy new supercomputer for one model but not two. A decision must be made.
Statistical decision theory requires that the person making the decision supplies a utility function, which is basically a numerical way to capture “usefulness”. Like a Bayesian prior, a utility function is subjective. (I don’t know if George Box chose “useful” with this sort of notion in mind, but it seems an obvious link for a statistician to make.)
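(A toy sketch of that decision problem, just to make the role of the subjective utility function concrete; the forecast skills and utility weights below are invented for illustration and are not drawn from any real models.)

```python
# Hedged toy example of statistical decision theory for the weather-model
# choice above: the "skills" and utility weights are invented numbers.

# Hypothetical forecast skill of each model (higher is better), by variable.
skill = {
    "A": {"wind_at_sea": 0.90, "rain_over_land": 0.60},
    "B": {"wind_at_sea": 0.70, "rain_over_land": 0.85},
}

# Subjective utility weights: how much each user cares about each variable.
utility_weights = {
    "fisherman": {"wind_at_sea": 0.8, "rain_over_land": 0.2},
    "farmer":    {"wind_at_sea": 0.1, "rain_over_land": 0.9},
}

def expected_utility(model, user):
    """Skill of the model weighted by the user's subjective priorities."""
    return sum(utility_weights[user][v] * skill[model][v] for v in skill[model])

for user in utility_weights:
    best = max(skill, key=lambda m: expected_utility(m, user))
    print(f"{user}: prefers model {best} "
          f"(A={expected_utility('A', user):.2f}, B={expected_utility('B', user):.2f})")
```

With these made-up numbers the fisherman prefers model A and the farmer prefers model B from the very same skill figures, which is the point about subjectivity.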
I don’t like “successful predictions” because it sounds like an attempt to hang on to objectivity once truth is relinquished as a goal. It sounds like we should all be able to see whether success has been achieved and agree one way or the other. The usefulness of a model is subjective, and I think that failing to admit this is trying to hold on to something which is ultimately indefensible. The descriptions in your book involving “preponderance of evidence” and sort-of legal arguments involving “scientific logic” and “assertions of a positive nature” and so on, will not choose between models A and B. You can’t keep subjectivity or politics out.
Mano Singham says
njd3603 @25,
There is no fundamental difference between hypotheses and theories in this context. Both can be tested in the sense that they each make predictions, and one can see if the results of the tests line up with expectations. Neither can be falsified in the sense most commonly used, where a single wrong prediction is decisive, but they can, using the weight of evidence arrived at from numerous tests, be judged by credible experts to be not worth pursuing. Whether one calls this result ‘falsifying’ or not is a matter of taste.
Mano Singham says
grahamjones @#26,
If the aphorism “All models are wrong, but some are useful” is replaced by “Different models have differing degrees of usefulness with respect to specified purposes”, then I would have no problem with it. That seems to be what you are saying. But then the aphorism becomes somewhat banal.
I think that it is the word ‘wrong’ that gives the original aphorism its punch, but it then creates a misleading impression. We do not know whether or not there is a ‘right’ model out there waiting to be discovered, but if there is, then how can one make the claim that all models are wrong? One may have stumbled upon the right one.
njd3603 says
mano singham @27,
One difference between a hypothesis and a theory is that the former can be very simple -- for example, “The resistance of a piece of nichrome wire (of given diameter and length) does not change as the voltage across it is increased (from 2V to 10V)”. This can be tested, and falsified. (I’m a school physics teacher -- the hypotheses that my students test tend to be simple!) It’s true that students often describe their results by saying that “the resistance is fairly constant / doesn’t change significantly” because that’s what they think the results “should be”, but once everyone in the class has got the same result they are willing to agree that the resistance of the wire isn’t constant in this case.
I’m sure you don’t disagree with any of this -- I’m just saying that, especially in teaching, there are plenty of hypotheses that students really can falsify, and at this level I don’t think anything is to be gained by making things more complicated. I think you might disagree with me here; if so, it would be interesting to know how you think the idea of testing hypotheses should be presented in school.
Mano Singham says
njd3603 @#29,
I actually discuss the teaching of physics in schools in my book. I analyze the example of F=ma in some detail rather than your example of R=V/I staying constant even as V is changed, but the idea is the same.
As you know, the value of R never stays the same as you change V. The first important step in increasing the sophistication of students at that level is to introduce the idea of experimental uncertainty. All measurements have some uncertainty associated with them, and this leads to a range of values for the calculated quantity, which in your case is R. So the first thing to look at is whether the values of R for different V are consistent within the range of uncertainty when we take the uncertainties of V and I into account. (I discuss how to do this for the F=ma case.) If they are consistent, then we can say that the evidence is not challenging the law. If they are not consistent (and this often happens in school science experiments), then that is evidence against the law being correct. That is a good cue to discuss with students that we cannot just claim that a law is false based on one or two discrepant results, because that does not constitute a preponderance of evidence, and that we have to live with the anomalies.
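(A minimal sketch of the consistency check described above; the voltage and current readings and the 2%/3% fractional uncertainties are invented for illustration, not taken from any actual class data.)

```python
# Hedged sketch of the consistency check: compute R = V/I with propagated
# uncertainties and see whether the resulting intervals overlap.
# The readings and the 2% / 3% uncertainties below are invented.

measurements = [  # (V in volts, I in amps), hypothetical student data
    (2.0, 0.105),
    (4.0, 0.200),
    (6.0, 0.285),
    (8.0, 0.360),
    (10.0, 0.430),
]

dV_frac, dI_frac = 0.02, 0.03   # assumed fractional uncertainties in V and I

ranges = []
for V, I in measurements:
    R = V / I
    # Standard propagation for a quotient: (dR/R)^2 = (dV/V)^2 + (dI/I)^2
    dR = R * (dV_frac**2 + dI_frac**2) ** 0.5
    ranges.append((R - dR, R + dR))
    print(f"V={V:5.1f} V  ->  R = {R:5.2f} ± {dR:.2f} ohm")

# Are all the R intervals mutually consistent (do they share a common value)?
lo = max(r[0] for r in ranges)
hi = min(r[1] for r in ranges)
print("Consistent within uncertainty" if lo <= hi else "Evidence against constant R")
```

With these invented readings the intervals do not all overlap, which is the situation described above: a cue for discussing evidence against the law rather than an instant declaration that it is false.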
The idea of doing science experiments should be to teach students how to generate and evaluate evidence. Teaching this way is not hard to do but necessary if we are to give students a better understanding of the nature of science.
grahamjones says
Mano, you said:
If the aphorism “All models are wrong, but some are useful” is replaced by “Different models have differing degrees of usefulness with respect to specified purposes”, then I would have no problem with it.
And if we keep replacing less useful models with more useful models, surely the result will be inexorable progress?
If your book said that, I would have no problem with it!
Mano Singham says
grahamjones @#31,
While it is commonly asserted that science does show progress, does that imply that it is getting objectively better in some sense?
What ‘progress’ means and how it might be measured, if it can be measured at all, has no easy answer and thus is discussed in some detail in the book.
I’m sorry to keep referring to the book, but it consists of an extended and detailed argument that weaves together many different threads of the nature of science, and it is not easy to explain convincingly in a paragraph or two.
rblackadar says
@#29,
In hypothesis testing, you always have to look out for confounders, and unfortunately you can’t know if you’ve considered all of them. In the case of the high school physics test of Ohm’s Law with NiCr wire, of course the relevant confounder is temperature. (As I’m sure you know — indeed I assume it’s the whole point of the exercise.)
Do your students conclude that Ohm’s Law has been falsified and therefore should be rejected? Or is it rather that they need to know when it applies vs. when something additional is needed? Actually, this is a great example of Mano’s point, that every observation (even a simple one like this) is theory-laden.
@#16,
Regarding Special Relativity, it was known from the very beginning that it would not play nice with gravity; Einstein’s decade-long quest for something better (GR) was driven by theoretical concerns, not so much experiment. (Well, perhaps other than the well-known observation that the earth has not spiraled into the sun over 4.5 billion years.) When he applied GR to the perihelion of Mercury, that was an early success of the theory but AFAICT not something that motivated its initial development.