❦
Looking back on early quantum physics—not for purposes of admonishing the major figures, or to claim that we could have done better if we’d been born into that era, but in order to try to learn a moral, and do better next time—looking back on the dark ages of quantum physics, I say, I would nominate as the “most basic” error…
… not that they tried to reverse course on the last three thousand years of science suggesting that mind was complex within physics rather than fundamental in physics. This is Science, and we do have revolutions here. Every now and then you’ve got to reverse a trend. The future is always absurd and never unlawful.
I would nominate, as the basic error not to repeat next time, that the early scientists forgot that they themselves were made out of particles.
I mean, I’m sure that most of them knew it in theory.
And yet they didn’t notice that putting a sensor to detect a passing electron, or even knowing about the electron’s history, was an example of “particles in different places.” So they didn’t notice that a quantum theory of distinct configurations already explained the experimental result, without any need to invoke consciousness.
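To make “particles in different places” concrete, here is a minimal sketch in modern ket notation (a notation the early physicists did not have, used here only as an illustration). A photon in a superposition of paths A and B becomes entangled with a sensor S that starts in a ready state:

\[
\frac{1}{\sqrt{2}}\bigl(|A\rangle + |B\rangle\bigr)\,|S_{\text{ready}}\rangle
\;\longrightarrow\;
\frac{1}{\sqrt{2}}\bigl(|A\rangle\,|S_A\rangle + |B\rangle\,|S_B\rangle\bigr)
\]

The two branches now differ in the sensor’s particles, so they are distinct configurations and no longer interfere. Nothing on either side of the arrow mentions consciousness; the interference vanishes whether or not anyone ever reads the sensor.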
In the ancestral environment, humans were often faced with the adaptively relevant task of predicting other humans. For which purpose you thought of your fellow humans as having thoughts, knowing things and feeling things, rather than thinking of them as being made up of particles. In fact, many hunter-gatherer tribes may not even have known that particles existed. It’s much more intuitive—it feels simpler—to think about someone “knowing” something, than to think about their brain’s particles occupying a different state. It’s easier to phrase your explanations in terms of what people know; it feels more natural; it leaps more readily to mind.
Just as, once upon a time, it was easier to imagine Thor throwing lightning bolts, than to imagine Maxwell’s Equations—even though Maxwell’s Equations can be described by a computer program vastly smaller than the program for an intelligent agent like Thor.
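(For the record, here is how small the target is. In the modern vector notation due to Heaviside, all of classical electromagnetism is governed by four short lines:

\[
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
\]

A program to integrate these equations runs to a few pages; a program that implemented Thor’s pride, anger, and aim would have to contain a whole mind.)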
So the ancient physicists found it natural to think, “I know where the photon was… what difference could that make?” Not, “My brain’s particles’ current state correlates to the photon’s history… what difference could that make?”
And, similarly, because it felt easy and intuitive to model reality in terms of people knowing things, and the decomposition of knowing into brain states did not leap so readily to mind, it seemed like a simple theory to say that a configuration could have amplitude only “if you didn’t know better.”
To turn the dualistic quantum hypothesis into a formal theory—one that could be written out as a computer program, without human scientists deciding when an “observation” occurred—you would have to specify what it meant for an “observer” to “know” something, in terms your computer program could compute.
So is your theory of fundamental physics going to examine all the particles in a human brain, and decide when those particles “know” something, in order to compute the motions of particles? But then how do you compute the motion of the particles in the brain itself? Wouldn’t there be a potential infinite recursion?
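A toy sketch makes the regress visible. Every name below is made up for illustration; this is not any real theory’s code, just the “collapse upon observation” rule written out as if it were one:

```python
# Purely illustrative pseudophysics; all of these names are invented.

def evolve(particles):
    """Advance the whole configuration of particles by one time step."""
    if observer_knows(particles):      # Has an "observation" occurred?
        return collapse(particles)
    return schrodinger_step(particles)

def observer_knows(particles):
    """Decide whether a brain inside this configuration 'knows' the outcome."""
    brain = [p for p in particles if p.in_brain]
    # Whether the brain knows anything depends on how its own particles move,
    # which is exactly what evolve() was supposed to be computing:
    return evolve(brain) is not None   # evolve -> observer_knows -> evolve -> ...

def collapse(particles):
    return particles                   # stub: pick one branch

def schrodinger_step(particles):
    return particles                   # stub: ordinary wave mechanics
```

Call `evolve` on any configuration containing a brain and it never bottoms out: the “observer” the rule appeals to is made of the very particles the rule is trying to move.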
But so long as the terms of the theory were being processed by human scientists, they just knew when an “observation” had occurred. You said an “observation” occurred whenever it had to occur in order for the experimental predictions to come out right—a subtle form of constant tweaking.
(Remember, the basics of quantum theory were formulated before Alan Turing said anything about Turing machines, and way before the concept of computation was popularly known. The distinction between an effective formal theory, and one that required human interpretation, was not as clear then as now. Easy to pinpoint the problems in hindsight; you shouldn’t learn the lesson that problems are usually this obvious in foresight.)
Looking back, it may seem like one meta-lesson to learn from history is that philosophy really matters in science—it’s not just some adjunct of a separate academic field.
After all, the early quantum scientists were doing all the right experiments. It was their interpretation that was off. And the problems of interpretation were not the result of their getting the statistics wrong.
Looking back, it seems like the errors they made were errors in the kind of thinking that we would describe as, well, “philosophical.”
When we look back and ask, “How could the early quantum scientists have done better, even in principle?” it seems that the insights they needed were philosophical ones.
And yet it wasn’t professional philosophers who swooped in and solved the problem and cleared up the mystery and made everything normal again. It was, well, physicists.
Arguably, Leibniz was at least as foresighted about quantum physics as Democritus was once thought to have been about atoms. But that is hindsight. It’s the result of looking at the solution, and thinking back, and saying, “Hey, Leibniz said something like that.”
Even where one philosopher gets it right in advance, it’s usually science that ends up telling us which philosopher is right—not the prior consensus of the philosophical community.
I think this has something fundamental to say about the nature of philosophy, and the interface between philosophy and science.
It was once said that every science begins as philosophy, but then grows up and leaves the philosophical womb, so that at any given time, “Philosophy” is what we haven’t turned into science yet.
I suggest that when we look at the history of quantum physics and say, “The insights they needed were philosophical insights,” what we are really seeing is that the insight they needed was of a form that is not yet taught in standardized academic classes, and not yet reduced to calculation.
Once upon a time, the notion of the scientific method—updating beliefs based on experimental evidence—was a philosophical notion. But it was not championed by professional philosophers. It was the real-world power of science that showed that scientific epistemology was good epistemology, not a prior consensus of philosophers.
Today, this philosophy of belief-updating is beginning to be reduced to calculation—statistics, Bayesian probability theory.
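The core of that calculation is Bayes’s Theorem, which says precisely how much credence a hypothesis H should gain from evidence E:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
\]

What was once a vague injunction to update your beliefs in the light of the evidence is now an equation you can compute.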
But back in Galileo’s era, there were only vague verbal arguments to say that you should try to produce numerical predictions of experimental results, rather than consulting the Bible or Aristotle.
At the frontier of science, and especially at the frontier of scientific chaos and scientific confusion, you find problems of thinking that are not taught in academic courses, and that have not been reduced to calculation. And this will seem like a domain of philosophy; it will seem that you must do philosophical thinking in order to sort out the confusion. But when history looks back, I’m afraid, it is usually not a professional philosopher who wins all the marbles—because it takes intimate involvement with the scientific domain in order to do the philosophical thinking. Even if, afterward, it all seems knowable a priori; and even if, afterward, some philosopher out there actually got it a priori; even so, it takes intimate involvement to see it in practice, and experimental results to tell the world which philosopher won.
I suggest that, like ethics, philosophy really is important, but it is only practiced effectively from within a science. Trying to do the philosophy of a frontier science, as a separate academic profession, is as much a mistake as trying to have separate ethicists. You end up with ethicists who speak mainly to other ethicists, and philosophers who speak mainly to other philosophers.
This is not to say that there is no place for professional philosophers in the world. Some problems are so chaotic that there is no established place for them at all in the halls of science. But those “professional philosophers” would be very, very wise to learn every scrap of relevant-seeming science that they can possibly get their hands on. They should not be surprised at the prospect that experiment, and not debate, will finally settle the argument. They should not flinch from running their own experiments, if they can possibly think of any.
That, I think, is the lesson of history.