❦
Traditional Rationality is phrased as social rules, with violations interpretable as cheating: if you break the rules and no one else is doing so, you’re the first to defect—making you a bad, bad person. To Bayesians, the brain is an engine of accuracy: if you violate the laws of rationality, the engine doesn’t run, and this is equally true whether anyone else breaks the rules or not.
Consider the problem of Occam’s Razor, as confronted by Traditional philosophers. If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true? You could argue that Occam’s Razor has worked in the past, and is therefore likely to continue to work in the future. But this, itself, appeals to a prediction from Occam’s Razor. “Occam’s Razor works up to October 8th, 2027 and then stops working thereafter” is more complex, but it fits the observed evidence equally well.
You could argue that Occam’s Razor is a reasonable distribution on prior probabilities. But what is a “reasonable” distribution? Why not label “reasonable” a very complicated prior distribution, which makes Occam’s Razor work in all observed tests so far, but generates exceptions in future cases?
Indeed, it seems there is no way to *justify* Occam’s Razor except by appealing to Occam’s Razor, making this *argument* unlikely to *convince* any judge who does not already accept Occam’s Razor. (What’s special about the words I italicized?)
If you are a philosopher whose daily work is to write papers, criticize other people’s papers, and respond to others’ criticisms of your own papers, then you may look at Occam’s Razor and shrug. Here is an end to justifying, arguing and convincing. You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs. And as the symbol of your treaty, your white flag, you use the phrase “a priori truth.”
But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying “a priori” doesn’t explain why the brain-engine runs. If the brain has an amazing “a priori truth factory” that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can’t use the “a priori truth factory” to locate drinkable water. It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.
James R. Newman said: “The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2.” The Internet Encyclopedia of Philosophy defines “a priori” propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are “discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe.” You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.
But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains. Material brains, real in the universe, composed of quarks in a single unified mathematical physics whose laws draw no border between the inside and outside of your skull.
When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns. In principle, we could observe, experientially, the exact same material events as they occurred within someone else’s brain. It would require some advances in computational neurobiology and brain-computer interfacing, but in principle, it could be done. You could see someone else’s engine operating materially, through material chains of cause and effect, to compute by “pure thought” that 1 + 1 = 2. How is observing this pattern in someone else’s brain any different, as a way of knowing, from observing your own brain doing the same thing? When “pure thought” tells you that 1 + 1 = 2, “independently of any experience or observation,” you are, in effect, observing your own brain as evidence.
If this seems counterintuitive, try to see minds/brains as engines—an engine that collides the neural pattern for 1 and the neural pattern for 1 and gets the neural pattern for 2. If this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern. In other words, for every form of a priori knowledge obtained by “pure thought,” you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation. The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same.
There is nothing you can know “a priori,” which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain. What do you think you are, dear reader?
This is why you can predict the result of adding 1 apple and 1 apple by imagining it first in your mind, or punch “3 × 4” into a calculator to predict the result of imagining 4 rows with 3 apples per row. You and the apple exist within a boundary-less unified physical process, and one part may echo another.
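As a trivially small sketch of that echo, in Python (the grid of “apples” is just an illustrative stand-in for the imagined rows): a calculator’s multiplication and a counted arrangement of objects are two different physical processes that land on the same number.

```python
# Two physically different processes echoing one another:
# a multiplication (the "calculator") and a counted grid of imagined apples.
rows = [["apple"] * 3 for _ in range(4)]        # 4 rows, 3 apples per row
calculator_result = 3 * 4                       # what the calculator reports
counted_result = sum(len(row) for row in rows)  # what counting the rows reports
assert calculator_result == counted_result == 12
```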
Are the sort of neural flashes that philosophers label “a priori beliefs” arbitrary? Many AI algorithms function better with “regularization” that biases the solution space toward simpler solutions. But the regularized algorithms are themselves more complex; they contain an extra line of code (or 1,000 extra lines) compared to unregularized algorithms. The human brain is biased toward simplicity, and we think more efficiently thereby. If you press the Ignore button at this point, you’re left with a complex brain that exists for no reason and works for no reason. So don’t try to tell me that “a priori” beliefs are arbitrary, because they sure aren’t generated by rolling random numbers. (What does the adjective “arbitrary” mean, anyway?)
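To make the regularization point concrete, here is a minimal sketch in Python, using made-up data and a ridge-style (squared-norm) penalty as one common form of regularization; the toy dataset and variable names are illustrative, not anything specified above. The regularized fit requires extra code and extra machinery, yet it is the one biased toward simpler answers.

```python
import numpy as np

# Toy data: ten candidate features, only the first one actually matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
true_w = np.zeros(10)
true_w[0] = 2.0
y = X @ true_w + rng.normal(scale=0.5, size=50)

def loss_plain(w):
    # Fit the observations; nothing more.
    return np.mean((X @ w - y) ** 2)

def loss_regularized(w, lam=1.0):
    # Same fit to the observations...
    fit = np.mean((X @ w - y) ** 2)
    # ...plus the extra line: a penalty that prefers smaller, simpler weights.
    return fit + lam * np.sum(w ** 2)

# Closed-form minimizers of the two losses (ordinary least squares vs. ridge;
# the lam * len(y) factor matches the mean-squared-error scaling above).
w_plain = np.linalg.lstsq(X, y, rcond=None)[0]
lam = 1.0
w_reg = np.linalg.solve(X.T @ X + (lam * len(y)) * np.eye(10), X.T @ y)

# The regularized weight vector is smaller in norm, its weights on the nine
# irrelevant features pulled toward zero: a simplicity bias produced by a
# more complicated procedure.
print(np.round(w_plain, 2))
print(np.round(w_reg, 2))
```

The asymmetry mirrors the paragraph above: the simplicity-preferring engine is not itself simple or self-justifying; it is extra structure that happens to work.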
You can’t excuse calling a proposition “a priori” by pointing out that other philosophers are having trouble justifying their propositions. If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs. There’s no truce, no white flag, until you understand why the engine works.
If you clear your mind of justification, of argument, then it seems obvious why Occam’s Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found. “But,” you cry, “why is the universe itself orderly?” This I do not know, but it is what I see as the next mystery to be explained. This is not the same question as “How do I argue Occam’s Razor to a hypothetical debater who has not already accepted it?”
Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam’s Razor, just as you cannot argue anything to a rock. A mind needs a certain amount of dynamic structure to be an argument-acceptor. If a mind doesn’t implement Modus Ponens, it can accept “A” and “A → B” all day long without ever producing “B.” How do you justify Modus Ponens to a mind that hasn’t accepted it? How do you argue a rock into becoming a mind?
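As a toy sketch of that “dynamic structure” (illustrative Python, not any standard logic library): a belief store that merely holds “A” and “A → B” produces nothing new, while one with a Modus Ponens step wired into its update rule derives “B.”

```python
# A minimal "mind": a set of accepted propositions plus implications
# of the form (premise, conclusion), standing for "premise → conclusion".
beliefs = {"A"}
implications = {("A", "B")}

def update_without_modus_ponens(beliefs, implications):
    # This mind stores "A" and "A → B" but has no rule combining them,
    # so no amount of waiting ever produces "B".
    return set(beliefs)

def update_with_modus_ponens(beliefs, implications):
    # The extra dynamic structure: from "premise" and "premise → conclusion",
    # assert "conclusion".
    derived = {concl for prem, concl in implications if prem in beliefs}
    return beliefs | derived

print(update_without_modus_ponens(beliefs, implications))  # {'A'}
print(update_with_modus_ponens(beliefs, implications))     # {'A', 'B'}
```

Nothing argues the second store into having that extra rule; it is simply built in.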
Brains evolved from non-brainy matter by natural selection; they were not justified into existence by arguing with an ideal philosophy student of perfect emptiness. This does not make our judgments meaningless. A brain-engine can work correctly, producing accurate beliefs, even if it was merely built—by human hands or cumulative stochastic selection pressures—rather than argued into existence. But to be satisfied by this answer, one must see rationality in terms of engines, rather than arguments.