
Gödel’s Incompleteness Theorems

(in passing)

by Miles Mathis



Theorem 1: In any logical system one can construct statements that are neither true nor false (mathematical variations of the liar’s paradox).

Theorem 2: Therefore no consistent system can be used to prove its own consistency. No proof can be proof of itself.
       I have paraphrased and distilled the theorems here, since the intent of this short paper is not to rigorously prove or disprove Gödel’s theorems. It is simply to comment on them in passing. It is my opinion that Gödel belongs to the main line of 20th-century argument on logic and is therefore dismissible along with it as a group. He does not merit extensive critique; I only need to show that his argument falls into a category that has already been falsified.

In short, I agree with Theorem 2, but find that it does not follow from Theorem 1. Theorem 2 is a weaker version of Karl Popper’s central thesis that all creative math and science is based on hypothesis and is not provable. Any statement in any form, beyond a tautology or strict deduction, is therefore falsifiable but unprovable. Gödel’s theory may have predated Popper’s by three years, but that does not matter, since Theorem 2 follows not from Theorem 1 but from Popper’s extensive critique of positivism as a whole.

Any logical system is set up to avoid the necessity of proving axioms or postulates. A logical system is a closed system in which axioms and operations define the total extent of any consistency (or truth). The system as a whole therefore requires no proof, and any reference to anything outside the system is illogical in itself. Deductions require no proof beyond their own postulates. Only inductions or inferences require outside proof. That is to say, connecting one logical system with another requires proof. This proof is an additional set of axioms or operations that are also accepted without proof.
        Notice that there are two separate meanings to the word "proof" in the last paragraph. By one meaning a proof refers to a series of steps from a postulate to a result. By the other, a proof is assumed to be absolute evidence of truth. If we use this last meaning, which is the common usage, we must find that nothing can be proved, even tautologies. A clever philosopher can insert doubt even into A=A. But this is not what proof means in logic. According to the rules of logic, a proof must have a starting point. This starting point is an axiom or postulate. This postulate must be logically unprovable. To try to prove it is to misunderstand logic itself. But by the same token, if it is not provable as being true, it is not provable as being false. This does not make it a paradox. It makes it an assumption.
        Therefore Theorem 1 appears to me to be true but trivial. These paradoxical statements can be ruled out simply by adding another axiom. For instance, any variation of the liar’s paradox can be avoided by adding this postulate: “no statements will be allowed that are self-referential, since these statements cause circles of logic. The content of every statement must apply to another statement and not to itself.” If you ask me to prove the truth of that added postulate, I can reply that you do not understand the concept of postulates. I can postulate anything I like. You then get to try to falsify it by showing that it is inconsistent with other accepted postulates. You do not get to ask for proof.
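        To make the added postulate concrete, here is a minimal sketch in the Lean proof language (my own illustration, not anything found in Gödel or Russell). Statements are stratified by level, and a statement may only be about statements of strictly lower level, so a self-referential statement cannot even be written down. This is, in spirit, Russell’s own type-theoretic remedy and Tarski’s object-language/metalanguage distinction.

    -- A toy stratified language. A statement at level n+1 may only be
    -- "about" a statement at a strictly lower level, so self-reference
    -- is impossible by construction: "this statement is false" has no
    -- level at which it could be formed.
    inductive Stmt : Nat → Type
      | atom  : String → Stmt 0                          -- bare assertions
      | about : {n : Nat} → Stmt n → Bool → Stmt (n + 1) -- asserts a lower statement true or false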

Gödel also seems unclear on the concept of postulates or axioms. Axioms may be chosen at will, to create consistency out of nothing. An axiom is true because it is defined as true. Likewise, an operation is legal because I say it is. And a logical system is consistent because it has not been shown to be inconsistent. This is not the same as being true or proven.
       Popper did not waste time on a formal disproof of proofs, because he saw the absurdity of such a notion. To create a formal disproof of proofs in general is a contradiction in terms. Gödel’s disproof of completeness must be just as incomplete as any other proof. That is to say, the Incompleteness Theorem is itself incomplete, and therefore unprovable. It is useless except as another grand contradiction, and we are not in need of further logical contradictions posing as theorems.


Karl Popper

Popper critiqued Hilbert in the same way he critiqued the logical positivists. Hilbert’s problem was that he was trying to prove things that not only could not be proved, but that did not need to be proved. Russell and Whitehead were in the same position, mathematically. We do not need to prove that 1 + 1 = 2, since it is an axiom. It is true by definition. You cannot prove definitions and, what is more, you don’t need to prove definitions. That is why they are definitions: so that we can avoid proving them. We decide to accept them arbitrarily and we see where they take us. If we can build systems on these definitions that are not shown to be inconsistent, then we can say these definitions are corroborated. But this does not make them true or proven. They are simply not disproven.
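       As it happens, modern proof assistants bear this out. In Lean, for instance, 1 + 1 = 2 is certified by rfl (reflexivity): both sides reduce to the same term by mere unfolding of the definitions of the numerals and of addition, with no Principia-style apparatus in sight.

    -- Both sides compute to the numeral 2, so reflexivity suffices.
    example : 1 + 1 = 2 := rfl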

Popper’s theorems are much more important to history than Gödel’s for the simple reason that Popper’s theorems avoid inconsistency better than Gödel’s. The Incompleteness Theorem is itself a paradox, one that hinges on a grand contradiction. Popper’s theorems also have much more content. Popper explicitly tied his systems to other logical systems—like physics, mathematics, symbolic logic, gaming, probabilities, etc. He spent a lifetime discussing the implications of his theorems. He did not just drop a few paradoxes on the world and retreat into mysticism.
       It is no accident that Gödel’s name has become linked to Escher’s. They are equally profound and important. Meaning that they are not. They are simply fashions. Gödel has become more famous than Popper and more famous than any other 20th century mathematician or logician because his ideas—like Escher’s ideas for prints—allowed for crossover into the mainstream, where they could become bowdlerized and popularized. They could be linked, like facile interpretations of relativity and QED, to other fashions like Star Trek or Star Wars or Asimov’s science fiction.

~~~~~~~~~~~~

As a sort of addendum, I will show that Popper implied a way around or beyond the implication of his theory that “nothing is true or provable.” This method, which I will relate in very broad terms, also takes us around Gödel’s Incompleteness Theorems. Very often we choose definitions and axioms from a pool of statements that are empirically or intuitively very strong. That is, we can point to something that is innate or experienced as proof. But this kind of proof has never been accepted by philosophers, for various (not so good) reasons. If the statement “that dog is white” cannot be proved by pointing to a white dog, then what hope has anyone of proving anything? If the statement “birds know how to migrate” is not proved by billions of birds flying south and returning, then what chance had Russell of proving 1 + 1 = 2? This equality, once it has been shown to be mathematically beyond proof, must fall back upon an empirical proof—pointing to a couple of apples, for instance. I personally find the empirical proof complete, and I will now tell you why.
       Popper believed that tautologies escaped his falsifiability principle. Tautologies could not be falsified, since there was logically no possible falsifier. They were therefore true even when nothing else was. In fact, he thought that 1 + 1 = 2 was true because it was a tautology. He thought Russell’s proofs unnecessary because a tautology is the exception to the rule for falsification and for verification. If this is the case, then I can escape the requirement for being falsifiable by showing that pointing to a white dog is a tautology. It is to say, “That thing I am pointing at I define as a dog. The color of that thing I call white. Therefore that thing I am pointing at is a white dog.” Which is of course to call a white dog a white dog. A = A. How could you be wrong?
       In this way we get beyond both Gödel and Popper. We can “prove” a great number of things by showing them to be tautologies. The birds migrating is another example. “I define those things that you see flying as birds. I define migrating as doing what they are doing. Therefore they are migrating.” How could your statement be falsified? It turns out that a large percentage of language and even science can be shown to be tautological.
       You will say that this just means that it is trivial: it is empty of content. But it is not empty of content. It is full of definitions and axioms, which are quite rich in content. For instance, in the white dog example, you learned what a dog is and what white is, at least according to me. What else is learning, in most cases, but discovering what things are according to other people—parents, teachers, historians, writers, scientists, etc.? As a person, you learn a huge list of definitions. A large percentage of what anyone knows is definitional. Learning what words mean is central to all intelligence. The meaning of words is definitional. It is also tautological. “That is a dog; a dog is that.” A = A. So we already have a huge body of statements that are true because they are tautologies, and yet they are rich in content.
       Words also apply to actions, not just to things. Operations are actions. In this way I can prove many operations by showing them to be tautological definitions. The migrating birds are just one example.
“Those birds are migrating.”
“What is migrating?”
“Flying south for the winter.”
“What is winter?”
“Winter is now.”
“What is south?”
“South is the direction they are flying.”
       Tautology through and through. “I define those things doing what they are doing as doing what they are doing.”
       One of Gödel’s refinements of his first theorem was that it applied only to logical systems that were significantly rich in content—that were non-trivial. He provided no foolproof method for discovering which systems were rich and which were too poor, but it has been assumed that he was (at the very least) ruling out tautologies. I have just shown that tautologies are quite rich in content. In doing so I have fatally undercut Theorem 1.
       A logical system made up entirely of tautologies cannot admit of any liar’s paradoxes, since the liar’s paradox does not resolve to the form A = A. No liar’s paradox is a tautology and no tautology is a liar’s paradox. And yet tautologies are rich in content. Therefore Theorem 1 is falsified.
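       The contrast can be put in one line each. A tautology has the propositional form

    A ↔ A    (true under every assignment)

while the liar demands a sentence L satisfying

    L ↔ ¬L    (satisfied under no assignment)

so no stock of tautologies, however large, can ever produce a liar.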
       I can go even further. Every deduction is a tautology, since the thing proved is contained in the axioms. The axioms are definitions, which are tautologies. The results are subsets of the axioms. Therefore all deductions are tautological. There is no more content in the result than was in the axioms. Therefore they cannot be falsified. If axioms cannot be falsified then results cannot be falsified.
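       In Lean the containment is visible to the eye: a deduction is just the premises reassembled. In the modus ponens below, the conclusion Q is produced by applying one hypothesis to the other; nothing enters the proof that was not already given.

    -- The proof term `h p` is built entirely out of the hypotheses.
    example {P Q : Prop} (h : P → Q) (p : P) : Q := h p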
       In fact, this was the reasoning of the ancient Greeks, who defined “prove” in just this way. You proved things by showing that your results were logically contained in your axioms. You did not prove anything by modern standards, since you found no “new information.” You only clarified old information. You re-expanded your definitions, reminding yourself of their full content. And yet the Greeks saw this as true learning, as true intelligence, as real knowledge, as non-trivial.
       In this way we have a huge body of knowledge that we can call true. And none of it needs to be proven, beyond simple deductive proofs, because it is all definitional or axiomatic. It is all tautological.


Bertrand Russell

You may say that my use of the tautology is in many ways similar to Russell’s. It is, the difference being that Russell and Whitehead vastly overcomplicated the problem by talking of the “class of all classes” and other such mathematical gobbledygook. This fake rigor, learned from Peano, Frege, Cantor and the rest, only gave other quibblers like Gödel room to move in and set up paradoxes. I have shown that Gödel does not do any real damage to Russell (or anyone else), but Popper did great damage to Russell. Popper made Russell appear more than a bit ridiculous for spending thousands of pages axiomatizing things that were already axioms to begin with. Peano is also guilty of wasting reams of paper, giving us 9 axioms where one or two would do—and had done for centuries.
      Others may claim that my empirical proof is just like the positivist's search for a sense datum or observation-statement, or that it mirrors the "correspondence theory." But it does not. I claim that an empirical proof is in fact a tautology. A proof by pointing does not become true or verified by correspondence. It is true because it is already axiomatic before any proof is begun. My theory is much stronger because it avoids the necessity of verification or proof or correspondence. A tautology does not require proof or verification. It is not a correspondence. It is a definitional equality. According to the rules of my logic, asking for proof or verification of a tautology is a contradiction. This logic forbids quibbling or cavilling by old-fashioned (one might say Greek) means: it defines as contradictory any request for verification of axioms, postulates, definitions or tautologies. I could go deeper into the subject, but I don't find it useful to do so. Popper believed that arguing about semantics was a waste of time. I find foundational logic an even greater waste of time, since in no area of philosophy is so little achieved with so much energy.
       Why does it matter, you may ask, that Peano or Russell spent their time rigorously proving tautologies? They did no harm, and were at least in the right camp. They were earnest. It matters because there was (and still is) so much real work to be done in math and physics. When the great mathematicians and physicists are wasting their time with trivialities and absurdities, it must be of some concern to history and to future scientists and mathematicians. Personally, it concerns me because I see that Russell and the great mathematicians of the time might have been proofreading Einstein and Newton and Cauchy and Cantor and so on. Re-axiomatizing tautologies while Cantor is adding one to infinity and Einstein is making simple algebraic errors and Heisenberg is jumping to conclusions might be called fiddling while Rome burns. In fact, Russell was one of the great popularizers of Einstein, with the book The ABC of Relativity. Knowing what I do, I cannot get past that fact. A great mathematician would have seen the error in Einstein’s simple algebraic equations. That is the bottom line. I am not concerned that Russell was against the bomb or the war or anything else. Just as I am not concerned that Einstein was against the bomb or the war. I am concerned that mathematicians and physicists cannot do algebra.
       I could spend a thousand pages combing Russell and Whitehead’s Principia Mathematica, showing where the inconsistencies are and sorting out all the various claims of Frege and Russell and Gödel, but after Popper it is hardly worth it, you see. Any sort of positivism has been conclusively shown to be a doomed (and frankly pathetic) enterprise. Further axiomatizing things that are already axioms is like exhausting an infinite series by actual addition. It can’t but appear like busywork. I think this was known even in Russell’s time. The entire value of refuting a previous logician was that you got to take over his title. Russell took Frege’s title as number one logician. Gödel then took Russell’s title. But in this way they were more like chess champions than useful problem solvers. Chess champions are no more useful to society than football stars. They create nothing more than entertainment.

Russell’s Paradox is directly analogous to Gödel’s Incompleteness Theorems. Russell destroyed Frege’s naïve set theory with one simple paradox, just as Gödel destroyed Russell’s axiomatic set theory. But none of the destruction is logical in itself. With hindsight, it is not clear why Frege did not just add another axiom to rule out Russell’s paradox. Historically, that is exactly what happened. ZFC (Zermelo–Fraenkel set theory with the axiom of Choice) adds several axioms that disallow the creation of the known cancerous paradoxes. It does so in much the same way I did naively above—by disallowing self-referential statements. The paradoxes of Russell and Gödel both concerned statements about themselves. In logical parlance, they were sets that included themselves. ZFC axiomatically disallows them, and most contemporary logicians accept the consistency of ZFC.
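       For the record, the axiom doing this particular work is regularity (also called foundation):

    ∀x (x ≠ ∅ → ∃y (y ∈ x ∧ y ∩ x = ∅))

Applied to the singleton {a}, the only candidate y is a itself, so a ∩ {a} = ∅, which forces a ∉ a: no set is a member of itself. (Strictly speaking, it is the separation schema, which only ever carves subsets out of already-given sets, that blocks Russell’s construction; regularity independently outlaws self-membership.)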
       The problem with this is that it is itself inconsistent to believe in ZFC and yet think that Russell’s and Gödel’s paradoxes were important or meaningful. If ZFC is correct, then there is very little wrong with naïve set theory, and the historical machinations look more than a bit ridiculous. It was all a tempest in a teapot. It was more than a century of work to solve something that should have taken about a week. Zermelo in fact knew of Russell’s Paradox before Russell sent his letter to Frege, but dismissed it as trivial. He did not even think it worth addressing until Frege committed mathematical suttee on the funeral pyre of naïve set theory. It was Frege’s extreme reaction that caused instability in the world of math, not the paradox itself. One would think that a mathematician, especially one trained in all the arts of equation-finessing by his master Cantor [see my paper on Cantor], could figure out how to transcend one self-referential paradox.

~~~~~~~~~~~~

Popper’s analysis of how we come to have knowledge is much closer to correct than the classical analysis. It is what might be called operational epistemology. It is in stark opposition to traditional epistemology, which gets lost in trying to generalize specific statements. Millennia have been wasted trying to prove so-called “universals” when in fact universals are not pertinent to the question of learning. Just as we do not need to prove axioms, we also do not need to prove universals. Nine-tenths of our knowledge is of the kind I have just enumerated, which is based on tautology, or definition and deduction. The rest is of the kind that Popper describes as scientific. It is hypothetical and relies for its content on its falsifiability. Universals play no real part in practical epistemology. They have a very limited use, and that use is completely abstract. Infinite abstractions, or universals, are beyond proof.
       But this status of being beyond proof is no danger to truth, knowledge, science or mathematics. Absolutely nothing is lost by letting universals remain unproven. The one thing that classical epistemology has never explained is why we need to prove universals. This was Popper’s great addition to philosophy. It was his first question to the positivists. “What would we lose if we could not ‘induce’ general statements or universals from singular statements?” The answer: nothing. None of our knowledge really relies on such an induction. All the things that normal people care about calling true can still be called true, since they are true by definition. Apples are red, dogs bark, 2 + 2 = 4, and so on. And science can do without universals since it always has done without universals. Its method is not inductive, it does not claim certainty, and its results are not verifiable. Its results are only more or less falsifiable.

In this way you can see that universals are analogous to the infinite in math. A universal statement is a statement taken to infinity, taken to the limit. The analogy is even tighter, since both the infinite and the universal have proven to be giant bricks in the wall in their respective fields. In philosophy a really extraordinary amount of time has been spent discussing the universal, when its importance is almost zero. The same can be said of the infinite in math. Very little of use has come from studying the infinite. The entire history of the transfinite has been a misdirection caused by an error. Even the calculus does not rely on the infinite, as I have shown elsewhere. Expanding equations using Taylor series and other methods has proven useful, but these methods would not be necessary had the calculus been established on the constant differential to begin with, instead of the diminishing differential. Problem solving in all areas would have an entirely different character now if Archimedes had not pursued infinite series solutions to quadratures, giving the hint to Newton and Leibniz and the rest.


If this paper was useful to you in any way, please consider donating a dollar (or more) to the SAVE THE ARTISTS FOUNDATION. This will allow me to continue writing these "unpublishable" things. Don't be confused by paying Melisa Smith--that is just one of my many noms de plume. If you are a Paypal user, there is no fee; so it might be worth your while to become one. Otherwise they will rob us 33 cents for each transaction.