It is well-known that constructivists (supposedly) renounce “proof by contradiction”, and that classicists scoff at the critique. “Those fools,” the criticism goes, “want to rule out proofs by contradiction. How absurd! Look, Pythagoras showed that the square root of two is irrational by deriving a contradiction from the assumption that it is rational. There is nothing wrong with this. Ignore them!”

On careful examination the critique fails miserably, because *a proof by contradiction is not a proof that derives a contradiction*. Pythagoras’s proof is perfectly valid, one of the eternal gems of mathematics. No one, but no one, questions the validity of that line of argument. Despite the righteous fury, no punch has landed. We may as well laugh at the flat earthers, who are of course dead wrong, but also nonexistent.

Pythagoras’s Theorem expresses a negation: *it is not the case that* the square root of two can be expressed as the ratio of two integers. Of course one proves negative statements by contradiction! Assume that it can be so represented. A quick deduction shows that this is impossible. So the assumption is false. Done. This is a *direct proof* of a negative assertion; it is *not* a “proof by contradiction”, which is also known as an *indirect proof*.
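Under the propositions-as-types reading, this pattern can be checked mechanically. As a small illustration of my own in Lean 4 (not drawn from the post): a direct proof of a negation is literally a function that consumes the hypothetical evidence and produces the absurd.

```lean
-- Non-contradiction, ¬(A ∧ ¬A): assume the pair and apply the refutation
-- to the evidence. No appeal to excluded middle anywhere.
example (A : Prop) : ¬(A ∧ ¬A) :=
  fun ⟨a, na⟩ => na a
```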

What, then, is a proof by contradiction? It is the *affirmation* of a positive statement by refutation of its denial. It is a *direct proof* of the negation of a negative assertion that is then pressed into service as a *direct proof* of the assertion, which it is not. Anyone is free to ignore the distinction for the sake of convenience, as a philosophical issue, or as a sly use of “goto” in a proof, but the distinction nevertheless exists and is important. Indeed, part of the beauty of constructive mathematics is that one can draw such distinctions, for, once drawn, one can selectively disregard them. Once blurred, forever blurred, a pure loss of expressiveness.

For the sake of clarity, let me rehearse a standard example of a proof by contradiction. The claim is that there exist irrationals a and b such that a to the b power is rational. Here is an indirect proof, a true proof by contradiction. Move number one: let us prove instead that it is impossible that every two irrationals a and b are such that a to the b power is irrational. This is a negative statement, so of course one proves it by deriving a contradiction from it. But it is not the original statement! This will be clear from examining the information content of the proof.

Suppose, for a contradiction, that every two irrationals a and b are such that a to the b power is irrational. We know from Pythagoras that root two is irrational, so plug it in for both a and b, and conclude that root two to the root two power is irrational. Now use the assumption again, taking a to be root two to the root two, and b to be root two. Calculate a to the power of b: it is two, which is eminently rational. Contradiction.

We have now proved that it is not the case that every pair of irrationals, when exponentiated, gives an irrational. There is nothing questionable about this proof as far as I am aware. But it does not prove that there are two irrationals a and b such that a to the b power is rational! If you think it does, then I ask you, please name them for me. That information is not in this proof (there are other proofs that do name them, but that is not relevant for my purposes). You may, if you wish, disregard the distinction I am drawing; that is your prerogative, and neither I nor anyone has any problem with that. But you cannot claim that it is a *direct proof*; it is rather an *indirect proof* that proceeds by refuting the negation of the intended assertion.

So why am I writing this? Because I have learned, to my dismay, that in U.S. computer science departments, of all places, students are being taught, *erroneously*, that any proof that derives a contradiction is a “proof by contradiction”. It is not. Any proof of a negative must proceed by contradiction. A proof by contradiction in the long-established sense of the term is, by contrast, an indirect proof of a positive by refutation of its negation. This distinction is important, even if you want to “mod out” by it in your work, for it is only by drawing the distinction that one can even define the equivalence with which to quotient.

That’s my main point. But for those who may not be familiar with the distinction between direct and indirect proof, let me take the opportunity to comment on why one might care to draw such a distinction. It is entirely a matter of intellectual honesty: the information content of the foregoing indirect proof does not fulfill the expectation stated in the theorem. It is a kind of boast, an overstatement, to claim otherwise. Compare the original statement with the reformulation used in the proof. The claim that it is not the case that every pair of irrationals exponentiate to an irrational is uncontroversial. The proof proves it directly, and there is nothing particularly surprising about it. One would even wonder why anyone would bother to state it. Yet the supposedly equivalent claim stated at the outset appears much more fascinating, because most people cannot easily think up an example of two irrationals that exponentiate to rationals. Nor does the proof provide one. Once, when shown the indirect proof, a student of mine blurted out “oh that’s so cheap.” Precisely.

Who cares about the information content of proofs? You may not, but I do. It matters, at least in computer science, because *proofs are programs* and *propositions are specifications*. You may well not care, as you are well entitled to do. But you may also find one day that you do care. For example, Vladimir Voevodsky’s univalence principle, which codifies classical mathematical practices, is *only* sensible in a constructive framework in which the distinction cannot be disregarded. Don’t be so sure you don’t care!
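Since proofs are programs, the asymmetry at stake can be exhibited as code. A sketch of my own in Lean 4 (assuming nothing beyond core Lean): introducing a double negation is a program, while eliminating one is programmable only when the proposition comes packaged with a decision.

```lean
-- A → ¬¬A is a program: hand the evidence to any would-be refutation.
example (A : Prop) : A → ¬¬A :=
  fun a k => k a

-- ¬¬A → A is not provable in general; but given a decision A ∨ ¬A,
-- double negation elimination becomes constructively valid.
example (A : Prop) (h : A ∨ ¬A) : ¬¬A → A :=
  fun nn => h.elim id (fun na => absurd na nn)
```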

A related paper on programming languages: https://www.microsoft.com/en-us/research/wp-content/uploads/2006/01/not-not-ml.pdf

IL has “proof by negation” (corresponding to lambda) and “derive a contradiction” (corresponding to application) constructs, and a type system which cleanly separates eager values from lazy (possibly nonterminating) value-thunks.

Bob, perhaps it is also interesting to note (as one formal illustration of what you say) that already in minimal implicational logic (i.e. the simply typed lambda calculus) we can prove ¬¬¬A implies ¬A, where ¬A is standardly represented as the implication: A implies ⊥ (⊥ being the absurdity).

The proof, of course, is:

lambda x:¬¬¬A. lambda y:A. x (lambda z:¬A. z y).

So, we see that double negation elimination (which is equivalent to the non-constructive principle A ∨ ¬A) is provable constructively on negated formulae. This fact is at work in the negative translations from classical logic into constructive logic due to Kolmogorov, Gödel, and Gentzen.
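Jakob’s term type-checks essentially verbatim; here is a rendering of my own in Lean 4:

```lean
-- ¬¬¬A → ¬A, transcribing the term λx:¬¬¬A. λy:A. x (λz:¬A. z y).
example (A : Prop) : ¬¬¬A → ¬A :=
  fun x y => x (fun z => z y)
```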

Yours

Jakob

PS: For typographical reasons, triple negation is best not written as three hyphens, to avoid their being concatenated into a single dash! Jakob

Dear Bob,

I wonder what you meant by this remark: “Vladimir Voevodsky’s univalence principle, which codifies classical mathematical practices, is only sensible in a constructive framework in which the distinction cannot be disregarded”.

Mike clarifies in his comment. You cannot just postulate that all types are either empty or non-empty without contradiction in the presence of univalence. More refined notions of classicality are of course possible, e.g. that all propositions, construed as (−1)-truncated types, are decidable.

Thanks.

Re: “You cannot just postulate that all types are either empty or non-empty without contradiction in the presence of univalence.”

A proof of non-emptiness has no information content, so this is the Law of Excluded Middle, which is legitimate in the presence of univalence. Perhaps you mean to say that in the presence of univalence you can’t have a function that provides an element of every non-empty set.

To avoid propagating a further misconception, I think it would be appropriate to emphasize that univalence *is* compatible with the propositionally truncated form of proof by contradiction. In other words, if you’re proving something whose proof carries no information content (other than the mere truth of a statement), then univalence, at least, raises no objection to doing it indirectly by proving that its negation is false. Moreover, this weaker form of proof by contradiction seems in practice to be sufficient for the majority of “classical mathematics”. Of course, propositional truncation also “discards information”, so if one wants to retain the maximal information in proofs, one may want to avoid it too. My point is just that you seemed to be saying that univalence is a *different* reason to avoid proof by contradiction, and I don’t think it is such a reason all by itself.

Good point.

@jonsterling

> Constructive math, as a paradigm which allows us to retain meaningful distinctions, is the ideal setting in which to develop these concepts and experiment with them scientifically.

The concepts you mention, however, are rather specific. What if I wanted to explore distinctions between different kinds of contradiction in order to experiment with paraconsistent real analysis? Would constructive math still be the ideal setting for that?

AFAIK, there can be two justifications for the choice of a formal logic: philosophical and ad hoc. For example, a philosophical justification for constructive math could be Brouwer’s intuitionism; an ad hoc justification can be that it makes it more convenient to write minimal cores for some proof assistants.

Statements such as “an ideal setting”, however, are vague and should be made more precise. Any logic that is weaker than another can make more distinctions. It is an ideal setting in which to “scientifically” explore the particular class of questions it was designed to explore (assuming we don’t use an a priori philosophical justification). I fail to see, however, what makes it more or less “scientific” (or what that word means in the context in which you’re using it). One could therefore argue that reverse mathematics in general is how we should explore such distinctions, but reverse mathematics is not necessarily the question a practitioner is interested in.

I think this post is begging the question by accepting as “obvious” certain informal philosophical axioms and using that to justify their formalization, presenting _other_ formalizations as suspect but never questioning the assumed informal axioms. For example, why is there nothing wrong with the principle of explosion, but the law of excluded middle is no longer intellectually honest? Why is it assumed that in a proof of existence I must name a specific example, yet in a proof of non-universality I do not need to name the specific counter-example? What is the ultimate justification for this?

It is perfectly true that some information is lost if we make use of LEM, but the same is true for using the principle of explosion — you lose the specific source of the contradiction. Why is the loss of one type of information “important” while the other isn’t? I could be wrong about this, but I think most logics — like most computations — erase some information, even proof-relevant ones (I believe a realizer is not required to have enough information to compute the input). So who decides which information is important and which isn’t?

I am sure there are good answers to those questions but they stem from what you want to _do_ with the logic rather than from eternal informal axioms which are “obviously” true.

Also, flat-earthers still exist: https://theflatearthsociety.org/home/

Without detracting from the content of this post, a minor correction: the statement which you (perhaps colloquially) call ‘Euclid’s Theorem’ is not Euclid’s Theorem, which states that there are infinitely many primes.

I think, but am not sure, that even the proof of the square root’s irrationality shouldn’t be attributed to Euclid.

Of course, since you start by quoting uninformed critics of constructivism, this error may reflect their being similarly uninformed about mathematical history.

Posted for Anthony Clayden, who is having trouble with WordPress at the moment:

“Euclid shows that the square root of two is irrational by deriving a contradiction from the assumption that it is.”

I find that unclear/contradictory as stated. I think you mean: Euclid shows that the square root of two is not rational, by deriving a contradiction from the assumption that it is rational.

And what I’m interested in is how to identify whether a theorem “expresses a negation”, so is susceptible to ‘direct proof’ deriving a contradiction, vs. “expresses a …” (what exactly?) so is susceptible of ‘indirect proof’/by contradiction for a classicist but not a constructivist.

Do you have in mind some atomic formulation of theorems, such that you can always tell positives from negatives? The appearance or not of a negation sign is surely arbitrary and depends on the choice of meaning of the atomic predicates we use to express it? (IOW, for this example, why do you regard ‘irrational’ as non-atomic and unpack it to ‘not rational’? Could we have a definition that makes irrational atomic, and defines rational in terms of it?)

Thanks, I’ve made a mild correction to clarify. I am assuming a “standard” parse of the statements, so that “irrational” is “not rational”, so that the outermost form is a negation. It would be possible, I imagine, to define “irrational” positively, but this wouldn’t address my main, rather elementary, concern. It’s just a convenient example of a genuine proof by contradiction.

It is indeed possible to consider redefining certain concepts which are currently implemented as negations, as positive propositions. From a classical point of view, usually this will make no difference, but constructively it matters. An example is the difference between inequality and apartness in constructive analysis: whereas there is not (in general) any non-trivial content to a proof that two real numbers are not equal, a proof that they are *apart* would require exhibiting some concrete data which demonstrates this.

Constructive math, as a paradigm which allows us to retain meaningful distinctions, is the ideal setting in which to develop these concepts and experiment with them scientifically. It is often quite an ordeal to find the right combination of “proof relevance”, negativity and positivity for our definitions such that the desired theorem is actually true: but once we have done so, I think we have learnt something interesting.

This comes up, for instance, in the existence of numerous non-equivalent ways of erasing computational content from propositions. These include:

1. Quotienting

2. Squashing (essentially, set comprehension on the unit type)

3. Double negation

Various theorems, when stated with one or the other will have different outcomes; continuity theorems are a nice example of where it pays to take these distinctions quite seriously.

(Thank you for posting my q, and for your reply.)

“… so that the outermost form is a negation.”

A version where the outermost form is not a negation (arguably): the set of the rational square roots of 2 is empty.

All the proofs I can find for the irrationality of the square root of 2 approach it by contradiction. Essentially they start: suppose there is a rational root; or use such-and-such a formulation to construct the root. Then they proceed to a contradiction; or show that the construction regresses without end.

Speculation: a constructivist proof that something can’t be constructed must proceed by contradiction.

Contrast: proof that the root of 2 is not transcendental. It is direct (and trivial): we exhibit a polynomial with integer coefficients of which it is a root.

> the set of the rational square roots of 2 is empty.

How do you define “is empty”? If you mean literally “= {}”, then we can think about how we’d go about proving the original statement, { x ∈ ℚ | x² = 2 } = {}. We do this via set extensionality. Our goal is to prove ∀ x ∈ ℚ. x² = 2 ⇔ x ∈ {}. So, suppose x ∈ ℚ. Note that, by definition of {}, x ∈ {} is equivalent to ⊥ (false), so we need x² = 2 ⇔ ⊥. We want to prove the implication in both directions. Leftward is by explosion. Rightward, x² = 2 ⇒ ⊥ is exactly ¬ (x² = 2), and we prove it by assuming x² = 2 and deriving a contradiction in the usual way (this is the meat of the proof).

The key is that {} is a fundamentally negative thing, and can introduce negations without one noticing. The set language acts as syntactic sugar around the statement that takes effort to prove. Most of what I wrote above would be elided from a normal informal proof – we’d say “To prove that the set of rationals x such that x² = 2 is empty, we assume that x is rational and prove ¬ (x² = 2)…”.

Perhaps “outermost form is a negation” was the wrong thing to say, given that we can always add useless crap to the outside of a formula (A ⇔ ⊥ ∨ ⊥ ∨ … ∨ A ∨ … ∨ ⊥), but it doesn’t change the fact that we will prove the negation (by deriving a contradiction) as part of the proof. Whenever the outermost form is a negation, we are forced immediately to assume the body and derive ⊥ (assuming no use of lemmata). But, there are also statements containing negations where the negation is never proved, like ⊤ ∨ ¬ A.

Thanks @James W “The key is that {} is a fundamentally negative thing,”. Yes I expected someone would say that, which is why I added “(arguably)”.

And your point about “outermost form is a negation” is also well made.

So we seem to be lacking a criterion to tell whether a “proof by contradiction” — that is, that ends in a contradiction — is a kosher Constructivist proof. Because we can’t tell reliably whether a theorem “expresses a negation”.

> So we seem to be lacking a criterion to tell whether a “proof by contradiction” — that is, that ends in a contradiction — is a kosher Constructivist proof. Because we can’t tell reliably whether a theorem “expresses a negation”.

Are we talking formally or informally? Formally, the answer is simple: work in a constructive foundation, then all of your proofs will be constructive by construction ;-). Even working in a non-constructive foundation, we can inspect a formal proof and see whether it uses the law of excluded middle, double negation elimination, or whatever non-constructive rules we have.
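Proof assistants make this inspection concrete. In Lean 4, for instance (an illustration of my own), the `#print axioms` command reports whether a finished proof depends on `Classical.choice`, Lean’s gateway to nonconstructive reasoning:

```lean
theorem tripleNeg (A : Prop) : ¬¬¬A → ¬A :=
  fun x y => x (fun z => z y)

-- Reports that the proof depends on no axioms: it is constructive.
#print axioms tripleNeg
```

A proof that invoked excluded middle or double negation elimination via the `Classical` namespace would instead be reported as depending on `Classical.choice`.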

Informally, intuition is gained by rigorous practice of working in a constructive setting, just as intuition about the real numbers is gained by doing analysis. Disjunction and existence feel more powerful, but negation neutralises that power. One becomes used to following the data around a proof, checking that none were manufactured in the middle. One relies on ideas of continuity and computability to give a sense of what can’t be proven.

I feel that I should say something else about the outermost form being a negation. The shape of a proof is highly dependent on the form of the proposition being proved. For example, if we’re trying to prove A ∧ B and we have no useful assumptions, we’re going to prove A, then prove B. Propositions have recursive structure, and so do proofs. A proof partially explores the syntax tree of the proposition. If the proof reaches a ¬A statement with no useful assumptions, we begin a proof of a negation: assume A and derive a contradiction. Then we carry on with the rest of the proof. If at any point we reach some A, and then say “assume ¬A […] contradiction”, we have done a proof by contradiction, which constructively only proves ¬¬A – a mismatch with A.
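The “mismatch” James describes is visible in the types themselves. A Lean 4 sketch of my own: the indirect template, “assuming ¬A leads to contradiction”, delivers exactly ¬¬A, one double negation short of A.

```lean
-- The indirect template hands us ¬¬A (definitionally, ¬A → False), not A.
example (A : Prop) (h : ¬A → False) : ¬¬A := h
```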

@James

> Disjunction and existence feel more powerful, but negation neutralises that power

I feel that this statement, like the post itself, is begging the question. Ultimately, it rests on a philosophical notion of what negation (and its “power”) means. With regard to A ⇒ ¬¬A and ¬¬A ⇒ A, there can be good philosophical and intuitive arguments for accepting both, or just one (either one!), and if we accept just one, we get a weaker logic that can obviously make more distinctions. But there can be valid arguments for classicism, constructivism, or paraconsistency, and none of them is more universally intuitive than the others. The “natural” intuition, if one exists at all, largely depends on circumstance and the intended use of the logic.

pressron, that was not supposed to be an argument at all. I was just describing what happens when I work in a constructive setting. Moreover, I don’t think this post is an argument for constructive mathematics – it is merely a clarification of what constructive mathematics is.

@James

I’m not saying you or the post are making an argument for constructivism, but that you’re begging the question by arguing that there is a “real” distinction between a direct and an indirect proof, and I think I can prove it:

A constructivist would say that a proof by contradiction in cases where LEM can be proven (say, on finite sets of naturals) is a constructive proof. He may even call it a direct proof, and give a nice reason why it’s so. It all really comes down to your views on infinity. A finitist, whether of the constructivist or the formalist persuasion, believes that there is a “real” difference between direct and indirect proof in infinitary settings (the formalist calls that distinction one between a “real” and an “ideal” proposition or proof). A Platonist, on the other hand, would say that there really is no distinction, and the same excuse you’d give for LEM in a finitary setting applies equally in infinitary settings as well. A Platonist would say that you’re being a hypocrite for drawing the distinction only in the case of proving existentials but not in the case of refuting universals, and that you’re being “intellectually dishonest” by ignoring the distinction between a construction of a specific counterexample and merely “hinting” that one exists.

The Platonist would say that there is no real difference which one is then free to ignore, but an artificial distinction drawn at an arbitrary line: the computable. He would say that there is no “real” reason to draw the line there, because if there were, why not point out the “real” difference between constructive proofs of propositions with an intractable descriptive complexity and constructive proofs of tractable propositions, as an ultra-finitist would? That is a difference which you’re then free to ignore, but it would be intellectually dishonest to say it doesn’t exist.

> you’re begging the question by arguing that there is a “real” distinction between a direct and an indirect proof

No. Again, it is not an argument. The only way to compare classical and constructive mathematics is to only assume what is common to both. It just happens that what is common to both from a formal perspective is constructive mathematics. Thus, that is our language of choice for this discussion.

The distinction between constructive and nonconstructive proof is real in the sense that constructive mathematicians (and many conventional mathematicians) make a distinction. They may well be arbitrary in making that distinction, but that is not the subject of this discussion. The fact that the distinction is made by constructive mathematicians means that the distinction is already there, and we are just trying to work out precisely what they mean. Specifically, they don’t mean that no proofs can end at deriving a contradiction.

To continue your analogy, this post is like a post that points out that finitists do have a notion of natural number, contrary to an assumption that they don’t because there are infinitely many natural numbers. It may have a little section justifying the finitistic approach, but the point of the post is to see what *can* be done finitistically, compared to what can be done classically, say. It is an investigation (under the assumptions of finitism), not an argument.

@James

I agree with everything you’ve now said, but that is not how I read the post, which contains statements like “of course one proves negative statements by contradiction!” (paraconsistency may require something stronger), “It is entirely a matter of intellectual honesty” (only if intellectually you accept the premise), “’oh that’s so cheap.’ Precisely.” (yet no cheaper than accepting non-constructive refutations) and “anyone is free to ignore the distinction for the sake of convenience” (or to make it up for the sake of convenience).

I guess that what bothered me was the tone, which I read as judgmental and as expressing an attitude towards logic that Wittgenstein said invites needless dogmatism and dispute (he was referring to positions held by some logicians regarding the foundation of mathematics). There’s already more than enough needless dogmatism and dispute over formalism in CS as it is.

I think that the important point about the distinction between proof by contradiction and a proof that ends in contradiction in constructive settings could be made more, well, constructively, by pointing out that there are important circumstances where we wish to make the distinction and important circumstances where we wish not to make it (and it doesn’t mean we’re ignoring it, just that it’s not there in those cases).

I wonder whether you may be attacking this from the wrong angle.

According to Wikipedia, proof by contradiction of Q (of any form) is: “assume ¬Q and derive a contradiction”. But you are hinting at a deeper distinction than just the top-level form of the proof: Something I’ll call “fundamentally constructive proofs” vs “fundamentally non-constructive proofs”, based on whether or not you can trivially rewrite them not to use the axiom of excluded middle (or double negation elimination).

For example, we can certainly pour Euclid’s proof into “proof by contradiction” form, where Q is ¬P, so we start by assuming ¬¬P, eliminate the double negation and proceed. But it can be trivially rewritten, so really it’s still “fundamentally constructive”.

I’d love to see a deeper treatment of this topic, because although I can intuit the difference (between e.g., your example ‘irrational exponentiation’ proof and direct proof by stating two example irrationals), I’m not sure how I would draw a formal distinction. In other words: Given a proposition that’s provable in a constructive proof system, (how) can a formal distinction be drawn between a “fundamentally constructive proof” and a “fundamentally non-constructive proof”. How trivial does the rewrite have to be?

(PS, probably a typo: “that there are two rationals whose exponent is rational”)

As you are aware – you were the first reply to it – Andrej Bauer pointed out this confusion and its commonness just under seven years ago: http://math.andrej.com/2010/03/29/proof-of-negation-and-proof-by-contradiction/

My impression is that students often attach a (false) air of “sophistication” to a (real) proof by contradiction. This leads to overuse of proof by contradiction, sometimes ridiculously so. It doesn’t strike me as particularly uncommon that a student will prove some proposition P via the following logical structure: “Assume not P. Then because (proceeds to prove P without the assumption), we have a contradiction. Therefore P.” Part of my interpretation of this is that many students think this is “how mathematicians talk” and that this is what a proof “sounds like”.

I’m much more confident that few students are taught the downsides of proof by contradiction. And since it certainly can make many proofs significantly shorter (which is unsurprising in light of the lost information), it comes across as more elegant.

Ah, sorry for the redundancy. This came up because a friend pointed out that her son was being taught nonsense about this at a top CS department. Further investigation reveals it is all too common at other top departments as well. The biggest critics of logic in CS seem to be the ones who know the least about the subject.

The redundancy is certainly not something to be sorry about. I personally view this as a very significant problem in math education, and the more it’s talked about the better. In response to Andrej’s post, Timothy Gowers did acknowledge that he had this confusion: https://gowers.wordpress.com/2010/03/28/when-is-proof-by-contradiction-necessary/#comment-6984

If Timothy Gowers can make this mistake, then it’s quite believable that many mathematics professors are making the same mistake and at least implicitly teaching it to their students. I’m not sure how to effectively combat this issue. The good news is that most people I’ve seen who’ve been referred to Andrej’s post do seem to grasp it and agree that it is a significant difference. This suggests that the problem is primarily one of awareness, so anything that raises awareness is a good thing.