There have recently arisen some misguided claims about a supposed opposition between functional and object-oriented programming. The claims amount to a belated recognition of a fundamental structure in type theory first elucidated by Jean-Marc Andreoli, and developed in depth by Jean-Yves Girard in the context of logic, and by Paul Blain Levy and Noam Zeilberger in the context of programming languages. In keeping with the general principle of computational trinitarianism, the concept of polarization has meaning in proof theory, category theory, and type theory, a sure sign of its fundamental importance.

Polarization is not an issue of language design; it is an issue of type structure. The main idea is that types may be classified as being *positive* or *negative*, with the positive being characterized by their structure and the negative being characterized by their behavior. In a sufficiently rich type system one may consider, and make effective use of, both positive and negative types. There is nothing remarkable or revolutionary about this, and, truly, there is nothing really new about it, other than the terminology. But through the efforts of the above-mentioned researchers, and others, we have learned quite a lot about the importance of polarization in logic, languages, and semantics. I find it particularly remarkable that Andreoli’s work on proof search turned out to also be of deep significance for programming languages. This connection was developed and extended by Zeilberger, on whose dissertation I am basing this post.

The simplest and most direct way to illustrate the ideas is to consider the product type, which corresponds to conjunction in logic. There are two ways to formulate the rules for the product type that are completely equivalent from the point of view of inhabitation, but quite distinct from the point of view of computation. Let us first state them as rules of logic, then equip these rules with proof terms so that we may study their operational behavior. For the time being I will refer to these as Method 1 and Method 2, but after we examine them more carefully, we will find more descriptive names for them.

Method 1 of defining conjunction is perhaps the most familiar. It consists of this introduction rule

$$\frac{\Gamma\vdash A\;\mathsf{true}\qquad\Gamma\vdash B\;\mathsf{true}}{\Gamma\vdash A\wedge B\;\mathsf{true}}$$

and the following two elimination rules

$$\frac{\Gamma\vdash A\wedge B\;\mathsf{true}}{\Gamma\vdash A\;\mathsf{true}}\qquad\frac{\Gamma\vdash A\wedge B\;\mathsf{true}}{\Gamma\vdash B\;\mathsf{true}}.$$

Method 2 of defining conjunction is only slightly different. It consists of the same introduction rule

$$\frac{\Gamma\vdash A\;\mathsf{true}\qquad\Gamma\vdash B\;\mathsf{true}}{\Gamma\vdash A\wedge B\;\mathsf{true}}$$

and one elimination rule

$$\frac{\Gamma\vdash A\wedge B\;\mathsf{true}\qquad\Gamma,A\;\mathsf{true},B\;\mathsf{true}\vdash C\;\mathsf{true}}{\Gamma\vdash C\;\mathsf{true}}.$$

From a logical point of view the two formulations are interchangeable in that the rules of the one are admissible with respect to the rules of the other, given the usual structural properties of entailment, specifically reflexivity and transitivity. However, one can discern a difference in “attitude” in the two formulations that will turn out to be a manifestation of the concept of polarity.

Method 1 is a formulation of the idea that a proof of a conjunction is *anything that behaves conjunctively*, which means that it supports the two elimination rules given in the definition. There is no commitment to the internal structure of a proof, nor to the details of how projection operates; as long as there are projections, then we are satisfied that the connective is indeed conjunction. We may consider that the elimination rules define the connective, and that the introduction rule is derived from that requirement. Equivalently we may think of the proofs of conjunction as being *coinductively defined* to be as large as possible, as long as the projections are available. Zeilberger calls this the *pragmatist* interpretation, following Count Basie’s principle, “if it sounds good, it is good.”

Method 2 is a direct formulation of the idea that the proofs of a conjunction are *inductively defined* to be as small as possible, as long as the introduction rule is valid. Specifically, the single introduction rule may be understood as defining the structure of the sole form of proof of a conjunction, and the single elimination rule expresses the induction, or recursion, principle associated with that viewpoint. Specifically, to reason from the fact that $A\wedge B\;\mathsf{true}$ to derive $C\;\mathsf{true}$, it is enough to reason from the data that went into the proof of the conjunction, namely $A\;\mathsf{true}$ and $B\;\mathsf{true}$, to derive $C\;\mathsf{true}$. We may consider that the introduction rule defines the connective, and that the elimination rule is derived from that definition. Zeilberger calls this the *verificationist* interpretation.

These two perspectives may be clarified by introducing proof terms, and the associated notions of reduction that give rise to a dynamics of proofs.

When reformulated with explicit proofs, the rules of Method 1 are the familiar rules for ordered pairs:

$$\frac{\Gamma\vdash M:A\qquad\Gamma\vdash N:B}{\Gamma\vdash\langle M,N\rangle:A\wedge B}\qquad\frac{\Gamma\vdash M:A\wedge B}{\Gamma\vdash\mathsf{fst}(M):A}\qquad\frac{\Gamma\vdash M:A\wedge B}{\Gamma\vdash\mathsf{snd}(M):B}.$$

The associated reduction rules specify that the elimination rules are post-inverse to the introduction rule:

$$\mathsf{fst}(\langle M,N\rangle)\mapsto M\qquad\mathsf{snd}(\langle M,N\rangle)\mapsto N.$$

In this formulation the proposition $A\wedge B$ is often written $A\times B$, since it behaves like a Cartesian product of proofs.
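As a quick sketch of this lazy reading in code (Python, with zero-argument thunks standing in for unevaluated proofs; the names `LazyPair`, `fst`, and `snd` are my own):

```python
# Method 1 rendered as code: the lazy (negative) product, characterized
# by its projections.  Components are stored as thunks and evaluated
# only when projected.
class LazyPair:
    def __init__(self, fst_thunk, snd_thunk):
        self._fst = fst_thunk          # zero-argument callables (thunks)
        self._snd = snd_thunk

    def fst(self):
        return self._fst()

    def snd(self):
        return self._snd()

def boom():
    raise RuntimeError("never evaluated")

# fst(<M, N>) |-> M, without ever demanding N:
p = LazyPair(lambda: 1, boom)
print(p.fst())   # -> 1
```

The second component is never forced, exactly as the reduction rule for `fst` permits.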

When formulated with explicit proofs, Method 2 looks like this:

$$\frac{\Gamma\vdash M:A\qquad\Gamma\vdash N:B}{\Gamma\vdash M\otimes N:A\wedge B}\qquad\frac{\Gamma\vdash M:A\wedge B\qquad\Gamma,x:A,y:B\vdash P:C}{\Gamma\vdash\mathsf{split}(M;x.y.P):C}$$

with the reduction rule

$$\mathsf{split}(M\otimes N;x.y.P)\mapsto[M,N/x,y]P.$$

With this formulation it is natural to write $A\wedge B$ as $A\otimes B$, since it behaves like a tensor product of proofs.
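A corresponding sketch of the eager reading (again Python; `Tensor` and `split` are my own names for $\otimes$ and its eliminator):

```python
# Method 2 rendered as code: the eager (positive) tensor product.
# Both components are plain values, evaluated when the pair is
# constructed, and the single eliminator pattern-matches against
# the sole introduction form.
class Tensor:
    def __init__(self, fst, snd):
        self.fst = fst       # already-evaluated values, not thunks
        self.snd = snd

def split(pair, body):
    """split(M; x.y.P): hand both components to the body at once."""
    return body(pair.fst, pair.snd)

print(split(Tensor(3, 4), lambda x, y: x + y))   # -> 7
```

The body plays the role of $x.y.P$: it receives both components simultaneously, as the single elimination rule demands.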

Since the two formulations of “conjunction” have different internal structure, we may consider them as *two different connectives*. This may, at first, seem pointless, because it is easily seen that $x:A\times B\vdash M:A\otimes B$ for some term $M$, and that $x:A\otimes B\vdash N:A\times B$ for some term $N$, so that the two connectives are logically equivalent, and hence interchangeable in any proof. But there is nevertheless a reason to draw the distinction, namely that they have different dynamics.
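The two coercion terms witnessing this equivalence can be written out directly; this sketch re-renders both products minimally in Python (all names are mine):

```python
class LazyPair:                         # A x B: components as thunks
    def __init__(self, fst, snd):
        self.fst, self.snd = fst, snd   # zero-argument callables

class Tensor:                           # A (x) B: components evaluated up front
    def __init__(self, fst, snd):
        self.fst, self.snd = fst, snd

def to_tensor(p):   # a term M with  x : A x B |- M : A (x) B
    return Tensor(p.fst(), p.snd())

def to_pair(t):     # a term N with  x : A (x) B |- N : A x B
    return LazyPair(lambda: t.fst, lambda: t.snd)

q = to_pair(to_tensor(LazyPair(lambda: 1, lambda: 2)))
print(q.fst() + q.snd())   # -> 3
```

Note that `to_tensor` forces both thunks, which is precisely where the dynamics of the two connectives diverge.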

It is easy to see why. From the pragmatic perspective, since the projections act independently of one another, there is no reason to insist that the components of a pair be evaluated before they are used. Quite possibly we may only ever project the first component, so why bother with the second? From the verificationist perspective, however, we are pattern matching against the proof of the conjunction, and are demanding both components at once, so it makes sense to evaluate both components of a pair in anticipation of future pattern matching. (Admittedly, in a structural type theory one may immediately drop one of the variables on the floor and never use it, but then why give it a name at all? In a substructural type theory such as linear type theory, this is not a possibility, and the interpretation is forced.) Thus, the verificationist formulation corresponds to *eager* evaluation of pairing, and the pragmatist formulation to *lazy* evaluation of pairing.
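The operational contrast can be observed directly in a sketch (Python; thunks model the lazy pair, while Python's eager evaluation of constructor arguments stands in for eager pairing):

```python
class LazyPair:
    def __init__(self, fst, snd):          # thunks, forced only on projection
        self.fst, self.snd = fst, snd

class Tensor:
    def __init__(self, fst, snd):          # values, forced at construction
        self.fst, self.snd = fst, snd

def boom():
    raise RuntimeError("forced")

# Projecting the first component of a lazy pair never touches the second.
print(LazyPair(lambda: 1, boom).fst())     # -> 1; boom is never called

# Constructing the eager pair evaluates both arguments first, so the
# embedded error is raised before the pair even exists.
try:
    Tensor(1, boom())
except RuntimeError:
    print("eager pair forced both components")
```

The lazy pair tolerates a diverging (here, erroring) second component; the eager pair does not.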

Having distinguished the two forms of conjunction by their operational behavior, it is immediately clear that both forms are useful, and are by no means opposed to one another. This is why, for example, the concept of a *lazy language* makes no sense, rather one should instead speak of *lazy types*, which are perfectly useful, but by no means the only types one should ever consider. Similarly, the concept of an *object-oriented language* seems misguided, because it emphasizes the pragmatist conception, to the exclusion of the verificationist, by insisting that everything be an object characterized by its methods.

More broadly, it is useful to classify types into two *polarities*, the *positive* and the *negative*, corresponding to the verificationist and pragmatist perspectives. Positive types are *inductively defined* by their introduction forms; they correspond to *colimits*, or *direct limits*, in category theory. Negative types are *coinductively defined* by their elimination forms; they correspond to *limits*, or *inverse limits*, in category theory. The concept of polarity is intimately related to the concept of *focusing*, which in logic sharpens the concept of a cut-free proof and elucidates the distinction between synchronous and asynchronous connectives, and which in programming languages provides an elegant account of pattern matching, continuations, and effects.
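As an illustrative sketch of the two polarities (Python; the encodings and the `hd`/`tl` names are my own, and Python of course enforces neither induction nor coinduction, with delay standing in for the latter):

```python
from dataclasses import dataclass

# A positive type: inductively defined by its introduction forms
# (constructors), and consumed by case analysis over those forms.
@dataclass
class Zero:
    pass

@dataclass
class Succ:
    pred: object   # Zero or Succ

def to_int(n):
    # the recursion principle: structural recursion over the constructors
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

# A negative type: coinductively defined by its elimination forms;
# a stream is anything that answers the observations hd and tl.
class Stream:
    def __init__(self, hd, tl_thunk):
        self.hd, self._tl = hd, tl_thunk
    def tl(self):
        return self._tl()

def nats(n=0):
    # the stream of all naturals, well-defined because tl is delayed
    return Stream(n, lambda: nats(n + 1))

print(to_int(Succ(Succ(Zero()))))  # -> 2
print(nats().tl().tl().hd)         # -> 2
```

The natural numbers are as small as their constructors allow; the stream is as large as its observations allow, the colimit/limit contrast in miniature.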

As ever, enduring principles emerge from the interplay between proof theory, category theory, and type theory. Such concepts are found in nature, and do not depend on cults of personality or the fads of the computer industry for their existence or importance.

*Update:* word-smithing.