The Christian doctrine of trinitarianism states that there is one God that is manifest in three persons, the Father, the Son, and the Holy Spirit, who together form the Holy Trinity. The doctrine of *computational trinitarianism* holds that computation manifests itself in three forms: proofs of propositions, programs of a type, and mappings between structures. These three aspects give rise to three sects of worship: Logic, which gives primacy to proofs and propositions; Languages, which gives primacy to programs and types; Categories, which gives primacy to mappings and structures. The central dogma of computational trinitarianism holds that Logic, Languages, and Categories are but three manifestations of one divine notion of computation. There is no preferred route to enlightenment: each aspect provides insights that comprise the experience of computation in our lives.

Computational trinitarianism entails that any concept arising in one aspect should have meaning from the perspective of the other two. If you arrive at an insight that has importance for logic, languages, and categories, then you may feel sure that you have elucidated an essential concept of computation—you have made an enduring scientific discovery. Advances in our understanding of computation may arise from insights gained in many ways (any data is useful and relevant), but their essential truth does not depend on their popularity.

Logic tells us what propositions exist (what sorts of thoughts we wish to express) and what constitutes a proof (how we can communicate our thoughts to others). Languages (in the sense of programming) tell us what types exist (what computational phenomena we wish to express) and what constitutes a program (how we can give rise to that phenomenon). Categories tell us what structures exist (what mathematical models we have to work with) and what constitutes a mapping between them (how they relate to one another). In this sense all three have *ontological* force; they codify what *is*, not how to describe what is already given to us. In this sense they are *foundational*; if we suppose that they are merely *descriptive*, we would be left with the question of where these previously given concepts arise, leading us back again to foundations. It is the foundations that I wish to describe here, because I believe it will help to clarify some common misunderstandings about the notions of proposition, type, and structure. Of particular interest here is that a “type system” is *not*, under this conception, an arbitrary collection of conditions imposed on a previously given notion of program (whether written with horizontal lines, or not). It is, rather, a way to say what the programs are in the first place, and what they mean as proofs and as mappings.

Here I will outline the basic correspondences between logic, languages, and categories by examining their *structural properties* (and, for now, nothing more).

The fundamental notion in logic is that of *entailment*, written $A_1, \dots, A_n \vdash A$, expressing *derivability* of $A$ from $A_1, \dots, A_n$. This means that $A$ is derivable from the rules of logic, given the $A_i$ as *axioms*. In contrast to *admissibility* (which I will not discuss further here) this form of entailment *does not* express implication! In particular, an entailment is *never* vacuously true. Entailment enjoys at least two crucial *structural properties*, making it a pre-order:

$A \vdash A$ (*reflexivity*), and if $\Gamma \vdash A$ and $\Gamma, A \vdash B$, then $\Gamma \vdash B$ (*transitivity*).

In addition we often have the following structural properties:

If $\Gamma \vdash B$, then $\Gamma, A \vdash B$ (*weakening*); if $\Gamma, A, B, \Delta \vdash C$, then $\Gamma, B, A, \Delta \vdash C$ (*exchange*); if $\Gamma, A, A \vdash B$, then $\Gamma, A \vdash B$ (*contraction*).

These state that “extra” axioms do not affect deduction; the “order” of axioms does not matter; “duplication” of axioms does not matter. (These may seem inevitable, but in *substructural* logics any or all of these may be denied.)
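Under the proofs-as-programs reading, these three properties correspond to simple operations on functions, with a hypothesis in the context playing the role of a function argument. A minimal Haskell sketch (the names `weaken`, `exchange`, and `contract` are mine, for illustration only):

```haskell
-- Proofs-as-programs reading of the extra structural rules:
-- a hypothesis A in the context corresponds to an argument of type a.

-- Weakening: an "extra" hypothesis may simply be ignored.
weaken :: b -> (a -> b)
weaken y = \_ -> y

-- Exchange: the "order" of hypotheses does not matter.
exchange :: (a -> b -> c) -> (b -> a -> c)
exchange f = \y x -> f x y

-- Contraction: a "duplicated" hypothesis may be shared.
contract :: (a -> a -> b) -> (a -> b)
contract f = \x -> f x x
```

A substructural logic that denies, say, contraction corresponds to a language in which `contract` is not definable, because arguments cannot be used twice.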

In languages we have the fundamental concept of a *typing judgement*, written $x_1{:}A_1, \dots, x_n{:}A_n \vdash M : A$, stating that $M$ is an expression of type $A$ involving variables $x_i$ of type $A_i$. A typing judgement must satisfy the following basic structural properties:

$x : A \vdash x : A$ (*variables*), and if $\Gamma \vdash M : A$ and $\Gamma, x{:}A \vdash N : B$, then $\Gamma \vdash [M/x]N : B$ (*substitution*).

We may think of the variables as names for “libraries”, in which case the first states that we may use any library we wish, and the second states closure under “linking” (as in the Unix tool ld or its relatives), with $[M/x]N$ being the result of linking $M$ in for the library variable $x$ in $N$. Typically we expect analogues of the “extra”, “reordering”, and “duplication” axioms to hold as well, though this ain’t necessarily so. I will leave their formulation as an exercise for the reader.
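The “linking” reading of substitution can be made concrete in any functional language: a client with a free library variable is a function of that variable, and linking is application. A toy Haskell illustration (the names `library`, `client`, and `linked` are hypothetical):

```haskell
-- A "library" is a closed expression M of some type A.
library :: Int
library = 42

-- A client N with a free library variable x : A is modelled
-- as a function of x.
client :: Int -> String
client x = "the answer is " ++ show x

-- Linking, [M/x]N, replaces the free variable by the library's
-- implementation: it is just application.
linked :: String
linked = client library
```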

In categories we have the fundamental concept of a *mapping* $f : A \to B$ between *structures* $A$ and $B$. The most elementary structures, perhaps, are sets, and mappings are functions, but it is more common to consider, say, that $A$ and $B$ are topological spaces, and $f$ is a continuous function between them. Mappings satisfy analogous structural properties:

$\mathrm{id} : A \to A$ (*identity*), and if $f : A \to B$ and $g : B \to C$, then $g \circ f : A \to C$ (*composition*).

These express, respectively, the existence of the identity map, and the closure of maps under composition. They correspond to reflexivity and transitivity of entailment, and to the library and linking rules of languages. As with types, one may expect additional closure conditions corresponding to the “extra”, “reordering”, and “duplication” axioms by giving suitable meaning to multiple assumptions. I will not go into this here, but numerous standard sources treat these conditions in detail.

What I find captivating about computational trinitarianism is that it is *beautiful*! Imagine a world in which logic, programming, and mathematics are unified, in which every proof corresponds to a program, every program to a mapping, every mapping to a proof! Imagine a world in which *the code is the math*, in which there is no separation between the reasoning and the execution, no difference between the language of mathematics and the language of computing. Trinitarianism is the *central organizing principle* of a theory of computation that integrates, unifies, and enriches the language of logic, programming, and mathematics. It provides a *framework for discovery*, as well as analysis, of computational phenomena. An innovation in one aspect must have implications for the other two; a good idea is a good idea, in whatever form it may arise. If an idea does not make good sense logically, categorially, and typically (sorry for the neologism), then it cannot be a manifestation of the divine.
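As a tiny concrete instance of the correspondence (my example, not from the original post): a proof that $A \wedge B$ entails $B \wedge A$, a program of type `(a, b) -> (b, a)`, and the canonical mapping $A \times B \to B \times A$ are one and the same object:

```haskell
-- One definition, three readings:
--   Logic:      a proof of the entailment A ∧ B ⊢ B ∧ A
--   Languages:  a program of type (a, b) -> (b, a)
--   Categories: the canonical mapping A × B -> B × A
swapConj :: (a, b) -> (b, a)
swapConj (x, y) = (y, x)
```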

There are currently great strides being made in just this direction. I would recommend following the developments of homotopy type theory (https://homotopytypetheory.org/).

[…] The same fault could be found in Harper’s own favorite formalism, which he elucidates in The Holy Trinity (by the way, it is clear that by “computation” he means […]


I find these sentiments on “Trinitarianism” disturbing. Somebody will read it and follow this mysticism. We need more science, fewer mystics, fewer mantras. There are at least five corresponding fields of science: topology, quantum mechanics, logic, type theory, category theory. Google for the Rosetta Stone research paper. Also make sure to check Curry, Howard, and Lambek (note for readers: I suppose the author knows them, but somehow forgot to mention them). Most of the papers are available for free online.

Have a sense of humor ….

[…] video of the diagrams I was using to illustrate the connections between my reading of Kant and computational trinitarianism. Moreover, I can now see that what I was saying about co-inductive types is not quite right, […]

[…] Suggested Reading: Immanuel Kant, ‘Introduction’, Critique of Pure Reason (https://ebooks.adelaide.edu.au/k/kant/immanuel/k16p/introduction.html); Per Martin-Lof, ‘Analytic and Synthetic Judgments in Type Theory’ (http://archive-pml.github.io/martin-lof/pdfs/Martin-Lof-Analytic-and-Synthetic-Judgements-in-Type-Theory.pdf); Pei Wang, ‘Artificial General Intelligence: A Gentle Introduction’ (https://sites.google.com/site/narswang/home/agi-introduction); Robert Harper ‘The Holy Trinity’ (https://existentialtype.wordpress.com/2011/03/27/the-holy-trinity/) […]

[…] Harper refers to the inter-referential nature of type theory, proof theory, and category theory as computational trinitarianism. To quote […]

[…] by Bob Harper’s recent postings, I too have a confession to make. I know what is morally right; but sometimes the […]

[…] I never thought of it this way […]

If we use a common framework for both mathematics and computation, then the former becomes part of the latter. The way mathematics looks should then be dependent on its computational model. If we consider mathematics an abstraction, switching the underlying computational model would be an elegant way to tell whether something can be expressed in that model and, if so, how efficiently it can be evaluated.

Sometimes, we will be interested in purely mathematical results, in which case, only expressiveness should be relevant. An example would be defining a computational model. We do, however, get “for free” how efficiently that new model can be simulated in terms of the old.

[…] The Holy Trinity In the process of making plans for actually going to graduate school, I’ve been spending some thinking about what I want to research and what motivations and goals are. Apart from the technical things I’m interested in, I’m starting to believe that what we need more than ever is a “philosophy of computation” — ideas and concepts that define computation and our relationship to it at a higher level. Robert Harper’s recent blog post is a milestone on that journey. […]

Functional programming languages are again becoming popular, and your blog deals with the CS behind them. What is your recommendation for a programmer in the industry who wishes to learn the science behind such languages? Attending a CS course in a college might not be possible for working people. What else can one do?

Even to understand your blog entries fully one requires a strong foundation in CS.

If you’re willing to jump in at the deep end, read my *Practical Foundations for Programming Languages*. For a more practical introduction, you might consult my *Programming in Standard ML*. There are tons of resources for Haskell and O’Caml, which I highly recommend; a quick search should turn up a lot of references.

Computer Scientists are often caught up in the seduction of types and categorizations (as in your argument in “Dynamic languages are static languages”).

I would argue the reverse, that “Static languages are dynamic languages”, because the type is a de facto unit test performed by the compiler. This means that the single type of the dynamic language is a simplification, because types cannot replace unit testing (IMO).

A language with dynamic run-time features and unit testing at compile-time, would provide the best of both worlds.

It’s not a matter of methodology or opinion, it’s a matter of fact. See PFPL for technical details.

I’m a bit confused – you say yourself that in a one-type system you have to do the type checking yourself – and if you read my post I’m not really arguing against your statement.

No matter how beautiful types are (and I don’t disagree with that), I have come to the startling insight that they are merely compile-time unit tests. Lambda calculus is also very beautiful from a mathematical standpoint and I share your excitement about it, however I’m heading down a different road, which I would like to discuss with someone, but your religion analogy apparently goes deep, because everyone thinks functional lazy lambda calculus is THE ONE WAY.

A totally reductionist view would state that there IS only one type and it is the boolean; everything else is just compositions and conventions.

Ah, well – now I’m just waiting for your “Lambda Calculus is THE ONE WAY – it’s a fact” reply :-)

My point is technical: static typing is the prior notion; dynamic typing is a mode of use of static typing (in which one focuses on one type, for some reason). One can retrofit a dynamic language with a system of predicates governing the behavior of expressions of dynamic type, but this is not a static type discipline in my sense. (That was already present in the very definition of the dynamic language.) On this point there can be no dispute; it’s not a matter of opinion.

Methodologically, one may reasonably argue about the role of dynamic typing (that is, of a particular static type of dynamically classified values) relative to the role of a richer concept of typing (that embraces classified values as but one static type of many). It is clear to me, at least, that the benefits of admitting more than one type are enormous. In particular, a static type discipline is fundamental to modular decomposition of programs (it’s how you state the API for the separate code that may not even exist at the moment you compile your code). Once you take the catholic view that I advocate, you quickly realize that dynamic classification is occasionally useful, but is hardly the most important thing around. If, for example, you want to “get down to the metal”, e.g. give access to machine integers as such, not classified as numbers, then you have no choice but to use a static type discipline. And once you realize that type inference eliminates the bureaucracy of static typing (contrary to what one sees in Java, for example), then you realize that static types are your friend, and that you rarely have need of dynamic classification—but it’s there if you want it, as a special case.
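The claim that a dynamic language is a static language with a single type of classified values can be sketched directly. The toy `Dyn` type below is my own illustration, not from PFPL:

```haskell
-- A "dynamic" value is a statically typed value of the single
-- type Dyn, carrying its class tag at run time.
data Dyn = DNum Int
         | DBool Bool
         | DFun (Dyn -> Dyn)

-- "Dynamic typing" is then a run-time check against the tag:
-- exactly the check one must perform by hand in a one-type system.
apply :: Dyn -> Dyn -> Dyn
apply (DFun f) v = f v
apply _        _ = error "run-time type error: not a function"

-- A dynamically classified successor function.
plusOne :: Dyn
plusOne = DFun (\d -> case d of
                        DNum n -> DNum (n + 1)
                        _      -> error "run-time type error: not a number")
```

Every program in the one-type discipline is a perfectly well-typed static program at type `Dyn`; the richer discipline simply lets you also say `Int -> Int` when that is what you mean.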

From a software engineering perspective, it’s interesting that all three views are needed.

In analysis and testing, it’s important to find out how input is mapped to output. In design, what thoughts we want to express. And in programming, what do we need to invoke, and how do we abstract that which we invoke.

Thanks.

another nice little story: Wadler, Proofs are Programs, 2000

(http://homepages.inf.ed.ac.uk/wadler/papers/frege/frege.pdf)

The trouble is that CS degrees have become a kind of gate for high-level programming jobs, except for grandfathered old farts like me (I don’t have any degrees, though I do have a descendant in the second degree). I enjoy the Classics very much, and am glad that people can study them, but I don’t want to live in a society in which the ability to compose Latin verses is a prerequisite for a mid-level government position.

What about machines? Are they under structure?

See my post on languages and machines please. Also the second half of my post on parallelism.

“Machine models may be important to hardware implementors, but they are of essentially no practical importance for programming. The λ-calculus is directly useful for programming.”

I agree, but I never thought I would read this from a computer science professor. Thanks for the honesty!

Rafael

> “Machine models may be important to hardware implementors, but they are of essentially no practical importance for programming.”

?!? This is totally untrue for many programming tasks. If having to take a TLB miss will cause you to miss your performance target (a recent real example), I assure you knowing exactly how your CPU implements page tables is extremely relevant. Likewise, try and implement a hypervisor without understanding the hardware you’re running on and/or simulating.

“I assure you knowing exactly how your CPU implements page tables is extremely relevant”

In a language, a page table could be thought of as a type. Execution time and memory use could be seen as values assigned by the semantics of the language.

The knowledge of the machine you simulate is knowledge of the application domain. From a programming perspective, the relevant details of the machine you are running on can be thought of as an augmentation to the basic grammar of its language. Is there something important about machines (unless they are part of the application domain) that can’t be abstracted as a value and attached to the language (what Robert calls giving a cost semantics to it)?

What knowledge about how the CPU implements page tables, not present in an augmented language, did you need for the aforementioned task? Knowing how it implements page tables seems to be a problem of semantics…

Current research by Guy Blelloch is concerned with precisely this: giving a cost semantics for a language that allows you to make useful predictions about locality (cache behavior) at the implementation level. It seems plausible to me that one can factor concerns, putting memory hierarchy effects into the “provable implementation” part of the setup, and providing a cost semantics that allows the programmer to understand the program’s interaction with memory hierarchies.

your second rule for type judgments does not hold for translucent sums… (M must be restricted to a value)

Such restrictions can be postulated in the general framework I’m sketching by adding the hypothesis “x_i value” for those variables (perhaps all) that must be bound to values. It doesn’t change my basic outline.

Bob,

I’m going to take your post seriously, at face value, and offer a different point of view.

The Map is Not the Territory

You appear to be not only a Trinitarian, but a Platonist: you appear to believe that the things we see in the real world are just imperfect manifestations of some ideal forms.

I propose a different view: programs and computations are things in the real world, and our job is to construct useful models of those things. Our models are useful insofar as they reflect the features of the world that are of interest, and provide predictions about the behavior of those real-world objects that are accurate within the bounds of our model.

This is the point of view of physics. A model in physics is always an approximate model of the real world, which yields accurate predictions if applied within the limits of the model. Thus Newtonian mechanics is a good model for objects that are not too small and not travelling too fast. If the objects are too small, you need quantum mechanics, and if they are travelling too fast you need relativistic mechanics. And if your objects are travelling too close to the beginning of the universe, then you need string theory or who-knows-what.

When we build a model, we must choose what aspects of nature we wish to model. We build a model to work in a certain domain and to make certain kinds of predictions. (As Will Clinger likes to put it: every good model is a lie. The art is in knowing what it’s lying about.)

An architect, for example, will use a variety of models to predict the behavior of the structure he or she is designing. He or she will make design drawings or even foam-core models to communicate issues of concern to the client. He/she will make detailed construction drawings to communicate issues of concern to the builder. The structural engineer will use a different set of models to predict whether the building will fall down. The seismological engineer will use a still different set of models. And during construction, somebody will X-ray the concrete and use yet another model to determine whether the concrete has set correctly.

Similarly, we can choose different models to reflect different aspects of computation, or to make either finer or coarser predictions about the behavior of a program. An un(i)typed model can express certain propositions, and therefore make certain predictions, about the behavior of a program; a simply-typed model can make different predictions. These predictions are in general finer than those made by the un(i)typed model, but at the expense of excluding some programs about which it can make no predictions at all. The same story applies to System F or your favorite dependent type theory.

Furthermore, other type systems and other program analyses make predictions that may be entirely different in kind from those suggested so far. You allude to this in your post with your mention of substructural logics.

So it seems entirely unfair to assert that a type system is, as you say, “a way to say what the programs are in the first place.”

I am not a philosopher but sometimes I play one on the Internet. Those with a better background in philosophy can set me straight.

I look forward to more discussion.

–Mitch

Hi Mitch, it seems to me that a big difference between computing and, say, physics is that in computing we can prove that our physical and hypothetical models are equivalent in many cases.

Assuming no defects in your machine, you can execute a certain program and get *exactly* the correct results, every time. (Let’s say the algorithm is deterministic.) If you send me your code, I can in theory deduce its output a priori. If I run it on my machine, the output will be the same. The “model” is reality.

One thing I don’t understand is how, as you say, “type systems and other program analyses make predictions that may be entirely different in kind from those suggested so far.” It seems to me that either two programs are provably equivalent, or they’re not. Could you explain?

Anyway, I’m no pro at philosophy, but my main point is that our field is very special in that the hypothetical model and the physical reality are basically equivalent — that’s pretty cool, and also means that this theorizing applies directly to real programs.

Well put. Herb Simon described CS as a “science of the artificial”.

I don’t see the disagreement. I struggled a lot with the wording in that post. Perhaps I should have said that logic *codifies* principles of reasoning, to make clear that it is not a once-and-for-all, finished classification. I believe that as we explore and discover new computational phenomena we should subject them to the rigors of being seen as logical, programming, and mathematical concepts in order to clarify them and isolate the core contribution (if any).

My point about typed and un(i)typed is simply this: many people seem to believe that untyped languages are somehow in opposition to typed languages, and that a type system is something that can be bolted on after the fact onto a so-called untyped language. Both of these claims are simply false. The only correct view is that computation comes to us in typed form, with some types being perhaps more interesting to some people than to others. One can choose to be myopic, and consider only one type, but I argue that this is idiotic, born of ideological attachments and plain old ignorance. It is much better to understand the whole context in which such languages arise, because then one sees that (a) there is no opposition, and (b) it is natural and productive to embrace the existence of more than one type.

In the real world, machines do not obey any “theoretical specification”. See for example one of the CS papers on the *actual* failure models of disk drives. Firmware bugs can do weird things…

The entire subfield of CS dealing with fault tolerance is concerned with these sorts of issues.

If you need to be concerned with your programs’ performance — even within an order of magnitude — this can be intractable to understand from the program source using traditional models. (E.g., nasty effects due to prefetch branch table thrashing.)

Shouldn’t that be M/y?

If programming is equivalent to proving, it is no longer programming, but something else, just as geometry is not land-surveying, but something else. I have no trouble with people wishing to study the something else, but I fail to see that it’s a good, never mind the only, preparation for a career with chain and theodolite.

CMU’s SCS is not in the business of training grunt programmers (at least, it wasn’t when I worked there). It teaches Computer Science, which is largely a branch of mathematics. If you want programmer training, go to an evening class. If you want to understand computation, study Computer Science.

In fact, if I remember from a class with Bob some years ago, he explained (perhaps tongue-firmly-in-cheek) that mathematics was really a branch of computer science. I can’t remember the explanation he gave for that though; it probably involved category theory.

climatecode-

Yes, CMU teaches computer science, not grunt programming. But be careful about your claims. CMU by its very structure recognizes that CS is not a subdiscipline of math, but rather a discipline that combines mathematics with engineering and other fields. One of the great things about SCS is that it is its own school, not part of the college of engineering or the college of arts & sciences. It is thus a place where great mathematicians can work with great engineers, psychologists, linguists, biologists, and many interdisciplinary researchers, all on CS-related topics.

Reading the historical Carnegie Plan for Education (still summarized in the CMU undergraduate catalog), you’ll find an emphasis on “fundamentals useful in later learning” but also “pride in educating students who display excellence in application–students who can do useful things with their learning.” It takes both, and I think our curriculum viewed as a whole strives to deliver both immediate competency and knowledge of enduring value.