The Power of Negative Thinking

Exception tracking is a well-known tar baby of type system design.  After all, if expressions can have two sorts of result, why shouldn’t the type say something about them both?  Languages such as CLU, FX, and Java, to name three, provide “throws” or “raises” clauses in the types of procedures that specify an upper bound on the exceptions that can occur when they are called.  It all seems natural and easy, but somehow the idea never really works very well.  One culprit is any form of higher-order programming, which is inherent in object-oriented and functional languages alike.  Handling the indirection requires more complex concepts, such as effect polymorphism, to make things work reasonably well.  Or untracked exceptions are admitted to avoid the need to track them at all.  Somehow such an appealing idea proves stickier to realize than one might expect.  But why?

A piece of the puzzle was put into place by Xavier Leroy and François Pessaux in their paper on tracking uncaught exceptions.  Their idea was to use type-based methods to track uncaught exceptions, but to move the clever typing techniques required out of the programming language itself and into a separate analysis tool.  They make effective use of the powerful concept of row polymorphism, introduced by Didier Rémy for typing records and variants in various dialects of Caml.  Moving exception tracking out of the language and into a verification tool is the decisive move, because it liberates the analyzer from any constraints that may be essential at the language level.

But why track uncaught exceptions rather than caught exceptions?  From a purely methodological viewpoint it seems more important to know that a certain code fragment cannot raise certain exceptions (such as the \texttt{match} exception in ML, which arises when a value matches no pattern in a case analysis).  In a closed world in which all of the possible exceptions are known, tracking positive information about which exceptions might be raised amounts to the same as tracking which exceptions cannot be raised: simply subtract the raised set from the entire set.  As long as the raised set is an upper bound on the exceptions that might be raised, the difference is a lower bound on the set of exceptions that cannot be raised.  Such conservative approximations are necessary because any non-trivial behavioral property of a program is undecidable, and hence requires proof.  In practice this means that stronger invariants must be maintained than just the exception information, so that one may prove, for example, that the values passed to a pattern match are limited to those that actually do satisfy some clause of an inexhaustive match.
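
The subtraction argument can be made precise set-theoretically.  Writing \mathcal{E} for the closed world of all exceptions, R for the set actually raised, and U for the reported upper bound, we have

```latex
R \subseteq U \implies \mathcal{E} \setminus U \subseteq \mathcal{E} \setminus R
```

so the complement of the reported upper bound is a sound under-approximation of the set of exceptions that cannot be raised.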

How realistic is the closed world assumption?  For it to hold seems to require a whole-program analysis, and is therefore non-modular, a risky premise in today’s world.  Even on a whole-program basis exceptions must be static in the sense that, even if they are scoped, they may in principle be declared globally, after suitable renaming to avoid collisions.  The global declarations collectively determine the whole “world” from which positive exception tracking information may be subtracted to obtain negative exception information.  But in languages that admit multiple instantiations of modules, such as ML functors, static exceptions are not sufficient (each instance should introduce a distinct exception).  Instead, static exceptions must be replaced by dynamic exceptions that are allocated at initialization time, or even run-time, to ensure that no collisions can occur among the instances.  At that point we have an open world of exceptions, one in which there are exceptions that may be raised, but which cannot be named in any form of type that seeks to provide an upper bound on the possible uncaught exceptions that may arise.
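
The functor point can be made concrete.  Here is a sketch in OCaml (rather than SML; the module names Make, A, and B are invented for illustration): each application of a generative functor allocates a distinct exception, so no static set of exception names can cover all instances.

```ocaml
(* Each application of the generative functor Make allocates a
   distinct exception E, even though both instances use the same
   declaration. *)
module Make () = struct
  exception E
  let fail () = raise E
end

module A = Make ()
module B = Make ()

(* B.E does not catch the exception raised by A.fail: the two
   instances introduce distinct exceptions. *)
let caught_own   = (try A.fail () with A.E -> true)
let caught_other = (try A.fail () with B.E -> true | A.E -> false)
```
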

For example, consider the ML expression

  let exception X in raise X end

If one were to use positive exception tracking, what would one say about the expression as a whole?  It can, in fact it does, raise the exception \texttt{X}, yet this fact is unspeakable outside of the scope of the declaration.   If a tracker does not account for this fact, it is unsound in the sense that the uncaught exceptions no longer provide an upper bound on what may be raised.  One maneuver, used in Java, for example, is to admit a class of untracked exceptions about which no static information is maintained.  This is useful, because it allows one to track those exceptions that can be tracked (by the Java type system) and to not track those that cannot.

In an open world (which includes Java, because exceptions are a form of object) positive exception tracking becomes infeasible because there is no way to name the exceptions that might be tracked.  In the above example the exception \textsf{X} is actually a bound variable bound to a reference to an exception constructor.  The name of the bound variable ought not matter, so it is not even clear what the exception raised should be called.  (It is amusing to see the messages generated by various ML compilers when reporting uncaught exceptions.  The information they provide is helpful, certainly, but is usually, strictly speaking, meaningless, involving identifiers that are not in scope.)
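
The unspeakability can be seen directly in a small OCaml sketch (the name f is invented here): outside the function, the locally declared exception has no name, so only a catch-all handler can intercept it.

```ocaml
(* The exception X is local to the body of f; its scope ends there. *)
let f () = let exception X in raise X

(* A handler `with X -> ...` would be ill-formed at this point, since
   X is not in scope; a wildcard is the only way to catch it. *)
let caught = try f (); false with _ -> true
```
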

The methodological considerations mentioned earlier suggest a way around this difficulty.  Rather than attempt to track those exceptions that might be raised, instead track the exceptions that cannot be raised.  In the above example there is nothing to say about \texttt{X} not being raised, because it is being raised, so we’re off the hook there.  The “dual” example

  let exception X in 0 end

illustrates the power of negative thinking.  The body of the \textsf{let} does not raise the exception bound to \textsf{X}, and this may be recorded in a type that makes sense within the scope of \textsf{X}.  The crucial point is that when exiting its scope it is sound to drop mention of this information in a type for the entire expression.  Information is lost, but the analysis is sound.  In contrast there is no way to drop positive information without losing soundness, as the first example shows.

One way to think about the situation is in terms of type refinements, which express properties of the behavior of expressions of a type.  To see this most clearly it is useful to separate the exception mechanism into two parts, the control part and the data part.  The control aspect is essentially just a formulation of error-passing style, in which every expression has either a normal return of a specified type, or an exceptional return of the type associated to all exceptions.  (Nick Benton and Andrew Kennedy nicely formulated this view of exceptions as an extension of the concept of a monad.)
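
A minimal OCaml sketch of this error-passing formulation of the control aspect (the names comp, exn_data, and the operations below are invented here; Benton and Kennedy's development is a semantic account of an exception monad):

```ocaml
(* Every computation yields either a normal return or an exceptional
   return.  exn_data is a placeholder for the data aspect (the type
   of all exception values). *)
type exn_data = string
type 'a comp = Return of 'a | Raise of exn_data

let return x = Return x

(* bind propagates an exceptional return past the rest of the computation *)
let bind m f = match m with
  | Return x -> f x
  | Raise e  -> Raise e

(* handle intercepts an exceptional return *)
let handle m h = match m with
  | Return x -> Return x
  | Raise e  -> h e

let propagated = bind (Raise "X") (fun n -> return (n + 1))
let recovered  = handle propagated (fun _ -> return 0)
```
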

The data aspect is, for dynamic exceptions, the type of dynamically classified values, which is written \textsf{clsfd} in PFPL.  Think of it as an open-ended sum in which one can dynamically generate new classifiers (aka summands, injections, constructors, exceptions, channels, …) that carry a value of a specified type.  According to this view the exception \textsf{X} is bound to a dynamically-generated classifier carrying a value of unit type.  (Classifier allocation is a storage effect, so that the data aspect necessarily involves effects, whereas the control aspect may, and for reasons of parallelism ought to, be taken as pure.)  Exception constructors are used to make values of type \textsf{clsfd}, which are passed to handlers that can deconstruct those values by pattern matching.
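
One way to approximate \textsf{clsfd} concretely is with OCaml's extensible variants (the names clsfd, X, Msg, and describe are invented for this sketch); each += plays the role of generating a new classifier carrying a value of a specified type.

```ocaml
(* An open-ended sum of classified values. *)
type clsfd = ..
type clsfd += X of unit        (* like `exception X`, carrying unit *)
type clsfd += Msg of string    (* a classifier carrying a string *)

(* Deconstruction is by pattern matching; because the sum is
   open-ended, a catch-all clause is unavoidable. *)
let describe : clsfd -> string = function
  | X ()  -> "class X"
  | Msg s -> "class Msg carrying " ^ s
  | _     -> "some other class"
```
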

Type refinements come into play as a means of tracking the class of a classified value.  For the purposes of exception tracking, the crucial refinements of the type \textsf{clsfd} are the positive refinement, \textsf{a}\cdot, and the negative refinement, \overline{\textsf{a}\cdot}, which specify that a classified value is, or is not, of class \textsf{a}.  Positive exception tracking reduces to maintaining invariants expressed by a disjunction of positive refinements; negative exception tracking reduces to maintaining invariants expressed by a conjunction of negative refinements.  Revisiting the logic of exception tracking, the key is that the entailment

\overline{\mathsf{a}_1\cdot}\wedge \cdots\wedge \overline{\mathsf{a}_n\cdot} \leq \overline{\mathsf{a}_1\cdot} \wedge \cdots \wedge \overline{\mathsf{a}_{n-1}\cdot}

is valid, whereas the “entailment”

\mathsf{a}_1\cdot \vee \cdots \vee \mathsf{a}_n\cdot \leq \mathsf{a}_1\cdot\vee \cdots \vee \mathsf{a}_{n-1}\cdot

is not.  Thus, in the negative setting we may get ourselves out of the scope of an exception by weakening the refinement, an illustration of the power of negative thinking.


11 Responses to The Power of Negative Thinking

  1. morrisett says:

    In the program analysis world, one would give static names to sets of dynamic exception values, and then track those static names. For instance, if we viewed an exception declaration as consisting of a (non-alpha-varying, constant) name/string component, and a normal variable, then the analysis could proceed by using the names, and not the variables. The interpretation of “throws {N1,…,Nn}” would be an expression that may throw any exception value v, generated by declarations labelled N1 or … or Nn.

    Of course, the rules would have to respect those abstractions. For instance, catching just one exn in the set named by N does not mean you can remove N (as some other exn in N might be thrown).

    Similar things are done for pointer analysis where we, for instance, use the program location of the allocation site as a “name”, and use that name to represent the set of all dynamic pointers allocated at that site.

  2. fpoling says:

    In view of that, what is your opinion of the PureScript type system, which allows one to add and *subtract* effects from a computation, thus typing throw/catch nicely?

    In the example in [1] the throwError function adds the Error effect to the computation and catchError subtracts it from the set of all effects. Thus the compiler can prove that after catchError the exceptions are no longer thrown and the result can be passed to pure code.

    [1] –

    • Sounds good, but is it sound? Subtracting positive information can be unsound, as I’ve illustrated.

    • pyon says:

      Sorry, fpoling, this is more of a response to Robert Harper’s objection than to your original comment.

      I think you could have a distinction between “speakable vs. unspeakable effects”, where only speakable effects can be handled (and thus subtracted). Unspeakable effects would “irreparably taint” a computation.

      However, in ML as it currently exists, we have the problem that it is possible to write a single exception handler that catches everything. (Which, at least in my opinion, is not a terribly good idea.)

    • Exception handlers are instances of pattern matching against values of type exn, which is open-ended. It is not possible to avoid a catch-all handler for such a type.

    • pyon says:

      Then the open-ended exn type is itself the problem. You said yourself in a previous post that exceptions are shared secrets between a raiser and a handler, an idea with which I completely agree. However, for this to really work, we must make sure that exceptions won’t be caught by computations that can’t properly handle them. Sadly, in ML, raising exceptions isn’t sharing a secret with another interested party. It’s leaving a message in a bottle, tossing the bottle into a river, and hoping that whoever finds the bottle and reads the message is capable of understanding it.

      The way Bauer and Pretnar’s Eff language handles effects seems to me like the Right Way ™ to do it: exceptions are dynamically constructed effects (as in ML), and they can only be caught by those aware of them (unlike ML).

    • They cannot be intercepted in any meaningful way; they are secrets that cannot be decoded. Open-endedness is essential to this property; it cannot be done away with.

    • fpoling says:

      Currently there is no proof that the PureScript type system is sound. However, it is based on the polymorphic effects of Koka [1]. That is sound and also allows one to have a private exception-like effect that can only be caught by a caller who has access to the type.

      The drawback is that between the throw point and the catch all computations have to have a polymorphic effect type like ∀ effect_set: , but in many cases the compiler can infer this.

      [1] –

  3. You argue that tracking which exceptions are certainly *not* raised is easier than tracking which exceptions may be raised. While that may be correct, I think the usefulness should be debated. What kind of user feedback does either analysis allow? Tracking the exceptions which are not raised allows one to warn the programmer that a specific exception-catch clause is useless/redundant; tracking exceptions which may be raised allows one to warn the programmer that a given expression may not terminate because of an exception. I personally feel that the latter report is much more useful and important in actual practice than the former; it is about “not going wrong”.

    I think the right thing to do in your local-exception example (if you track may-be-raised exceptions) is to reject the program at type-checking time, exactly as we do when an existential type variable leaks its scope. This is not harder to implement than the kind of tracking you perform (when an exception goes out of scope, fail instead of forgetting about it), but it certainly puts more pressure on the precision of the analysis; if it claims that local exceptions may escape too often, it makes them unusable. But that is a fair price to pay for massively more useful type-level feedback. Maybe an explicit escape mechanism is warranted.

    Of course that doesn’t undermine the value of looking at both positive and negative information in the general case. I’m always curious about the work that tries not to disprove that a program may fail, but to prove that it must fail on some outputs (e.g. “A falsification view of success typing”). I think that integrating both views in a formal system may provide nice benefits, such as keeping precision on one side for the programming constructs that are difficult to handle for the other. When done properly, both directions are

    • pyon says:

      I have an example of the usefulness of tracking which exceptions are not raised. Let me describe my use case.

      I am currently working on “dissecting” purely functional data structures, so that their common aspects can be factored out and put into a single reusable component. For example, instead of making a red-black tree functor and an AVL tree functor, both parameterized by a totally ordered type, I would implement a single ordered set functor, parameterized by a totally ordered type and a tree balancing strategy. Of course, red-black and AVL are examples of such balancing strategies.

      The signature for a balancing strategy exposes an exception that is only raised when the internal balance invariant of a tree is violated. However, the only direct consumer of these balancing strategies, namely, the ordered set functor, is itself supposed to *never* trigger this exception. I would very much like to make the types, both in the balancing strategies and in the ordered set functor, precise enough to let me express the fact that this exception shall never be raised. Or, even better, precise enough to render partial functions unnecessary.
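
A hypothetical OCaml sketch of this design (all names here, BALANCE, Unbalanced, Noop, OrderedSet, are invented, and a list stands in for the tree type): the strategy exposes an exception for invariant violations, and a correct client never triggers it.

```ocaml
module type BALANCE = sig
  exception Unbalanced
  (* raises Unbalanced only if the balance invariant is violated *)
  val rebalance : 'a list -> 'a list
end

module Noop : BALANCE = struct
  exception Unbalanced
  let rebalance t = t    (* trivially preserves the invariant *)
end

module OrderedSet (B : BALANCE) = struct
  (* a correct implementation never reaches a state in which
     B.rebalance would raise B.Unbalanced *)
  let insert x s = B.rebalance (x :: s)
end

module S = OrderedSet (Noop)
let example = S.insert 1 (S.insert 2 [])
```
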

      My code is here:
