Monday, May 9, 2011

A Lecture Supported by an Automated Theorem Prover

UPDATED: added note [1] on mutual implication

Last week, on Monday, I attended the guest lecture that Prof. Tobias Nipkow from the Technical University of Munich (TUM) gave at ETH. I had heard of him before because of his work on the Isabelle theorem prover. An interesting detail before I start: he refers to it as a proof assistant.

I didn't expect much of this talk because of my reluctance to rely on automatic provers to do my job and because I expected a very technical talk about proof strategies implemented or to be implemented in Isabelle. Since such issues concern only people involved in the construction of theorem provers, and I am not one of them, I didn't think it would appeal to me.

I was pleasantly surprised that he had chosen to talk about a course he had recently started giving on the topic of semantics. The course was noteworthy because formal methods and an automated prover were used as teaching vehicles. I was pleased to see that he had decided to use formal methods as a tool for understanding the topic and that he opted to spend more time on semantics, the subject matter, and less on logic and formal methods.

It is my considered opinion, however, that a prerequisite course on the design of formal proofs would be most useful, even necessary. I draw this conclusion because I believe that acquiring effective techniques for designing elegant proofs can help tremendously in making hard problems easier. But, since he didn't want to teach logic, he gave very little attention to the design of proofs and adopted "validity of the proofs" as the goal the students had to reach in their assignments. It was asked --by someone else before I could ask it-- what importance he gave to the style of the submitted proofs. He said none, because some (in his opinion) very good students had an awful style and he didn't want to penalize them for it. He considered that, by submitting a correct proof, a student had shown that he understood. I would say that, in doing so, he was doing his students a disservice. However, I can now better understand his position because the underlying assumption is very popular: since words like style and elegance have a very strong esthetic connotation, it automatically becomes whimsical to judge them or to encourage students to improve upon them. After all, we're here to do science, not to make dresses!

*                         *
*

Even when it has been successfully argued just how useful mathematical elegance can be, people keep opposing its use as a yardstick on the grounds that it is too subjective and that enforcing one standard would be arbitrary.

It turns out that not only are elegance and style very useful, but they can be analyzed and criticized quite objectively. This was Dijkstra's preferred subject in the second part of his career and it appears almost everywhere in his writing --see EWD619, "Essays on the nature and role of mathematical elegance", for such an exposition [0]--.

The usefulness of an elegant style comes from the fact that it yields simple and clear proofs requiring very little mental work from the reader (which is not something to be underestimated) but, more importantly, it allows the writer of a proof to be economical in the use of his reasoning abilities. This can make the difference between a problem which is impossible to solve and one which is easily solved. On the other hand, in a course such as the one Nipkow has designed, the most interesting things for students to learn are not the individual theorems that they proved but the techniques that they used to find a proof. If the techniques are not taught, it can be expected that the skills the students acquire will be of much less use when they are confronted with different problems.

What an elegant style boils down to is, to put it in Dijkstra's words, the avoidance of complexity generators and the separation of one's concerns. He also argued that concision is an effective yardstick by which to evaluate the application of those techniques. Indeed, concision is effectively achieved by both separating one's concerns and avoiding complexity generators. The best-known complexity generators are case analyses, proofs of logical equivalence by mutual implication, and inappropriate naming. The first two are legitimate proof techniques but their application doubles the burden of the proof. This doesn't mean that they should never be used but rather that they should be avoided unless the proofs of the required lemmata differ significantly. [1] I say significantly to stress that, in some cases, they can differ in small ways and, upon closer scrutiny, the differences can be abstracted from.
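To see the doubling at work, here is a minimal sketch in Lean; the choice of tool and the library lemma name 'and_imp' are my own, purely for illustration, and any proof assistant would do:

    import Mathlib

    -- Proving an equivalence by mutual implication: two separate arguments,
    -- each of which must be designed, written and checked on its own. Note
    -- how the two halves are near-duplicates of one another, exactly the
    -- situation in which the difference can be abstracted away.
    example (p q r : Prop) : (p ∧ q → r) ↔ (p → q → r) := by
      constructor
      · intro h hp hq
        exact h ⟨hp, hq⟩
      · intro h hpq
        exact h hpq.1 hpq.2

    -- The same equivalence obtained in one step, by appealing to a fact
    -- already in the library: neither direction has to be written out.
    example (p q r : Prop) : (p ∧ q → r) ↔ (p → q → r) := and_imp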

The issue of naming is also closely related to the separation of one's concerns but, being a tricky issue, I would rather point the reader to van Gasteren's book "On the Shape of Mathematical Arguments", more precisely to chapter 15, "On Naming", which covers the subject very nicely. Dijkstra, who was van Gasteren's supervisor, wrote the chapter with her, which earned it an EWD number: 958 [0]. This allows me to skip directly to the matter of separation of concerns.

*                         *
*

While debating concision and Nipkow's lecture with my supervisor, something kept coming up in his arguments. I was favoring short formal proofs and he kept asking: "What if a student doesn't want to call 'auto' [the Isabelle function which can take care of the details of a proof step] but wants to go into the details to understand them?" First of all, I have to point out that being the input for an automated tool doesn't relieve a proof of the obligation to be clear. [2] Unlike Nipkow, I hold that a proof must include intermediate formulae accompanied by a hint of how one proceeds to find them. This would correspond to a combination of what he calls a proof script --a series of hints without intermediate formulae-- and a structured proof --a series of intermediate formulae with little to no hints; this is what he prefers to use--. In that respect, 'auto' is no better than the hint "trivially, one sees ...". The choice of how early one uses 'auto' is basically a matter of decomposition. It is unrelated to the peculiarities of the prover; it depends on how clear (to a human reader) and concise the proof is without the details. What my supervisor and Nipkow call "using auto later in the proof" would be, in a language independent of the use of an automated prover, including more details. If the proof is clear and concise without those details, they don't belong in the body of the proof. It is that simple. One could, however, include them in the proof of a lemma invoked in one (or many) proof steps of the main proof.
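For readers who have not seen the two styles side by side, here is a rough analogue sketched in Lean rather than Isabelle, simply because that is what I can sketch here; it is not taken from Nipkow's course, 'simp' merely plays the role that 'auto' plays above, and the statement proved is an arbitrary small fact about list reversal:

    import Mathlib

    -- "Proof script" style: one opaque appeal to automation. No intermediate
    -- formulae are visible, so the reader learns nothing about the argument.
    example (xs : List Nat) : xs.reverse.reverse = xs := by
      simp

    -- "Structured" style: the intermediate formulae are spelled out, but each
    -- individual step is still justified by an automation call rather than by
    -- a hint naming the facts being used.
    example (xs : List Nat) : xs.reverse.reverse = xs := by
      induction xs with
      | nil => rfl
      | cons x xs ih =>
        calc (x :: xs).reverse.reverse = (xs.reverse ++ [x]).reverse := by simp
          _ = x :: xs.reverse.reverse := by simp
          _ = x :: xs := by rw [ih]

The combination I am advocating would keep the intermediate formulae of the second proof but replace each 'simp' with a mention of the specific facts about 'reverse' and '++' that justify the step.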

Moving the details into a lemma of their own doesn't destroy concision, because the proof of said lemma can be presented beside that of the main theorem rather than intertwined with it. The difference is that, while one reads a proof, each step must clearly take the reader closer to the goal. Any digression should be postponed so as not to distract attention from the goal. One good indication that a lemma is needed is when many steps of a proof are concerned with a different subject than the goal. For instance, when proving a theorem in parsing theory, if many successive steps are concerned with predicate calculus, the attention is taken away from parsing. Instead, it is very judicious to carry out the whole predicate calculation in one step and label it "predicate calculus". Nothing prevents the proof that pertains to predicate calculus from being presented later on, especially if it is not an easy proof.
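Here is the shape I mean, again sketched in Lean; the subject matter (a small inequality about real numbers) and the names 'square_expand' and 'two_mul_le_sq_add_sq' are mine, invented only for the illustration:

    import Mathlib

    -- The algebraic digression, stated and proved beside the main theorem
    -- rather than in the middle of it.
    lemma square_expand (a b : ℝ) : (a - b)^2 = a^2 - 2*a*b + b^2 := by
      ring

    -- The main proof invokes the digression in a single, labeled step; the
    -- algebra does not interrupt the argument about the inequality.
    theorem two_mul_le_sq_add_sq (a b : ℝ) : 2*a*b ≤ a^2 + b^2 := by
      have h : 0 ≤ (a - b)^2 := sq_nonneg (a - b)
      calc 2*a*b = a^2 + b^2 - (a^2 - 2*a*b + b^2) := by ring
        _ = a^2 + b^2 - (a - b)^2 := by rw [square_expand]
        _ ≤ a^2 + b^2 := by linarith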

The important point here is that sticking to the subject at hand doesn't mean forgetting that there are other problems that need one's attention. It means dealing with one problem at a time, each time momentarily setting aside the other problems. This is exactly what modularity is about.

Furthermore, with an automatic prover, nothing prevents someone from using 'auto' to commit the sin of omission; details that would make a step clear are then missing. It is then a matter of style to judge how much should be added. This reinforces my point that good style should be taught, because clarity is the primary goal of proofs [3].

With respect to such a tool, I would welcome one where keywords like 'auto' are replaced by catchwords like 'predicate calculus' to hint at the existence of a simple proof --at most five steps-- in predicate calculus --in this case-- that supports the designated step. More often, we could use invocations like 'theorem 7' (or '(7)' for short) or 'persistence rule' as a way of invoking a very straightforward application of a referenced theorem. The reader can then see clearly what is going on and, for the prover, the problem is very simple: it looks for a simple proof. If no proof of at most five steps exists, the search fails. More importantly: the user should have foreseen it. The prover should never be used to sweep problems under the carpet.

As with the type system of a programming language, it should be easy for the human reader to see that a given proof step is going to be accepted. It is by being predictable that automatic tools are useful, not by working magic.

*                         *
*

By way of conclusion, I move on to another aspect of Nipkow's talk. He said that applying formal proofs to the teaching of computer science is especially useful in those subjects where the formalization is close to people's "intuition". In the rest of the subjects, it is a bad idea. I say this is a drawback of his approach, not of formal proofs. If you regard formal proofs merely as an input format for the mechanized treatment of intuitive arguments, it seems inevitable to run into that problem. However, formalism can be used for purposes which have nothing to do with mechanization: the purposes of expressing and understanding precise and general statements. If it is used in this capacity, formalism allows a (human) reasoner to take shortcuts which have no counterpart in intuitive arguments. It is one of the strengths of properly designed formalisms that you can use them to attain results which are beyond intuitive reasoning.

Case in point: the topics where Nipkow says formal proofs are not an appropriate vehicle for teaching are those where it would be crucial to rely on formalisms that are not merely the translation of intuitive reasoning. To use those, we would have to stop relying on our intuition and acquire the techniques of effective formal reasoning. This leads me back to my first point. 'Effective' means that we're not interested in finding just about any proof: we are looking for a simple and elegant one, so that problems whose solution would otherwise be beyond our abilities can admit a simple solution.

In other words, striving for simplicity is not a purist's concern so much as a pragmatic preoccupation: it allows us to solve, with as little effort as possible, problems that are beyond the unaided mind.

Simon Hudon
ETH Zurich
May 11th, 2011

[0] For the EWD texts, see http://www.cs.utexas.edu/users/EWD/

[1] For those who are used to using deductive systems for formal reasoning, the question "what alternative do we have to mutual implication?" might have come up. The answer is that equivalence is a special case of equality and should be treated as such. That is to say, its most straightforward use is "substituting equals for equals", also known as Leibniz's rule, and this is also the most straightforward way to prove an equivalence. Calling equivalence "bidirectional implication" is a horrible deformation. It is analogous to calling equality of numbers "bidirectional inequality": it hints at one way of proving it, using implication, but it fails to distinguish between the shape of one possible proof and the theorem itself. Indeed, I notice that some people immediately think of mutual implication when they see an equivalence. It's a shame.
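One last Lean sketch to illustrate the difference; it is mine, not Nipkow's, and 'not_or' and 'not_not' are what I take to be the library's names for the equivalences being substituted:

    import Mathlib

    -- By mutual implication: two implications, each needing its own argument
    -- (one direction even needs a separate classical step).
    example (p q : Prop) : ¬(p ∨ ¬q) ↔ ¬p ∧ q := by
      constructor
      · intro h
        exact ⟨fun hp => h (Or.inl hp), Classical.byContradiction fun hq => h (Or.inr hq)⟩
      · rintro ⟨hnp, hq⟩ (hp | hnq)
        · exact hnp hp
        · exact hnq hq

    -- Treating equivalence as equality: each step substitutes a known
    -- equivalence for an equal subformula; no direction is ever split off.
    example (p q : Prop) : ¬(p ∨ ¬q) ↔ ¬p ∧ q := by
      rw [not_or, not_not]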

[2] In this respect, theorem provers seem to be more primitive than our modern programming languages. Whereas programming languages become more and more independent of their implementation in order to embody abstractions (for instance, Java, unlike C, has no "register" keyword for variables), the proof languages of automated provers shamelessly include a lot of specific commands for what I shall call "sweeping the rest of the problem under the rug". In proofs explicitly constructed for human readers, those would be replaced by vague expressions like "well, you know..." followed, if presented in a talk, by a waving of the hands intended to say "anyone but an idiot will get this".

[3] This goes against what seems to be a school of thought that views formal proofs as the input of tools like theorem provers. It is easy for people of that school to draw the fallacious analogy with assembler programming. The important difference is that a proof designed with style can be the vehicle of one's understanding. Since mechanisms like abstraction are routinely applied to make proofs as simple as they can be, if one wants to understand an obscure and counter-intuitive theorem, an elegant formal proof is very likely the best way to do it. On the other hand, assembler is not a language which helps one understand an algorithm. It is clearly an input format for a microprocessor and people should treat it as such.

Special thanks to my brother Francois for applying his surgical intellect to the dissection of the objections made against the notions of elegance and style, and to Olivier who also helped me improve this text.