Tuesday, October 20, 2009

Teaching

For some reason, I've been pretty down since the weekend, and yesterday's events did not improve anything. I had an exercise session in Software Verification, and the exercises were to be done with pen and paper, which suited me fine: I don't know much about separation logic and I was curious to see how convenient it is for solving day-to-day programming problems.

First disappointment: instead of presenting a collection of usable manipulation rules and theorems, the instructor gave us an intuitive notion of what the logic is about and expected that to be enough to solve the problems. This defeats the whole purpose of using a formal logic, and I equate it with nothing less than giving up on teaching the topic. To my mind, something good enough for the exam but unusable afterwards is a total failure of teaching.

Second disappointment: we had to write a program with a loop and prove its partial correctness. He mentioned techniques for weakening the postcondition to obtain a loop invariant, but he said he would not teach them since the students were unlikely to have to find an invariant at the exam. Instead, he suggested (with no alternative offered, we could say imposed) that the students execute their loop on paper and find, through guesswork, the relation that is preserved across iterations. This is at best a very clumsy way of finding a loop invariant, and I am sure that, for many students, it drove home the message that formal methods are no more than an academic exercise rather than something useful for dealing with hard problems. And it is not as if he did not know that they are very useful for the direct derivation of programs, or as if he thought them inefficient for problem solving. The only conclusion left is that he thinks the students are not up to it.

It is sad to have such poor esteem for people who came all this way to learn something. And I don't mean that I am the only victim of that practice (actually, I don't consider myself a victim at all, since I think lectures serve only administrative purposes; my true learning is done on my own time, with books and smart people); I think of the people who worked all through their bachelor's only to end up with this farce of a lecture. Even though I am aware that not everybody is as motivated as I am to learn new material, I would think the time spent there could be better spent doing something else.

I feel better already, having said that!
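For the record, the technique he skipped is not hard to illustrate. Here is a minimal sketch in Python (my own example, not one from the course): to sum a list, take the postcondition s = sum(a[0..n)) and weaken it by replacing the constant n with a fresh variable i; the weakened predicate is the loop invariant, checked here with runtime assertions rather than proved.

    def array_sum(a):
        # Postcondition: s == sum(a[0:len(a)]).
        # Invariant, obtained by weakening the postcondition
        # (replace the constant len(a) with a fresh variable i):
        #   s == sum(a[0:i])  and  0 <= i <= len(a)
        i, s = 0, 0
        while i < len(a):
            assert s == sum(a[:i]) and 0 <= i <= len(a)  # invariant holds on entry
            s += a[i]
            i += 1
        # Invariant plus negated guard (i == len(a)) give the postcondition.
        assert s == sum(a)
        return s

    assert array_sum([3, 1, 4]) == 8

The point is that the invariant is calculated from the postcondition, not guessed from traces of the loop.

Simon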

Monday, October 19, 2009

The LaTeX Syndrome

Reading a certain paper for my software engineering seminar, I felt nauseated by the appalling notation they chose for discussing the soundness of their system. My supervisor told me it was of no importance, but I still can't help thinking the notation could be improved a lot. They seem to have chosen their formalism to pack as much information as possible onto one page, and I am convinced they cannot have used it much with pen and paper, because it would have been much too heavy.

This leads me to the hypothetical LaTeX syndrome: I think they needed LaTeX to typeset such a notation and would not have been able to use it without LaTeX. In the design of a notation, it should be taken as a yardstick that the notation is usable with pen and paper, and that it is not a torture to use for any length of time. I would go one step further and say that I would like those who do not follow this principle to be condemned to use their formalism on paper for as long as it takes them to repent. I could also ask for a reward for the people who do abide by the principle: the joy of using their formalism and seeing their problems nicely stated and elegantly solved with it.

I am not making a statement against LaTeX here, at least not yet. What I am complaining about is people who use it without applying good judgement. As a reader, I found the bad formalism a serious obstacle to seeing the point it was meant to make. Fortunately for me, they did not use it much. I must admit that I now avoid LaTeX altogether, since most of my effort goes into LaTeX itself rather than into the problem at hand. I have started using rich text editors, which don't pretend to be smart. The big drawback is that they are harder to use with version control. For that reason, I'm thinking of switching to plain text but, for some reason, Mac OS does not make that easy.

Simon Hudon
Written after a bad day, October 19th 2009
Meilen

Tuesday, October 13, 2009

"For someone whose only tool is a hammer, every problem looks like a nail" (EWD 690)

This note is about paradigms. In short, I would define a paradigm as the belief that a particular abstraction or theory fits a certain context. In programming, for example, we can see the paradigm of object-oriented programming as the belief that focusing one's analysis and design efforts on the relations between different pieces of data is an efficient way to cope with variation in requirements. Likewise, applying game theory to economic behavior gives rise to the paradigm we might call "rational choice".

I think a paradigm is a very useful belief to have because it allows one to deal in simple terms with problems that might otherwise get complicated. In my opinion, a healthy use of a paradigm is like controlled schizophrenia: when you deal with a problem within the paradigm's realm of applicability, it is useful and often necessary to assume its postulates without question. This is very efficient when dealing with related problems. However, one should be able to defend the use of the paradigm and, more specifically, should consciously stop believing in the postulates while doing so.

I find it frustrating, when discussing with somebody, to realize not only that I can identify the paradigm he adopts (which is not bad in itself) but that he is unable to step out of it even after several direct attacks on his assumptions. This happens both in technical discussions about computing and in the humanities. On the other hand, it is also fruitless to discuss hard problems with somebody who can't assume the postulates once they have been defended. In technical terms, the latter might be somebody who keeps reasoning about a method call in terms of the instructions that get executed instead of reasoning in terms of an abstract specification. In the first case, the person is hardly schizophrenic at all (what a fraud!) and in the second the schizophrenia is uncontrolled, so the person's reasoning is all over the place: he can't focus on one aspect of his problem at a time because he can't see a clear separation between the two.

As a second example of uncontrolled schizophrenia, I see some people rely strongly on set theory and have to understand everything in terms of sets. Some of them, for example, can't bear to use sequences in their reasoning because the proofs with sequences are so complicated and messy. This is a recent conviction of mine, but I keep seeing examples where having a clear and simple abstraction would be much more effective. In the case of sequences, we need a set of postulates that describe sequences as objects in their own right and do not drag sets or anything else into the picture unless a useful relationship can be defined. And even then, the relationship is not central to understanding the abstraction. For soundness purposes, it might be useful to translate the postulates of sequence theory into set theory, but that is metamathematics; it does not affect the use of sequences. A sketch of what I mean follows.
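Here is a minimal sketch (my own illustration, in Python, with made-up names): a handful of postulates about sequences stated entirely in terms of empty, cons, head and tail, with no mention of a set-theoretic encoding. The checks below merely test the postulates on sample data rather than prove them.

    # Sequences as objects in their own right; the representation is hidden.
    def empty():
        return ()

    def cons(x, s):
        return (x,) + s

    def head(s):
        return s[0]

    def tail(s):
        return s[1:]

    # Postulates, checked on sample data:
    s = cons(1, cons(2, empty()))
    assert head(cons(0, s)) == 0        # head(cons(x, s)) = x
    assert tail(cons(0, s)) == s        # tail(cons(x, s)) = s
    assert cons(head(s), tail(s)) == s  # for nonempty s
    assert cons(0, s) != empty()        # cons(x, s) differs from empty

Nothing in the postulates forces the tuple representation; any model satisfying them would do, which is exactly the point.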

Sunday, October 11, 2009

What is Abstraction?

I've been talking about the sin of mistaking vagueness for abstraction for a while, but I haven't stopped to define more precisely what abstraction means. So far, the best I have done is to say that it is a simplification of an object and that being abstract does not preclude being absolutely precise. Let's try to be more precise about what constitutes an abstraction and what does not.

First, an abstraction is a simplification, but a simplification does not automatically qualify as an abstraction. So what is missing? If we wanted a completely precise expression, we would say that A is an abstraction of B with respect to a set of properties. That set of properties is usually taken as universal in the context in which the abstraction is used. For software engineers, for example, it is useful to see a specification as an abstraction of a set of implementations with respect to functional properties. It means that if, observing the effects of a particular implementation, we can always predict them by reading the specification and never be surprised, then the implementation is correct with respect to that specification. For somebody working on algorithms, performance matters more, so it may be useful to have an abstraction with respect to running time, even one that does not preserve functional properties.

It is in this respect that I became convinced that having a small theoretical kernel for a modeling language (in Abrial's sense) is not, in itself, appropriate, useful, or abstract. In Event-B, for instance, everything is understood in the context of set theory, and the objection I met was that using sequences is too difficult because of the complications of the underlying set-theoretic model. The key here is that the objection betrays a failure to create a level of abstraction at which sequences are understood for what they provide. Providing a set-theoretic model for them does not help in using them; it helps in convincing someone that the set of axioms describing sequences is sound. After that, we must understand sequences on their own. This is why, I believe, abstract data types are very important for the use of formal methods.
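As a small sketch of the specification example (my own, in Python, with made-up names): the specification below abstracts every sorting routine with respect to functional properties, so a reader who checks only meets_spec cannot tell two correct implementations apart, even though their running times differ.

    # Specification: out is a sorted permutation of inp.
    def meets_spec(inp, out):
        return (sorted(inp) == sorted(out)
                and all(out[i] <= out[i + 1] for i in range(len(out) - 1)))

    # One implementation among many: insertion sort.
    def insertion_sort(a):
        r = []
        for x in a:
            i = 0
            while i < len(r) and r[i] <= x:
                i += 1
            r.insert(i, x)
        return r

    data = [3, 1, 2, 1]
    assert meets_spec(data, insertion_sort(data))  # slow implementation
    assert meets_spec(data, sorted(data))          # fast built-in; same abstraction

Simon Hudon
October 11th
Meilen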

Modeling vs Programming vs Coding

I usually adhere to Dijkstra's and Morgan's view of programming: it is the act of passing from a precise specification to an executable program. I have had some contact recently with the Chair of Programming Methodology, and I expected them to have a view of what constitutes programming similar to mine. I was a little disappointed to see that their view was closer to "writing down executable code" than to "reflecting on the design of a suitable implementation for a given specification". What they call programming, I would rather call coding.

Before that, I had a short talk with Dr. Jean-Raymond Abrial, founder of the B and Event-B formal methods, during which I saw that he shunned programming. I came to realize, however, that what he called programming was also coding. On the other hand, he called the activity of central importance to software engineers "modeling", a term I have been trying to avoid for some time now because of its central role in methods like UML, which I find eminently ignorable for software engineers; UML simply seems to stem from confusing vagueness with abstraction, and I see no reason to accept vague documentation of a design.

I still prefer the word programming for the formal derivation of implementations, but I increasingly feel that I will have to find a substitute because of the lexicon associated with it. Programming is associated with programs, which suggests routines and sequential execution, whereas I believe that sequential versus parallel execution is an implementation choice that should not receive half the importance it is usually given in the development process. To be absolutely unambiguous, I could rename the activity "designing a software system" (taking the liberty of replacing "software" with "discrete" if necessary), but that might be a bit too long. And design is an activity so widespread that it does not feel appropriate to use the word on its own when talking about programming. For that purpose I could fall back on "modeling", but then I would run the risk of having to clarify all the time that I have nothing to do with UML and its bubbles-and-arrows techniques. Ah, well! I'll keep thinking about it. In the meantime, I'll say programming for short, and design of software systems when I need to be unambiguous.

Simon Hudon
October 11th, 2009
Meilen