Monday, May 18, 2009

SH48 - Weakening Preconditions

I remember an instant messaging conversation I had with Prof. Jonathan Ostroff (YorkU) in which he said he was increasingly buying into the view of preconditions as waiting conditions, as in Event-B and SCOOP. I objected that, obviously, unlike in Event-B, we cannot hold the position that preconditions must be strengthened. It seemed to me some kind of law of nature and common sense that only postconditions can be strengthened; preconditions can only be weakened. But what if that does not fit the essence of object-oriented programming? Certainly, it is acceptable for the precondition of a routine to be weakened in the process of refinement. Since features of classes are implemented as routines (whenever they take arguments), why not just carry the rule over to the implementation of class features?
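The routine view of the rule can be sketched in a few lines of Python (the class and method names are illustrative, not from the post): a client written against the base class's stronger precondition remains correct when a refinement weakens that precondition, since the stronger condition implies the weaker one.

```python
class Account:
    """Abstract supplier: withdraw requires a non-negative amount
    no greater than the balance (the precondition)."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Precondition of the abstract routine.
        assert 0 <= amount <= self.balance
        self.balance -= amount


class OverdraftAccount(Account):
    """Refinement: the precondition is weakened -- any non-negative
    amount is accepted, possibly driving the balance negative."""
    def withdraw(self, amount):
        assert 0 <= amount  # weaker than the parent's precondition
        self.balance -= amount


def client(acct):
    # The client establishes the original (stronger) precondition,
    # so it stays correct whichever refinement it is handed.
    if acct.balance >= 10:
        acct.withdraw(10)


client(Account(50))           # satisfies the strong precondition
client(OverdraftAccount(50))  # the weaker precondition holds a fortiori
```

The sketch shows the uncontroversial half of the argument: as long as the supplier is only ever reached through such clients, weakening is harmless. The post's worry begins when aliasing lets other clients transform the object.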

The fact that they are type-bound makes an important difference. Rather than viewing the target of a call as just another argument to a routine, dynamically chosen so that it meets (in a semi-formal way) the routine's specification, a call should be understood as an event that occurs and (possibly) transforms the current state of an object. That event may or may not be provoked by a given client. Whenever it is, the correctness of that client is unaffected if the precondition gets weakened in a refinement of its supplier. On the other hand, since object aliasing is an important part of object-oriented programming, the general case must account for the transformation of an object by other clients. If we assume that no other client will invoke a certain feature unless its guard (or precondition) is satisfied, weakening it might produce surprising results. Conversely, if we allow ourselves to blur the distinction between a class and an Event-B abstract machine, we might want to make sure that the deadlock condition of a module is not strengthened in its refinements. This would mean that, in any refinement, if the guard of an action is strengthened, it must only be in the context of an action split, where the guards of all resulting actions complement each other so that their disjunction is equivalent to the guard of the refined event.
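The closing condition can be stated compactly. As a sketch (in notation of my choosing, not standard Event-B proof-obligation labels): suppose an abstract event with guard G is split into concrete events with guards G_1, ..., G_n, each possibly stronger than G. The deadlock condition of the module is preserved precisely when the split guards jointly cover the abstract one:

```latex
% Each split guard may strengthen the abstract one,
%   G_i \Rightarrow G  \quad (1 \le i \le n),
% provided that together they cover it:
G \;\equiv\; G_1 \lor G_2 \lor \dots \lor G_n
% so that the deadlock condition is unchanged:
% \lnot (G_1 \lor \dots \lor G_n) \;\equiv\; \lnot G.
```

Under this condition, whenever the abstract event was enabled, at least one of the concrete events is enabled, so no refinement step introduces new deadlock.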

The door does not seem to be completely closed for the notion of precondition weakening but I, for one, would not dwell on it.

Simon Hudon May 18th 2009 ETH Zürich

Friday, May 15, 2009

SH16 - Sacrificing Efficiency for Some Degree of Confidence

Is the concern for software correctness getting in the way of efficiency? Or: sacrificing efficiency for confidence (as opposed to correctness).

Post-scriptum note: the opposition between correctness and confidence is based on the objectivity of the criterion of correctness and the subjectivity of the criterion of confidence. Confidence can be the result of obliviousness.

Also, I have not reread it since posting it on my blog, and it might need some moderation. (End of note)

With modern trends like test-driven development, it seems that, for the sake of having a testable system, one will always (or most of the time) take the shortest path that allows adding just one feature. It is positive to see that the programmer always worries about having something that works, albeit using a very restricted notion of what a "working system" is. However, the fact that the modern programmer (or, to borrow a trendier term, the average programmer) is unable to reason in any effective way about the formulation of his specifications, or even those of his architecture, makes him unable to postpone the implementation until his design has been validated on its own. This points to a blatant lack of separation of concerns. A more disciplined approach would require that the programmer be able to validate his architecture using nothing but the information contained in it and the information contained in the formulation of the relevant requirements.

It is very important to make precise what is meant by validation. Like any term coined for a profound notion, it has been used to designate so many different concepts that it cannot be understood as anything more than a buzzword unless it is explicitly defined.

The validation of an architecture means showing that any implementation following its rules is bound to produce correct results for every valid input. This is a validation impossible to accomplish by testing. Some might argue that model checking would be a valid choice for coming to grips with that separation of concerns. If we forget everything else, they could be right. The problem with model checking is that we still rely on a posteriori verification: the concern for correctness still fails to guide the programmer to his solution. Proponents of the notion of the "average programmer" might be satisfied with that, since they accept that the layman has, at best, a third-rate intellect and that we cannot ask him to reason in any formal way about his work. Still, model checking is clearly a better approach than postponing any kind of validation until after an implementation has been provided.

What I'm getting at is that since implementations are rushed, programmers don't seem to take any care to make them efficient or correct in a general way. All that is expected is that the test suite passes. If any efficiency issues arise (much later in the development), that is considered the right time to address them. Yet it seems that an efficient implementation would have been easier to produce from the very first moment the programmer started writing it.

Simon Hudon Gatineau, December 2008, continued in Zürich, May 2009