Aside In a recent programming experiment where I derived my code from its proof of correctness, I was parsing a string and, since all the properties of a language can be so elegantly formulated in terms of strings of characters, I used strings to represent the state of the computation. I had my input string S, the processed string x, and the leftover string z, linked by the invariant:
S = x ++ z
with ++ the string concatenation operator. At each iteration, I split z into two parts such that
z = [ y ] ++ w
and scanning one character was encoded as:
x, z := x ++ [ y ], w
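To see the step concretely, here is a minimal Haskell sketch of this specification-level loop (my own rendering with hypothetical names, not the code from the experiment); the invariant s == x ++ z holds at every recursive call:

-- Specification-level scan (hypothetical names). The pair (x, z)
-- satisfies s == x ++ z at every call to go.
scanAll :: String -> (String, String)
scanAll s = go ("", s)
  where
    go (x, y : w) = go (x ++ [y], w)  -- x, z := x ++ [y], w
    go (x, z)     = (x, z)            -- z is empty: nothing left to scan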
If you can't help yourself and have to imagine the memory footprint and performance of this algorithm, you might be scandalized by how inefficient it is. It is not meant to be executed as such; it is only a very simple way of defining what the algorithm does. With one data refinement, I went to a representation where the use of memory is limited to the array representing the input, an integer index indicating which character y is, and two booleans representing the acceptance state of the algorithm. There is no efficiency problem with that, my proofs of correctness and of data refinement are very simple, and I consider them the clearest documentation for the algorithm.
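To give the flavour of that refinement, here is a hedged sketch (again with names of my own): the coupling invariant is x = take i s and z = drop i s, so the pair of strings is represented by the input s together with the index i alone. I omit the two acceptance booleans, which don't affect the scanning itself.

-- Refined scan (hypothetical rendering): the input s is never copied;
-- x and z are recovered as take i s and drop i s.
scanAll' :: String -> Int
scanAll' s = go 0
  where
    n = length s
    go i
      | i < n     = go (i + 1)  -- scanning one character: i := i + 1
      | otherwise = i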
end of aside
In summary, I think the right way of seeing that problem is:
Pattern matching is a powerful way of associating properties or code with the shape of the data, and we need to be able to express it. Since we can use it with code, it would be important to have programming techniques to make it executable, that is, techniques usable by a programmer to transform a specification using pattern matching into executable code implementing it. If the methods are especially effective and simple, we could have them implemented in our compilers, but this is far from being a design consideration for the construct.
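As a tiny illustration of what I mean (an example of mine, not taken from any particular proposal), here is a Haskell definition where each clause associates code with a shape of the data, and where the specification happens to be directly executable:

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Each clause pairs a shape of the data with the corresponding result;
-- in this simple case the specification is already executable code.
mirror :: Tree a -> Tree a
mirror Leaf         = Leaf
mirror (Node l v r) = Node (mirror r) v (mirror l)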
* * *
More recently, in my compiler design class, we were presented with a research programming language said to be "relationship-based". I think the idea is a good one: they factor inter-class relationships out of classes and treat them as separate modules. The question of whether each such relationship deserves its own module is worthy of debate but, for now, what bothers me is that they treat it as a programming language construct, that is to say, they keep an "implementability anchor" in their heads while they design the notation. One of the consequences is that they refrain from adding anything they aren't sure they can compile automatically. I would rather see this emerge as a specification structuring artifact and see some programming techniques emerge in connection with it, so that the most common ones can be implemented quite straightforwardly. The rest stays open to experimentation and, in all cases, a case-by-case implementation allows optimizations that systematic compilation wouldn't be able to do.
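To make the idea concrete, here is a hedged Haskell sketch of a relationship factored out as its own module (my own illustration, not the notation we were shown); a hypothetical Attends relation between students and courses would live outside both entities:

import qualified Data.Set as Set

-- A relationship as a module of its own: neither end of the relation
-- needs to know about it. All names here are hypothetical.
newtype Relation a b = Relation (Set.Set (a, b))

emptyRel :: Relation a b
emptyRel = Relation Set.empty

relate :: (Ord a, Ord b) => a -> b -> Relation a b -> Relation a b
relate x y (Relation s) = Relation (Set.insert (x, y) s)

related :: (Ord a, Ord b) => a -> b -> Relation a b -> Bool
related x y (Relation s) = Set.member (x, y) s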
I think the big problem here is still taking current practice as a guideline for academic research: "since programmers don't write specifications, we won't provide a means to write good specifications even though progress would require it". I will put it in my "more of the same" bin, look at the original ideas, and forget about the language. Actually, I will do so when my semester is over.
Simon Hudon
ETH Zürich
finished on November 18th