Before I begin, let me point out that the publication of this blog entry was made possible because I didn't use Blogger's interface to edit it (or at least, I used it as little as possible). Instead, I wrote it in a plain text editor using a markup language I am developing. Since I haven't developed an HTML generator yet, I had to convert it to HTML myself. Blogger did not make it easy, but I think the result is much better than what I got previously by just fighting with the web interface. I would certainly appreciate any feedback concerning the resulting layout.
(end of note)
The first statement of this problem that I have seen is in Wim Feijen's note WF114 [0]. I really like the structure of his presentation but, whenever I have reproduced it, either for myself or to present the problem and a method for finding a solution to somebody else, I have made one slight change: instead of using predicate transformer semantics, I use a relational semantics, and I believe the switch to be quite significant from a practical standpoint. I still use the invariant-based loop development techniques developed by Dijkstra and company rather than the recursion / precondition technique of Hehner, and I think this is very characteristic of my style: it shows that I prefer to manipulate one small part of the program at a time.
DEVELOPMENT
===========
The problem at hand consists in producing an array with the first N cubes without resorting to multiplication operations. Formally, the postcondition we are trying to satisfy is:
P0: (A i: 0 ≤ i < N: f.i = i^3)

with f the output variable (a function).

Note on notation: '.' is the function application operator and (A i: R: T) is a universal quantification over the range R and with the term T. Here, we use the property that ≤ and < are mutually conjunctional, which means that their combined application can be unfolded into
a ≤ b < c ≡ a ≤ b ∧ b < c

They are also mutually conjunctional with equality (=), whereas logical equivalence (≡) is not conjunctional at all, because it is associative and the two features of the operator would interfere. (end of note)

We can now proceed to find a suitable invariant for our loop. A good technique for doing so is to replace some constants in the postcondition by variables. Since it is a quantification, it seems reasonable to focus our search on the range. There we find 0 and N. We could choose either or both, but if we use 0 as our initial value and N as the final one, we get the first cube for free. We will therefore opt for replacing only N with a variable:
E0: k = N
J0: (A i: 0 ≤ i < k: f.i = i^3)

If we start k off at 0, the range of J0 becomes false and the invariant is therefore trivially true. From there, it is not a big step to say that increasing k regularly will lead us to E0 (our variant is then N-k).
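As an aside, the mutually conjunctional reading of ≤ and < used in these ranges is also how Python chains comparison operators, which makes the unfolding (and the empty-range case behind J0's trivial truth) easy to check mechanically. A small illustrative snippet, not part of the derivation:

```python
# Python reads a <= b < c conjunctionally, exactly like the
# mutually conjunctional convention used in the ranges above.
for a, b, c in [(0, 3, 5), (2, 1, 5), (0, 5, 5)]:
    assert (a <= b < c) == ((a <= b) and (b < c))

# With k = 0 the range 0 <= i < k is empty, so a universal
# quantification over it is trivially true:
k = 0
assert all(f_i == i ** 3 for i, f_i in zip(range(0, k), []))
```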
I0: k = 0
S0: k' = k + 1

It is now time to see how we will maintain J0 (in the loop) while applying S0 to converge.

Note on proof obligations
If we use the technique of the invariant to build our loop, we have to fulfill certain proof obligations. Assuming that J is the invariant, B is the body of the loop, E is the exit condition, IN is the initialization of the loop and P is the postcondition:
(0) J' ⇐ J ∧ B ∧ ¬E     "B (and ¬E) preserve J"
(1) J ⇐ IN               "IN establishes J"
(2) P ⇐ J ∧ E            "J and E lead to P"

In the preceding definitions, primed predicates are predicates in which all unprimed program variables are replaced with their primed versions. Unprimed variables designate initial values (before the execution of a given statement or program) and primed variables designate final values.
(3) (B and ¬E preserve J) ∧ (IN establishes J) ∧ (J and E lead to P)
    ≡  { IN } until E do B od { P }

The last line is to be interpreted as a Hoare triple with assertions between brackets. It reads "started in a state satisfying IN, the program [here the while loop] finishes in a state satisfying P." (end of note)
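These obligations can be animated as a runtime-checked loop skeleton. The following Python sketch is my own illustration (the name until_loop and its interface are invented for this example); it asserts the invariant exactly where the proof obligations demand it holds:

```python
def until_loop(state, inv, exit_cond, body):
    """Run `until E do B od`, asserting the invariant J around each step.

    `state` must already satisfy J (obligation (1): IN establishes J);
    each call to `body` must preserve J (obligation (0)); on exit,
    J together with E yields the postcondition (obligation (2)).
    """
    assert inv(state)          # (1) IN establishes J
    while not exit_cond(state):
        state = body(state)
        assert inv(state)      # (0) B and not-E preserve J
    return state               # here J and E both hold, hence P by (2)

# Trivial usage: count k up to N while maintaining 0 <= k <= N.
N = 5
final = until_loop(
    state={'k': 0},
    inv=lambda s: 0 <= s['k'] <= N,
    exit_cond=lambda s: s['k'] == N,
    body=lambda s: {'k': s['k'] + 1},
)
assert final['k'] == N
```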
  (A i: 0 ≤ i < k': f'.i = i^3)
=   { S0 }
  (A i: 0 ≤ i < k + 1: f'.i = i^3)
=   { Split off one term }
  (A i: 0 ≤ i < k: f'.i = i^3) ∧ f'.k = k^3
=   { Modify only f.k;  S1: (A i: 0 ≤ i < k: f'.i = f.i) }
  J0 ∧ f'.k = k^3
S1: (A i: 0 ≤ i < k: f'.i = f.i)

The last line of the previous proof makes a nice assignment. The only problem is that it requires some multiplications, so we will continue our calculations rather than adopt it as our next assignment.
  f'.k = k^3
=   { Let's introduce a new variable to hold k^3 }
    { at all times.  J1: a = k^3 }
  f'.k = a
S2: f'.k = a
J1: a = k^3

We could add the symmetric complement of S1, namely that everything beyond k in the values of f stays unchanged, but it is not required yet. We can add it afterwards to complement S1 and S2 as array assignments; it is not urgent.
(4) (J0' ≡ J0 ∧ S2) ⇐ J1 ∧ S0 ∧ S1

Since the left argument of ⇐ is monotonic, we can weaken (4) by weakening its left-hand side, replacing the equivalence by a consequence (⇐). After applying the shunting rule (see (A) in the appendix), we have exactly the shape of the proof obligation for maintaining an invariant... except that the invariant used to maintain J0 (i.e. J0 ∧ J1) is much stronger than J0. We can prove that we can maintain it too, though.
(5) J0' ⇐ J0 ∧ J1 ∧ S0 ∧ S1 ∧ S2

To maintain J1, we will split it in two, manipulate the left-hand side of the equality (since we know a lot about k) and hope it will lead to operations on a.
  k'^3
=   { S0 }
  (k + 1)^3
=   { Unfold _^3; ∙ over + }
  k^3 + 3∙k^2 + 3∙k + 1
=   { J1 }
  a + 3∙k^2 + 3∙k + 1
=   { We need a new variable to hold the term }
    { which contains a product.  J2: b = 3∙k^2 + 3∙k + 1 }
  a + b
=   { We want to fall back on J1'.  S3: a' = a + b is a }
    { good candidate. }
  a'
J2: b = 3∙k^2 + 3∙k + 1
S3: a' = a + b
(6) J1' ⇐ J1 ∧ J2 ∧ S0 ∧ S3

It seems like we have just created some more work for ourselves, but we can be reassured by the sight of the decreasing degree of the equations in our new invariants.
  3∙k'^2 + 3∙k' + 1
=   { S0 }
  3∙(k + 1)^2 + 3∙(k + 1) + 1
=   { ∙ over + }
  3∙k^2 + 6∙k + 3 + 3∙k + 3 + 1
=   { J2 }
  b + 6∙k + 6
=   { You know the drill!  J3: c = 6∙k + 6 }
  b + c
=   { S4: b' = b + c }
  b'
J3: c = 6∙k + 6
S4: b' = b + c
(7) J2' ⇐ J2 ∧ J3 ∧ S0 ∧ S4

Now for the last touch on the invariant:
  6∙k' + 6
=   { S0 }
  6∙k + 6 + 6
=   { J3 }
  c + 6
=   { S5: c' = c + 6 }
  c'
S5: c' = c + 6
(8) J3' ⇐ J3 ∧ S0 ∧ S5

We can now happily conclude
(9) S6 preserves J4

with
S6: S0 ∧ S1 ∧ S2 ∧ S3 ∧ S4 ∧ S5
J4: J0 ∧ J1 ∧ J2 ∧ J3

We started off knowing that J0 ∧ E0 ⇒ P0, so all that is left for a correct loop is to find the initialization. We already have I0 and, from there, we can easily calculate the initial values of the auxiliary variables. Let's skip the calculations and see the initial values right away.
I1: a = 0
I2: b = 1
I3: c = 6

We can see that there is no value specified for f and, as a matter of fact, any value will do. We now have the final lemma for the correctness of this loop.
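Assembled into a loop, S6 with the initialization I0-I3 gives the following program. This Python rendering is my own sketch (the derivation itself is language-independent); the body sequentializes the simultaneous assignment in an order where each variable is read before it is overwritten, and the invariants J1-J3 are asserted, with multiplication used only for the checks, never for the table itself:

```python
def table_of_cubes(N):
    """Fill f[0..N) with i^3 using only additions."""
    f = [None] * N
    k, a, b, c = 0, 0, 1, 6            # I0, I1, I2, I3
    while k != N:                      # negation of E0: k = N
        assert a == k**3               # J1 (multiplication appears
        assert b == 3*k**2 + 3*k + 1   # J2  only in these checks,
        assert c == 6*k + 6            # J3  not in the computation)
        f[k] = a                       # S2
        a = a + b                      # S3 (reads the old b)
        b = b + c                      # S4 (reads the old c)
        c = c + 6                      # S5
        k = k + 1                      # S0
    return f

assert table_of_cubes(5) == [0, 1, 8, 27, 64]
```

The f, a, b, c, k order matters: S3 must read b before S4 updates it, and S4 must read c before S5 updates it, which is exactly the read-before-overwrite concern discussed in the conclusion.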
(10) I0 ∧ I1 ∧ I2 ∧ I3 establishes J0 ∧ J1 ∧ J2 ∧ J3

CONCLUSION
==========
Looking at the end result, we can see that we ended up with a series of assignments (SX), but we haven't specified an order in which they have to be executed. This is because coming up with an order is both simple and tedious, and the order is so arbitrary that it is all noise. For that reason, we can trust a compiler to do it for us. When we have to deal with a programming language that doesn't support multiple simultaneous assignments, we have to take the time to introduce that noise ourselves. The key is that we must not overwrite a variable before we are done with its initial value. We have a bit of a problem when we run into an assignment where the use and the writing of variables are cyclically dependent, as in the following:
S7: x' = y ∧ y' = x
I4: t = x
  x' = y ∧ y' = x
=   { I4 }
  x' = y ∧ y' = t
=   { Assignment law (B) }
  (x := y) ; (x' = x ∧ y' = t)
=   { Assignment law (B) }
  (x := y) ; (y := t) ; (x' = x ∧ y' = y)
⇐   { Skip introduction (C) }
  (x := y) ; (y := t) ; skip
=   { Identity of ; (D) }
  (x := y) ; (y := t)

And, of course, it is also simple to implement I4 and prepend it to the program we just derived.

Note: Since we are interested in a program that ends up with I4, we are going to use primed variables. (end of note)
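As straight-line code, the derived sequence, with the initialization I4 prepended, is the familiar swap through an auxiliary variable (Python shown for concreteness):

```python
# The cyclic simultaneous assignment  x' = y ∧ y' = x
# sequentialized via the auxiliary variable t (I4: t = x).
x, y = 1, 2
t = x          # I4: t = x
x = y          # x := y
y = t          # y := t
assert (x, y) == (2, 1)
```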
  t' = x ∧ x' = x ∧ y' = y
=   { Assignment law (B) }
  (t := x) ; (t' = t ∧ x' = x ∧ y' = y)
⇐   { Skip introduction (C) }
  (t := x) ; skip
=   { Identity of ; (D) }
  t := x

In short, this is why I don't bother doing it with the body of my loop.

Finally, the main interest of this programming exercise does not lie in the resulting program. Quite frankly, it accomplishes quite a banal task. However, the technique which led us to it is quite systematic. We used very little knowledge to find it and, unlike Feijen's version, we did not introduce assignments of '?' to mark the necessity of creating a new assignment and then 'guess' what the expression should be. To be honest, it is not a very hard guess, but it is still outside the scope of a calculation. The fact that I integrated that choice into a calculation makes it easier to justify.

Regards,
Simon Hudon
ETH Zürich

APPENDIX
========
Unusual laws of propositional calculus
(A) A ∧ B ⇒ C  ≡  A ⇒ (B ⇒ C)     { "shunting" }

Hehner's programming laws [1]
(B) (x := E) ; P ≡ P [x \ E]    { "assignment law": P [x \ E] is the }
                                { variable substitution notation,    }
                                { P is a predicate, x a program      }
                                { variable and E an expression       }
(C) x' = x ⇐ skip               { For any program variable x         }
(D) P ; skip ≡ P                { For any program predicate P        }

REFERENCES
==========
[0] Wim Feijen, WF114 - The problem of the table of cubes,
    http://www.mathmeth.com/wf/files/wf1xx/wf114.pdf
[1] Eric Hehner, A Practical Theory of Programming,
    http://www.cs.toronto.edu/~hehner/aPToP/