Wednesday, March 23, 2011

Done in One Step

This is a short note to relate an anecdote that just happened to me. I'm learning Rutger M. Dijkstra's computation calculus by reading his paper of the same name, and I came across a very short proof.

The objects of the calculus are computations, taken as predicates over state traces. The result looks like a relational calculus in that it never names the computations it is dealing with. It adds •, ; and ~ to the usual predicate calculus. • takes a state predicate `p' (a predicate over traces that is satisfied only by traces of one state) and turns it into a predicate satisfied by traces whose first state satisfies `p'. It is captured by definition (13):

(13)	•p = p;true

where ; is the sequential composition of computations. `a;b' can be interpreted as the computation satisfied by traces produced by executing `a' first and then `b'. For the proof, all we need is the state restriction property. It applies to a state predicate `p' and an arbitrary computation `s'.

(8)	p;s  =  p;true  ∧  s
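
To make these operators concrete, here is a minimal sketch of a finite-trace model in Haskell. This is my own simplification (Dijkstra's traces may also be infinite) and all the names and sample values below are mine, not the paper's: a computation is a predicate on non-empty lists of states, ; fuses two traces at a shared junction state, and a state predicate holds only of one-state traces. With these definitions, (13) is taken as the definition of • and (8) can be checked on small examples.

    type State = Int                      -- any state space will do
    type Comp  = [State] -> Bool          -- a computation: a predicate on non-empty traces

    -- Sequential composition: the trace splits into two pieces that
    -- overlap at ("are fused at") the junction state.
    comp :: Comp -> Comp -> Comp
    comp a b t = or [ a (take (i+1) t) && b (drop i t) | i <- [0 .. length t - 1] ]

    -- A state predicate, viewed as a computation: it holds only of one-state traces.
    statePred :: (State -> Bool) -> Comp
    statePred p [x] = p x
    statePred _ _   = False

    -- `true': the computation that holds of every trace.
    top :: Comp
    top _ = True

    -- (13):  •p = p;true
    bullet :: (State -> Bool) -> Comp
    bullet p = comp (statePred p) top

    -- A small test of (8), p;s = p;true ∧ s, for one sample p and one sample s.
    check8 :: Bool
    check8 = and [ comp p s t == (comp p top t && s t) | t <- traces ]
      where
        p      = statePred even
        s      = \u -> length u >= 2      -- an arbitrary computation
        traces = [ [x] | x <- [0..3] ] ++ [ [x,y,z] | x <- [0..3], y <- [0..3], z <- [0..3] ]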

Finally, ~ is the negation of a state predicate; it also yields a state predicate. That is to say that computations form a predicate algebra and so do state predicates, with ~ playing the role of ¬. And now, here is the proof. Don't spend too much time reading it; the second step is far from obvious.

	  [ •~p ⇒ ¬•p ]
	=   { Shunting, definition (13) (see above) }
	  [ p;true ∧ ~p;true  ⇒  false ]
	=   { state restriction (8) thrice }
	  [ (p ∧ ~p);true  ⇒  false ]
	=   { Predicate calculus }
	  [ false;true  ⇒  false ]
	=   { false is left zero of ; }
	  true
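
The end result, at least, is easy to test in the finite-trace sketch above (with p := even as an arbitrary sample predicate of my own choosing): [ •~p ⇒ ¬•p ] is the same as saying that no trace satisfies both •~p and •p.

    -- A small test of the theorem,  [ •~p ⇒ ¬•p ],  with p := even.
    checkThm :: Bool
    checkThm = and [ not (bullet (not . even) t && bullet even t) | t <- traces ]
      where
        traces = [ [x] | x <- [0..3] ] ++ [ [x,y] | x <- [0..3], y <- [0..3] ]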

I decided to see for myself how the second step works, and my calculation took five steps instead of one. After struggling with it for a minute or so in my head, I decided to make the appeals to associativity explicit in order to consider them as manipulation opportunities.

One last thing: the weakest state predicate is 1; it plays the role of true in the corresponding predicate algebra, and we can express that a predicate `p' is a state predicate by stating that it is stronger than 1: [ p ⇒ 1 ]. In my calculation, ~ doesn't play a role, so I replace `~p' with `q'. We are therefore trying to transform p;true ∧ q;true into (p ∧ q);true using equality-preserving steps.
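
In the finite-trace sketch, `1' is simply the predicate that holds of exactly the one-state traces (again, a simplified model with sample values of my own), and its role as the identity of ;, which the last step below relies on, can be tested directly.

    -- `1': the weakest state predicate; every state predicate implies it.
    one :: Comp
    one t = length t == 1

    -- A small test that 1 is the identity of ;  (1;s = s and s;1 = s).
    checkUnit :: Bool
    checkUnit = and [ comp one s t == s t && comp s one t == s t | t <- traces ]
      where
        s      = \u -> head u <= last u   -- an arbitrary computation
        traces = [ [x] | x <- [0..3] ] ++ [ [x,y] | x <- [0..3], y <- [0..3] ]

With that in place, here is the calculation.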

	  p;true ∧ q;true
	=   { (8) using p is a state predicate }
	  p;(q;true)
	=   { At this point, all other applications
	      of (8) would take us back a step or
	      would lead nowhere, so let's use
	      associativity }
	  (p;q);true
	=   { Now we can recognize the left hand 
	      side of (8) so we apply it using that p
	      is a state predicate again }
	  (p;true ∧ q);true
	=   { We can now take advantage of the
	      fact that q is a state predicate for the
	      first time with [ q ⇒ 1 ] or, rather
	      [ q  ≡  q ∧ 1 ] }
	  (p;true ∧ 1 ∧ q);true
	=   { Now p;true ∧ 1 matches the right
	      hand side of (8) and we can use it
	      to eliminate true and we can 
	      eliminate 1 because it is the identity
	      of ; }
	  (p ∧ q);true
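
As a last sanity check in the sketch (with sample state predicates of my own), the equality we just derived holds on small traces:

    -- A small test of  p;true ∧ q;true  =  (p ∧ q);true.
    checkStep :: Bool
    checkStep = and [ (comp p top t && comp q top t) == comp pq top t | t <- traces ]
      where
        p      = statePred even
        q      = statePred (> 1)
        pq     = statePred (\x -> even x && x > 1)   -- the state predicate p ∧ q
        traces = [ [x] | x <- [0..4] ] ++ [ [x,y] | x <- [0..4], y <- [0..4] ]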

There are mainly two things that helped in designing the above calculation.

  1. We kept track of the properties that were left unused through the calculation. This especially helped in moving forward.

  2. Realizing that some intermediate formulae would contain a series of consecutive ; and that we might get to manipulate different groupings led us to make the appeals to associativity explicit, and made apparent the best way to use (8) for the second time.

Although I'm happy to use the opportunity to practice various techniques for designing formal proofs, the writing of the above was prompted by a (mild) dissatisfaction with Dijkstra's proof for leaving out most of the helpful and relevant bits.

Simon Hudon
March 23rd, 2011
ETH Zurich
