Tuesday, June 21, 2011

Why I Prefer Predicates to Sets

Various recent comments by others have made me realize that it is about time I wrote a comparison of sets and predicates.

One comment that springs to mind is a response I got when I said that I almost never use sets, that I prefer predicates. I was asked what the point of choosing was, since predicates can be represented as sets: a predicate can be seen as a boolean function, which in turn can be seen as a functional relation, which is basically a set of tuples. More directly, with every predicate we can associate the set containing exactly the objects that satisfy the predicate. To which I have to add that, for each set, there exists a characteristic predicate satisfied exactly by the objects contained in the set.

What we just established is the semantic equivalence between the two. This equivalence is less important than it might seem. From a logician's point of view, it tells us that for each theorem of predicate logic, there should be an equivalent theorem in set theory.

From an engineering and scientific point of view, that is from the point of view of someone who wants to use either of these tools, there is an important difference, one of methodological importance, which, understandably, is irrelevant for logicians.

The difference lies in the flexibility of the related notation; it is hence of a syntactic nature. If we want to let the symbols do the work --and I do-- we have to make sure that we can find solutions in the simplest possible way.

One way of doing this is to avoid spurious distinctions, i.e. to preserve symmetry as much as we can. This is where the predicate calculus [0] is superior to set theory. If we start by encoding set theory into the predicate calculus, we will see, little by little, a fundamental difference emerge. In the following, capital letters will stand for sets and the corresponding predicates will be the same letters in lower case. As an example:

	(0)  x ∈ P  ≡  p.x
		for all x

with the dot standing for function application and ≡ for logical equivalence [1]. It relates a set to its characteristic predicate.

Note:
Please notice that the convention of upper case and lower case letters is nicely symmetric. We could have defined the characteristic predicate of a set as cp.S, or the set of a predicate as set.p, but we chose not to, in order to preserve the symmetry between the two. Instead of saying that one is the characteristic predicate of the other, we say that they correspond to each other, and the correspondence does not have to appear explicitly in the formulae where they are used.
(end note)
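The correspondence (0) between a set and its characteristic predicate can be sketched in a few lines of Python. This is my own illustration, not part of the notation used here: the finite universe, the names predicate_of and set_of, and the representation of predicates as boolean functions are all choices made only for this sketch.

```python
# A sketch of the set/predicate correspondence (0), over a small
# finite universe. Names and representation are illustrative choices.

UNIVERSE = range(10)

def predicate_of(S):
    """The characteristic predicate p of a set P:  p.x  ≡  x ∈ P."""
    return lambda x: x in S

def set_of(p):
    """The set corresponding to a predicate p (restricted to the universe)."""
    return {x for x in UNIVERSE if p(x)}

P = {1, 3, 5, 7}
p = predicate_of(P)

assert all((x in P) == p(x) for x in UNIVERSE)   # equation (0)
assert set_of(p) == P                            # the round trip is the identity
```

Restricting set_of to a finite universe is only what makes the round trip mechanically checkable; the correspondence itself does not depend on that restriction.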

Notation

Here is a little more syntax before I start the comparison. I take it mostly from [DS90].

If we want to say that a predicate p holds exactly when q holds, we can express it as follows:

		〈∀ x:: p.x ≡ q.x〉

It is a bit too verbose for something that is supposed to relate p and q very simply and, for that reason, we introduce a shorthand. Instead of quantifying over x and using x everywhere in the formula, we will use an operator that says: this predicate holds everywhere:

	 	[p ≡ q]

It is more concise and it is an improvement. It might seem like cheating, though, unless we know of the notion of pointwise extension of operators. For instance, ≡ applies to boolean operands, but we allow ourselves to apply it to boolean-valued functions too. This is how it then behaves:

	(1)  (p ≡ q).x  ≡  p.x  ≡  q.x
		for all x

We implicitly postulate that all logical connectives apply to boolean-valued functions. If we add one more definition, the previous universal quantification can be calculated into a form using the everywhere operator:

	(2)  [p]  ≡ 〈∀ x:: p.x〉

		 〈∀ x:: p.x ≡ q.x〉
		=   { pointwise extension }
		 〈∀ x:: (p ≡ q).x〉
		=   { (2) with p := (p ≡ q) }
		  [p ≡ q]
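The pointwise extension (1) and the everywhere operator (2) can likewise be sketched over a finite universe. Again, this is a Python illustration of my own, not part of the [DS90] notation; the names eqv and everywhere are invented for the sketch.

```python
# A sketch of the pointwise extension of ≡, cf. (1), and of the
# everywhere operator [·], cf. (2), over a finite universe.

UNIVERSE = range(10)

def eqv(p, q):
    """Pointwise extension of ≡:  (p ≡ q).x  ≡  (p.x ≡ q.x)."""
    return lambda x: p(x) == q(x)

def everywhere(p):
    """The everywhere operator:  [p]  ≡  ⟨∀ x:: p.x⟩."""
    return all(p(x) for x in UNIVERSE)

p = lambda x: x % 2 == 0
q = lambda x: x % 2 == 0
r = lambda x: x < 5

# ⟨∀ x:: p.x ≡ q.x⟩ coincides with [p ≡ q], as calculated above.
assert all(p(x) == q(x) for x in UNIVERSE) == everywhere(eqv(p, q))
assert everywhere(eqv(p, q)) is True
assert everywhere(eqv(p, r)) is False
```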

Translating Sets

We can start our comparison by translating sets into predicate formulations. We will concentrate on set equality, inclusion, intersection, union, difference and symmetric difference.

	(3)  P = Q  ≡  [p ≡ q]
	(4)  P ⊆ Q  ≡  [p ⇒ q]
	(5)  x ∈ P ∩ Q  ≡  (p ∧ q).x
	(6)  x ∈ P ∪ Q  ≡  (p ∨ q).x
	(7)  x ∈ P \ Q  ≡  (p ∧ ¬q).x
	(8)  x ∈ P △ Q  ≡  (p ≢ q).x

with △ being the symmetric difference --the set of elements present in exactly one of the two operands-- and ≢ being the discrepancy, the negation of ≡. We see that the first two operators are translated into a form combining one logical connective with the everywhere operator, and the others into pointwise applications of a connective. Also, no set operator is built around the equivalence (or the implication) alone. Therefore, there is no unique translation of expressions like the following:

	[p ≡ q ≡ r ≡ s]

To allow us to manipulate the above, we introduce a definition for set equality. It can be calculated from our previous postulates.

	(9)  P = Q  ≡ 〈∀ x:: x ∈ P  ≡  x ∈ Q〉

together with the two lemmas below.

	(10)  [¬(p ≡ q)  ≡  ¬p ≡ q]
	(11)  [¬(p ≡ q)  ≡  p ≢ q]

We can now calculate:

		  [p ≡ q ≡ r ≡ s]
		=   { double negation }
		  [¬¬(p ≡ q ≡ r ≡ s)]
		=   { ¬ over ≡ twice, see (10) }
		  [¬(p ≡ q) ≡ ¬(r ≡ s)]
		=   { (11) twice }
		  [p ≢ q ≡ r ≢ s]
		=   { everywhere expansion (2) }
		 〈∀ x:: (p ≢ q ≡ r ≢ s).x〉
		=   { pointwise extension of ≡ }
		 〈∀ x:: (p ≢ q).x ≡ (r ≢ s).x〉
		=   { (8) twice }
		 〈∀ x:: x ∈ P △ Q ≡ x ∈ R △ S〉
		=   { (9) }
		  P △ Q  =  R △ S

Of course, for someone used to it, this is a simple conversion and there would be no need to be this explicit, but I include it nonetheless for the sake of those who are not.

Our formula is therefore proven to be equivalent to the one below.

	P △ Q  =  R △ S

We could also introduce the negations elsewhere and get:

	P  =  Q △ R △ S
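The conclusion of the calculation --that [p ≡ q ≡ r ≡ s] says the same thing as both set formulations-- can be checked exhaustively over a small universe. Here is a Python sketch of my own; the universe, the helper names subsets and sym_diff, and the brute-force style are all illustrative choices.

```python
# Exhaustive check that [p ≡ q ≡ r ≡ s] coincides with the two set
# formulations derived above, over a 3-element universe.
from itertools import chain, combinations

UNIVERSE = [0, 1, 2]

def subsets(u):
    """All subsets of u, as Python sets."""
    return [set(c) for c in
            chain.from_iterable(combinations(u, k) for k in range(len(u) + 1))]

def sym_diff(a, b):
    """The symmetric difference △ (Python's ^ on sets)."""
    return a ^ b

for P in subsets(UNIVERSE):
    for Q in subsets(UNIVERSE):
        for R in subsets(UNIVERSE):
            for S in subsets(UNIVERSE):
                # [p ≡ q ≡ r ≡ s]; equivalence is associative,
                # so one parenthesization suffices.
                pred = all(((x in P) == (x in Q)) == ((x in R) == (x in S))
                           for x in UNIVERSE)
                assert pred == (sym_diff(P, Q) == sym_diff(R, S))
                assert pred == (P == sym_diff(Q, sym_diff(R, S)))
```

Over all 8⁴ = 4096 quadruples of subsets, both set formulations agree with the predicate formula.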

Since equivalence is associative --that is, ((p ≡ q) ≡ r) ≡ (p ≡ (q ≡ r))-- and symmetric --that is, (p ≡ q) ≡ (q ≡ p)-- there is a whole lot of other formulae from set theory that are equivalent to the predicate calculus formula we showed. All those results create many theorems which basically "mean" the same thing but that one may have to remember anyway. By comparison, our one predicate formula can be parsed in various ways, and we can use it to substitute p ≡ q for r ≡ s, or r for p ≡ q ≡ s, to name just two usages of a single formula.
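The associativity and symmetry claims can themselves be checked by brute force. A Python sketch of my own follows; note that Python's == on booleans models ≡ only when parenthesized explicitly, since a chained a == b == c means a conjunction, not an associative reading.

```python
# Brute-force check that ≡, modelled by == on booleans with explicit
# parentheses, is associative and symmetric.
from itertools import product

for p, q, r, s in product([False, True], repeat=4):
    left   = ((p == q) == r) == s   # parsed from the left
    middle = (p == q) == (r == s)   # parsed in the middle
    right  = p == (q == (r == s))   # parsed from the right
    assert left == middle and middle == right
    # symmetry: the operands can be reordered freely
    assert left == (((s == r) == q) == p)
```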

One could suggest that the problem lies with the usual reluctance of logicians to talk about set complements: if we had an operator, say ⊙, to stand for the complement of △ --which can be seen as problematic because it creates a set with elements taken from neither of its operands-- we wouldn't need to introduce spurious negations. The above formula could then be translated as:

	P ⊙ Q  =  R ⊙ S 

It remains to choose which equivalence becomes an equality and which ones become complements of symmetric differences. I believe the biggest problem is that set equality conflates two orthogonal notions: boolean equality and (implicit) universal quantification. The same goes for implication.

In short, this is one example where set theory destroys a beautiful symmetry and introduces spurious distinctions that pollute one's reasoning rather than help it. Whenever I have a problem to solve that calculation can help with, in the vast majority of cases I use predicate calculus rather than set theory.

Simon Hudon
April 10th, 2011
Meilen

Notes

[0] I use the calculus, not the logic. The difference is that the logic deduces theorems from other theorems whereas the calculus simply helps us create relations between formulae and find new formulae from old ones whether they are theorems or not.

[1] Like many others, I have forsaken the double arrow for that purpose. The choice can be traced back to [DS90] and [AvG90].

References

[AvG90] Antonetta J. M. van Gasteren. 1990. On the Shape of
       Mathematical Arguments. Springer-Verlag
       New York, Inc., New York, NY, USA.

[DS90] Edsger W. Dijkstra and Carel S. Scholten. 1990. Predicate
       Calculus and Program Semantics. Springer-Verlag New
       York, Inc., New York, NY, USA.
