I have been thinking about the issue of what the "real" range of logic expressions is. (Some of this is a rehash of things I've already mumbled about at UtahLogic meetings.) I think that I'm at the point where it might be argued that it may actually be useful. (I cover that at the end.)

I'll use "=>" below to mean some sort of implication, and "==" for logical equivalence.

When one writes "P(x,y) => P(y,x)" one does the quick mental proof to get "P(x,y) == P(y,x)", and normally takes this as meaning that the truth value of P(x,y) is the same as the truth value of P(y,x).
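That quick mental proof is nothing more than the implication together with its own substitution instance under x/y, y/x. As a purely illustrative sketch (not part of the original note), the fact checks in Lean 4:

```lean
-- From the schema P(x,y) → P(y,x), the instance with x and y swapped
-- gives P(y,x) → P(x,y); the two directions together are P(x,y) ↔ P(y,x).
theorem p_symm_iff {α : Type} (P : α → α → Prop)
    (h : ∀ x y, P x y → P y x) (x y : α) : P x y ↔ P y x :=
  ⟨h x y, h y x⟩
```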

Taking the [somewhat extreme] formalist view, there is nothing there that forces us to have the interpretation that "=>" applies in this case to truth values.

Take, for example, the usual novice logic student's error of writing something like:

(x+y) => (y+x)

Normally [and rightly] the logic instructor will point out that an obvious substitution yields

(5+6) => (6+5)

and basic arithmetic gives

11 => 11

and then asks the student what it means for a number to imply a number, and what the student might think an expression like "5 => 6" would mean. And then, by the usual instructor-induced Reductio Ad Absurdum, the student sees the error of his ways, and goes on to sin no more. [But there seems a big difference to me emotionally between 11=>11, which I like as an instance of P=>P, and 5=>6, which I don't like and has always smelled of inconsistency.]

The instructor will also, of course, offer the "corrected" notation of "(x+y) = (y+x)" and will warn the student about his next probable error of confusing "=" with "equivalent".

**However**, a rather pedantically formalist interpretation of "=>" could take the domain as (expression) graphs, and the expressions on either side as a notation to express specific graphs. Then "=>" could mean something like the idea that we can transform the (expression) graph on the left into the graph on the right. And Modus Ponens then gets reinterpreted to mean that any time we have the graph on the left in our proof, we can also add the graph on the right.
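This reinterpreted Modus Ponens is easy to make concrete. A minimal sketch (representation and names are mine, not part of the original note): expressions as nested tuples standing in for graphs, and a "=>" rule as a licensed transformation from one specific graph to another.

```python
# "(x+y) => (y+x)" read as: the graph (+ x y) may be turned into (+ y x).
# Nothing here is a truth value; the tuples are just shapes.
commute = (("+", "x", "y"), ("+", "y", "x"))

def modus_ponens(known_graphs, rule):
    """If the left graph is already in the proof, add the right graph."""
    lhs, rhs = rule
    if lhs in known_graphs:
        return known_graphs | {rhs}
    return known_graphs

proof = {("+", "x", "y")}
proof = modus_ponens(proof, commute)
print(("+", "y", "x") in proof)  # True
```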

If we play further, in any system with semi-substitutability of implication [p can be replaced by anything that implies it in antecedent position, and by anything it implies in consequent position], or with the stronger substitution of equivalents, the rules for dealing with equals and equivalents appear to be practically identical.

Playing this game, the only way you ever see 5=>6 is if the underlying axioms of arithmetic are inconsistent and can generate things like 5=6. [*** Needs proof... I think this is straightforward; generating the inequality from the equality, given the inequivalence, has been straightforward in examples. ***]

Taking (1+1) as a graph, and 2 as a (degenerate) graph, and taking "=>" and the "==" generated from it as graph transformations, one can take (1+1) = 2 as saying that any time you have (1+1) in an expression tree you can replace it with 2, and any time you have 2 you can replace it with the graph (1+1).

These two together mean that (1+1) is equivalent to 2, and they can be substituted for each other. We aren't committed to assuming that (1+1) "has a value" at all, only that it has the right form. It isn't an expression; it is now just a graph [that could, if you really wanted, be interpreted as an expression].
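Read as graph transformations, the two directions of (1+1) == 2 are just mechanical subgraph replacement. A sketch (the tuple representation is mine, for illustration; nothing here evaluates anything):

```python
def rewrite(tree, old, new):
    """Replace every occurrence of the subgraph `old` with `new`."""
    if tree == old:
        return new
    if isinstance(tree, tuple):
        return tuple(rewrite(t, old, new) for t in tree)
    return tree

expr = ("*", ("+", 1, 1), 3)                  # the graph for (1+1)*3
print(rewrite(expr, ("+", 1, 1), 2))          # ('*', 2, 3)
print(rewrite(("*", 2, 3), 2, ("+", 1, 1)))   # ('*', ('+', 1, 1), 3)
```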

So, all the special manipulation rules for substitution of equals can be subsumed by those for equivalence. At this point, the Ghost of Instructors Past makes an appearance and proclaims that although they obey the same rules, they actually mean different things: (1+1) = 2 has arguments of arithmetic type, while logical equivalence only takes things of boolean value. (And then makes the concession that equivalence can deal with three-valued logics, and then the concession of infinite-valued logics, etc.) But from my formalist view, two things that obey exactly the same axioms are practically the same.

A simple example using equivalence instead of equality:

Assume (at least) the rules:

- Uniform substitution.
- Modus Ponens for '=>' (Given a=>b, and a, infer b)
- Syllogism for '=>' (Given a=>b, and b=>c, infer a=>c)
- Adjunction for '&' (Given a, and b, infer a&b)

And the definition

- the definition of '==': (a==b is defined as (a=>b) & (b=>a))

And the axioms

- Axiom 1: p & q => p
- Axiom 2: p & q => q

And the two constants

- R (the alleged right identity)
- L (the alleged left identity)

Proof that if an operator has a left identity (L) and a right identity (R), they are equivalent (L == R):

1: a == op(L, a) : given: Definition of left identity "L"
2: a == op(a, R) : given: Definition of right identity "R"
3: R == op(L, R) : 1, a/R
4: L == op(L, R) : 2, a/L
5: (R => op(L, R)) & (op(L, R) => R) : 3, def of ==
6: (L => op(L, R)) & (op(L, R) => L) : 4, def of ==
7: R => op(L, R) : 5, Axiom 1, Modus Ponens
8: L => op(L, R) : 6, Axiom 1, Modus Ponens
9: op(L, R) => R : 5, Axiom 2, Modus Ponens
10: op(L, R) => L : 6, Axiom 2, Modus Ponens
11: L => R : 8, 9, Syllogism
12: R => L : 7, 10, Syllogism
13: (L => R) & (R => L) : 11, 12, Adjunction
14: L == R : 13, def of ==
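For comparison, the same fact is a one-liner when stated with ordinary equality. A purely illustrative sketch in Lean 4 (the point of the proof above, of course, is that the '==' machinery alone suffices):

```lean
-- If op has left identity L and right identity R, then L = R:
-- L = op L R (since R is a right identity), and op L R = R
-- (since L is a left identity).
theorem left_right_id_eq {α : Type} (op : α → α → α) (L R : α)
    (hL : ∀ a, op L a = a) (hR : ∀ a, op a R = a) : L = R :=
  (hR L).symm.trans (hL R)
```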

Obviously this only works if you take the formalist view mentioned above and don't try to think of the "truth value" of a or b in a=>b.

**And now the real issue, Why should anyone care about the ideas above?**

Automated theorem provers generally have facilities for handling equivalence, and often separate facilities to handle equality.

If one takes the view above, one can cut down dramatically on the framework that one needs to implement, and on the actual number of axioms that one needs to process for a given problem domain, by using the machinery for equivalence to handle equality.
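As a toy illustration of that collapse (the engine, rule set, and names are all mine, and uniform substitution with pattern variables is omitted for brevity): "equality" axioms can be stored as ordinary directed graph transformations and fed to the same rewrite loop that would process equivalences.

```python
def rewrite(tree, old, new):
    """Replace every occurrence of the subgraph `old` with `new`."""
    if tree == old:
        return new
    if isinstance(tree, tuple):
        return tuple(rewrite(t, old, new) for t in tree)
    return tree

def apply_rules(tree, rules):
    """Exhaustively rewrite `tree` left-to-right with the given rules."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            new = rewrite(tree, lhs, rhs)
            if new != tree:
                tree, changed = new, True
    return tree

# Arithmetic "equalities" as one-directional rewrite rules:
rules = [(("+", 1, 1), 2),   # (1+1) = 2
         (("*", 2, 3), 6)]   # 2*3 = 6
print(apply_rules(("*", ("+", 1, 1), 3), rules))  # 6
```

One engine, one axiom format; whether a rule came from an "equality" or an "equivalence" never matters to the machinery.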

Feedback welcome: John.Halleck@utah.edu

© Copyright 2007 by John Halleck, All Rights Reserved. This page is http://www.cc.utah.edu/~nahaj/logic/musings/equality.html This page was last modified on November 23rd, 2011