Rich, et al:

Actually, one of the hardest parts of IF simplification -- in compilers, at least -- is that various subexpressions aren't allowed to be evaluated when the condition comes out the other way. With "and" and "or", you aren't even allowed to evaluate further than necessary to conclude T or F (e.g., in x != 0 and 1/x < 2, the division must never happen when x is 0). This isn't mere pedantry.

A good deal of modern computer hardware is devoted to "optimistic"/speculative execution, in which _both_ arms of the conditional are evaluated in parallel and are available for use when the boolean part finishes. Any side effect of the losing arm is reversed (or simply never stored), and hence thrown away. Speculative execution is coming under some criticism these days because it tends to waste _power_ & _energy_, and hence can increase the heat load that needs to be removed.

Compilers still tend to push the limit when hoisting stuff out of conditionals & loops. They are willing to compute array indices _in advance_ of checking for the end of the array, so modern compilers may have to allocate arrays with a certain amount of "margin" to keep accesses beyond the end of the array from causing a page fault or segmentation fault. IEEE NaNs were developed to keep floating-point units running without interruption, in the hope that these "numbers" would later be discarded.

I don't know that there is a complete & consistent answer for these "partial function" issues. As hardware continues to get more aggressive, the boundaries for speculation continue to get pushed back.

At 11:37 AM 3/18/2011, rcs@xmission.com wrote:
Here's a Logic & Programming challenge:
Simplify IF expressions.
The general problem is NP-complete, since nested IF expressions can easily encode Satisfiability problems. It might be even harder, since simplifying an IF expression down to True or False (when applicable) would require a short (or at least feasible) proof of equivalence, while SAT requires only an example (a satisfying assignment).
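For concreteness, here is a minimal Python sketch of the SAT encoding (the tuple representation and all helper names are my own, purely for illustration):

    from itertools import product

    # Represent If(A,B,C) as the tuple ('if', A, B, C); variables are
    # strings, constants are Python bools.
    def If(a, b, c): return ('if', a, b, c)
    def Or(a, b):  return If(a, True, b)     # a or b
    def And(a, b): return If(a, b, False)    # a and b
    def Not(a):    return If(a, False, True)

    def evaluate(e, env):
        if isinstance(e, bool): return e
        if isinstance(e, str):  return env[e]
        _, a, b, c = e
        return evaluate(b, env) if evaluate(a, env) else evaluate(c, env)

    # (x or y) and (~x or z) as nested Ifs: simplifying this to the
    # constant False would decide unsatisfiability of the formula.
    expr = And(Or('x', 'y'), Or(Not('x'), 'z'))
    print(any(evaluate(expr, dict(zip('xyz', bits)))
              for bits in product([True, False], repeat=3)))  # True: satisfiable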
Nevertheless, it's reasonable to ask for partial solutions. The "simpler-than" ordering includes a lot of beholder input, and also depends on the application at hand.
I'll offer a few "rules" to start the bidding.
If(A,B,C), with A a predicate and B and C of arbitrary types (usually the same), has the value B when A is true, and C when A is false. Other notations include A?B:C, Choose(A,B,C), and Boole(A,C,B), with Choose and Boole sometimes interpreted per individual bit.
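One consequence of the evaluation-order issue above: if If is an ordinary function in a strict language, its arms arrive already evaluated. A small Python sketch (names mine) of the usual workaround, passing the arms as thunks:

    # if_ receives the arms unevaluated, as zero-argument lambdas, so
    # only the chosen branch ever runs -- mirroring built-in "b if a else c".
    def if_(a, b_thunk, c_thunk):
        return b_thunk() if a else c_thunk()

    x = 0
    # Evaluating both arms eagerly would divide by zero here:
    print(if_(x != 0, lambda: 1 / x, lambda: float('inf')))  # -> inf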
If(True,B,C) -> B
If(False,B,C) -> C
If(A,B,B) -> B
If(A,B,~B) -> ~(A xor B) (B a predicate)
If(A,B,C) <-> If(~A,C,B)
If(A,B,C) <-> If(A, A&B, (~A)&C )
If(A,B,C) <-> B + If(A,0,C-B) (B and C numerical)
If(A,BvC,B&C) <-> Majority(A,B,C) (B and C predicates)
If(A<B,A,B) <-> Min(A,B)
If(A&B,C,D) <-> If(A,If(B,C,D),D)
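A toy rewriter applying a few of these rules, using the same tuple representation as the sketch above (one possible implementation, certainly not the only one):

    # Bottom-up application of If(True,B,C)->B, If(False,B,C)->C,
    # If(A,B,B)->B, and If(A,True,False)->A (the B=True case of the
    # xor rule).
    def simplify(e):
        if not (isinstance(e, tuple) and e and e[0] == 'if'):
            return e
        a, b, c = simplify(e[1]), simplify(e[2]), simplify(e[3])
        if a is True:  return b
        if a is False: return c
        if b == c:     return b
        if b is True and c is False: return a
        return ('if', a, b, c)

    print(simplify(('if', ('if', True, 'p', 'q'), 'x', 'x')))  # -> 'x'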
A similar challenge can be made for Loop expressions. We've adopted a convention for some binary operations of representing the variable-length version as a kind of summation, using a larger or boldface version of the operator, with upper & lower limits specifying a range and an expression for argument_k. Would it be useful to change to a generic Loop expression? Most of the simple manipulations for sums and products apply to Loops too.
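A minimal Python sketch of such a generic Loop (the name loop and the (op, lo, hi, term) signature are my own guesses at an interface):

    from functools import reduce
    import operator

    # loop(op, lo, hi, term) folds op over term(lo), ..., term(hi),
    # generalizing big-Sigma (op = +) and big-Pi (op = *).
    def loop(op, lo, hi, term):
        return reduce(op, (term(k) for k in range(lo, hi + 1)))

    print(loop(operator.add, 1, 10, lambda k: k))    # sum 1..10    -> 55
    print(loop(operator.mul, 1, 5,  lambda k: k))    # product 1..5 -> 120
    print(loop(max, 1, 10, lambda k: k*k % 7))       # any binary op -> 4

The usual sum/product manipulations (splitting the range, reindexing) go through whenever op is associative.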
Rich