Your understanding of English is just fine. "Order codes" are "instruction sets" ("ISA's" in modern computer architecturese). Unfortunately, we're going to have to take this particular discussion off-line, as it is very long and (to the non-interested) very obscure.

An emphasis on "performance" is wonderful, but it begs the question of what you're trying to optimize. The use of benchmarks is intended to provide the corpus that you use to evaluate the ISA. The hope is that the benchmarks will be representative of "real" programs, so that not only will the benchmarks run fast, so will the "real" programs. So the choice of benchmarks is critical to the end result of the optimization process.

Unfortunately, computer languages also follow Zipf's Law, which means that if you want improvement in the implementation of a very complex system with thousands of different algorithms & modes, you may have to consider hundreds of portions of the code -- i.e., if you sort the "hot spots" in the code in order of popularity, even the hundredth most popular has a measurable influence. So small benchmark sets cannot possibly provide good enough coverage to adequately represent "real" systems.

As I argued in a series of papers in the early 1990s, many benchmarks are themselves flawed, because they don't solve the actual problem they are intended to solve in the best possible way. Thus, any ISA optimization that utilizes such a benchmark will enshrine bad programming practice, because the "bad" programs will run faster than they "should", and the "good" programs will run slower than they "should".

I don't know what benchmarks are currently being used, so the following example isn't accurate, but it may give you an idea of what I'm talking about. Suppose that a benchmark uses a suboptimal "bubble sort" algorithm (O(n^2)) instead of a more efficient (O(n*log(n))) recursive algorithm.
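To make the hypothetical concrete, here is a sketch (my own illustration in Python, not taken from H/P or any actual benchmark suite) that counts the dominant operations in each algorithm: bubble sort's cost is almost entirely data shuffling (swaps), while a recursive mergesort's cost is dominated by call overhead and merging.

```python
import random

def bubble_sort(a):
    """O(n^2): cost dominated by element shuffling (swaps)."""
    a = list(a)
    swaps = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return a, swaps

def merge_sort(a):
    """O(n*log(n)): cost dominated by recursive calls and merging."""
    calls = 0
    def rec(xs):
        nonlocal calls
        calls += 1
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = rec(xs[:mid]), rec(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]
    return rec(list(a)), calls

random.seed(1)
data = [random.randrange(10**6) for _ in range(1000)]
sorted_b, swaps = bubble_sort(data)
sorted_m, calls = merge_sort(data)
assert sorted_b == sorted_m == sorted(data)
print(f"bubble sort: {swaps} swaps")          # roughly n^2/4 data moves
print(f"merge sort:  {calls} recursive calls") # exactly 2n-1 calls
```

An ISA tuned against the bubble-sort version would be rewarded for fast register-to-register shuffling of adjacent elements, and would see almost no pressure to make calls and recursion cheap -- exactly backwards for the better algorithm.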
Any ISA that is optimized utilizing this benchmark will place a greater emphasis on shuffling the data than on making sure that the recursion is fast. I think that ISA's should be designed to reward good programmers rather than penalize them.

There's an even longer discussion about what constitutes a "good" programming language. The human race has been programming for somewhere between 60 and 80 years (depending upon whether you want to go back to Church or von Neumann), whereas human language has probably been around for perhaps 100,000-500,000 years. I would claim that progress in programming languages still has some way to go, but most CS textbooks consider computer languages to be a dead issue (= solved problem), with C/C++/C#/Java as the culmination. I would imagine that this is another of the issues that TK was trying to touch on.

I don't know who corrupted the phrase "lies, damned lies, and statistics" into "lies, damned lies, and benchmarks", but it is probably even more true of benchmarks. Following Euler's/Gauss's Law (the name on a theorem is almost always wrong), this famous phrase is usually attributed to Twain or Disraeli, but in fact, Twain incorrectly attributed it to Disraeli: http://www.york.ac.uk/depts/maths/histstat/lies.htm

At 12:30 AM 6/5/2006, Joerg Arndt wrote:
Lacking background about order codes (what are they?) and not being a native English speaker, I may be missing the point here. Anyway:
* Tom Knight <tk@csail.mit.edu> [Jun 05. 2006 08:29]:
Some of us think that the Hennessy and Patterson book is the worst thing to happen to computer architecture ever. The obsession with performance,
Me: obsessed with performance. What exactly is _bad_ about it?
as measured by broken benchmarks
Hmmm... my copy of H/P "Computer Architecture: A Quantitative Approach" is the second edition. It is very critical of benchmarks; see pp. 21ff. Synthetic benchmarks and MIPS get a royal spanking at pp. 44ff.
written in languages which are incapable of rational expression made the
FORTRAN and C. However, these are compiled, at least today, to machine code that is very close to what a highly experienced coder can achieve with pure assembly. 99.9 percent of all programmers will not be able to improve on what the compiler emits.
Assuming you'd like to see Lisp here: wouldn't Lisp benchmarks just show how close the algorithms in the Lisp engine come to peak performance? And then, isn't Lisp written in C? :o)
entire field a bad joke.
I've seen enough art-for-art's-sake publications in computer science to be very happy about H/P's approach. I wish there were an up-to-date (wrt. current CPUs) edition.
The only thing that saved their collective butt was the improvement in silicon technology. Imagine what we could do with that technology and a decent architecture and language.
What should it look like?
CPU: for some reason, internal-RISC, external-CISC seems to have won the race. Note how spectacularly the latest approach (Itanium) has failed. Surely not for lack of budget.
Language: Ruby, Python, Caml, Scheme, ... All bad? Not naming nose-to-CPU languages here (such as C).
On Jun 4, 2006, at 1:39 PM, Henry Baker wrote:
Hennessy & Patterson's book "Computer Architecture: A Quantitative Approach" put the last nail in the coffin re pretty order codes: [...]
me clueless, jj