Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Path: utzoo!mnetor!seismo!ll-xn!ames!amdahl!nsc!grenley
From: grenley@nsc.nsc.com (George Grenley)
Newsgroups: comp.arch
Subject: Re: Benchmarking
Message-ID: <4320@nsc.nsc.com>
Date: Wed, 13-May-87 15:57:20 EDT
Article-I.D.: nsc.4320
Posted: Wed May 13 15:57:20 1987
Date-Received: Sat, 16-May-87 05:47:52 EDT
References: <4294@nsc.nsc.com> <28200036@ccvaxa> <272@astroatc.UUCP>
Reply-To: grenley@nsc.UUCP (George Grenley)
Organization: National Semiconductor, Sunnyvale
Lines: 50

It's nice to see some discussion on this issue.  I think, based on the
response so far, that we need to fork this discussion into two categories:
"CPU benchmarks", which we chip peddlers would like, and "system
benchmarks", which are what real users want.

In article <272@astroatc.UUCP> johnw@astroatc.UUCP (John F. Wardale) writes:
>Ok, so here I am, a new developer....  I need to buy a Unix box.
>(I'm developing code for some flavor of Unix.)  So I select about
>a dozen "likely" candidates; put my sources on each; then run the
>following on each:
>	time "touch types.h;make unix"
>or some other, similar or reasonable thing.  I compare the speeds
>and costs, and buy the one that's most effective for me.
>
>While this is the best approach for selecting a box to compile
>kernels on, it has the following problems:
>
>1) It's an expensive (time-consuming) exercise.

Why?  Assuming media portability, one ought to be able to run it in a
reasonable amount of time - either don't compile ALL of Unix, or just
start it running and go on about your other duties.

>Grenley is right!  A lot of people want/need a "system
>performance" benchmark!  I wish that benchmarks like Dhrystone
>*INCLUDED* the time-to-compile-link-etc. in the time for Dhrystones
>per second!  This would make (generally slow) super-optimizing
>compilers look less good, while improving the lightning-fast
>(direct-to-memory -- a la Turbo Pascal) compilers that may generate
>slightly poorer than average code.

The answer, of course, depends on whether you're a programmer or a user.
On most machines a program is compiled once and run many, many times, so
the efficiency of the compiled code matters more than the compile time.
Unless, of course, you are the programmer...

In general, though, if machine A runs a dumb compiler twice as fast as
machine B, it will run a smart compiler pretty close to twice as fast,
too.  So, for benchmarking, we can use any compiler.

My employer is justifiably proud of its new optimizing compiler, which
generates code about 20% faster than the old one - so system performance
goes up 20% on the same H/W.  A 20% performance bump for free!

But you have to ask yourself: is it `fair' to include compiler
improvements in CPU benchmarks?  Some say not; I disagree.  We are
interested in the H/W that produces the best overall performance.  After
all, the whole idea behind RISC is that such machines are easy to write
optimizing compilers for.  It wouldn't be fair not to let them use them.
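
To make the "include the compile time" suggestion concrete, here is
roughly what such a combined measurement might look like.  This is only
a sketch, assuming a standard Bourne shell, a cc driver, and a
stand-alone benchmark source called bench.c (the file name is my own
invention, not part of any standard suite):

	#! /bin/sh
	# Crude compile-and-run benchmark: time the build as well as the
	# run, so both compiler speed and code quality count in the score.
	time cc -O -o bench bench.c	# slow, smart compilers lose time here
	time ./bench			# ...and try to win it back here

A slow super-optimizing compiler gives up ground on the first line and
tries to make it up on the second, which is exactly the trade-off being
argued over above.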