Benchmarking, a topic dear to the hearts of drive makers, is fraught with trouble for reviewers. A benchmark (always one word) refers to software that measures the performance of other software or hardware products in a predictable and repeatable fashion. It certainly sounds straightforward enough.
What's the trouble, then? Benchmarks seem useful for comparison shopping, where drive X delivers data marginally faster than drive Y yet costs significantly less, offering a sound reason to buy drive X over drive Y. But while manufacturers typically publish benchmarks for their own drives, the usefulness of these time trials as a standard for comparison is undermined by variations in techniques and testing environments, not to mention a vested interest in scoring their own products favorably.
Benchmarks also can help determine the fitness of a drive for a particular task. Specifying too slow a drive for playback of high-performance video can put a crimp in a project. A benchmark program can also serve as a diagnostic tool. If a previously speedy drive begins to drag, perhaps its optics need cleaning.
Third-party benchmarks done with known and published test conditions are, of course, a much better basis for comparison. Trouble creeps in when we attempt to define a benchmark's "predictability" and "repeatability." Getting trustworthy benchmark data can present a few subtle and not-so-subtle problems.
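To make the repeatability question concrete, here is a minimal sketch (not any vendor's actual tool) of what a simple throughput benchmark does: time several sequential read passes over the same file and report the average rate along with the spread between the fastest and slowest pass. The file path, block size, and pass count are hypothetical placeholders, and a real drive benchmark would also have to defeat operating-system caching between passes, which this sketch does not attempt.

```python
import time

# Hypothetical test file on the drive being measured (placeholder path).
TEST_FILE = "D:/testdata.bin"
BLOCK_SIZE = 64 * 1024   # read in 64 KB chunks
PASSES = 5               # repeat to gauge repeatability

def read_pass(path):
    """Read the whole file once; return (bytes read, elapsed seconds)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break
            total += len(chunk)
    return total, time.perf_counter() - start

rates = []
for i in range(PASSES):
    nbytes, secs = read_pass(TEST_FILE)
    rate = nbytes / secs / 1024  # KB per second
    rates.append(rate)
    print(f"pass {i + 1}: {rate:.0f} KB/s")

mean = sum(rates) / len(rates)
spread = max(rates) - min(rates)
print(f"mean {mean:.0f} KB/s, spread {spread:.0f} KB/s")
```

A large spread between passes is exactly the kind of result that makes a published benchmark number hard to trust.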
WHAT TO MEASURE, AND HOW? GETTING DOWN AND DIRTY ON BENCHMARKING
Greg Smith, a Tucson, Arizona-based CD-ROM consultant and benchmark author, represents one useful source for understanding what's involved in getting accurate benchmark results. Smith's latest offering, CD Tach, was released in late 1996 through startup TestaCD Labs of San Jose, California. CD Tach's $29.95 list price is designed to make the software affordable for virtually all drive users, a claim not all benchmark-tool vendors can make.
CD Tach runs under Windows 95 and uses a dedicated CD-ROM that includes...