

"SPECviewperf parses command lines and data files, sets the rendering state, and converts data sets to a format that can be traversed using OpenGL rendering calls. It renders the data set for a pre-specified amount of time or number of frames with animation between frames. Finally, it outputs the results.In the section "What SPECviewperf® Does and Doesn't Do" the writers state:SPECviewperf reports performance in frames per second. Other information about the system under test -- all the rendering states, the time to build display lists (if applicable), and the data set used -- are also output in a standardized report."
"Nearly all benchmarks are designed for a specific purpose. Quite often, however, users broaden that purpose beyond what the benchmark is designed to do, or in some cases they assume the benchmark can't do something that it actually can do. The SPECviewperf® benchmark is no different: it has been both overextended and underappreciated, sometimes reducing its overall value to the user."Once again the results posted show little, if anything, to do with file fragmentation, and the graphs look quite scientific, but mean very little. It certainly doesn't show any clear difference between the products. I'm not sure what the professor means by
"Although minor points gains in many areas across the board each gained X.X point are in higher circles valuable scoring patterns and mean more sales to the resellers as corporate buying patterns are heavily based upon these results shown."It sounds like too much caffeine to me. I can't decode it at all.
"... measures performance based on the workload of a typical user, including functions such as wireframe modeling, shading, texturing, lighting, blending, inverse kinematics, object creation and manipulation, editing, scene creation, particle tracing, animation and rendering. The benchmark runs under both OpenGL and DX implementations of 3ds Max 9, and tests all the components that come into play when running the application."

"The amount of data and time needed to be collated and cross checked has been tremendous. At times when we undertake reviews here we just take for granted the overall performance that the products tested produce some remarkable results. Though on reflection we sit back and realise on just how much performance is obtained with no degeneration in system performance with the Diskeeper running on “the fly”!From this we can be sure that the background defrag was enabled, but no mention is made of an initial defrag. The "Software & Benchmarks Utilised" section lists the products used, and then states... With Diskeeper 2008 we had all the possible functionality enabled during both Windows XP and Server 2003 tests. The intelligent mode in which Diskeeper functions utilising ... Windows API's is spectacular.
"... Sincerely for the non-user's of this product, with this sort of performance increase to be gained from a simple effect software tool working away quietly in the background!"
"A mixed combination of some 10GBs software and benchmarks - for those who are aware; fragment the hard drive into a complete and utter mess once all of the above is installed. Each set of tests were completed on a fresh installation of all the above software, each time we low level formatted the hard drives to clear the MBR’s etc. Therefore the time to complete each set of results has been substantial as results are triple checked to ensure that no anomalies are seen. This combination mix of benchmarks and software would therefore ensure that the whole system I/O was fully stretched to its limits and ensure that the new software from Diskeeper would function as its fullest workload capacity.Again, no mention of any manual defrag to fix the "complete and utter mess". This is troubling from several aspects:
- The tests ignore the built-in Windows Disk Defragmenter, so they effectively compare a well-configured Diskeeper system against a badly configured Windows system, which makes DK look better than it otherwise would.
- Diskeeper's background defrag pauses while the tests run, but it may not have completed beforehand, so the results could be erratic and could understate DK's true abilities.
- Because each run started from scratch, the I-FAAST file placement system had no chance to work: it needs several days of data on how the system is used before it can place files correctly. Since I-FAAST accounts for 50% of the cost of DK2008 Pro Premier, this is a serious omission.
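For contrast, here is the kind of measurement that actually is bound by disk layout: a cold-cache sequential read of a large file, timed while the file is fragmented and again after a full defragmentation pass. This is only an illustrative sketch, and the file path in the usage comment is hypothetical, but unlike a frame-rate score this number moves when the on-disk layout changes:

```python
import os
import time

def sequential_read_mb_per_s(path, block_size=1024 * 1024):
    """Read a file start to finish and return throughput in MB/s.

    Run this on a cold cache (e.g. immediately after a reboot);
    otherwise the read is served from the Windows file cache and says
    nothing about how the file is laid out on disk.
    """
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(block_size):
            pass
    elapsed = time.perf_counter() - start
    return (size / (1024 * 1024)) / elapsed

# Hypothetical usage: measure the same large file while fragmented,
# then again after a manual defragmentation of the volume.
# print(sequential_read_mb_per_s(r"C:\testdata\big_file.bin"))
```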

A Closer Look at the DK2008 Review on 3DProfessor.org: Part 1 | Part 2 | Part 3 | How Fast is I-FAAST™? | Diskeeper 2008 Professional: Preliminary Results | First Impressions | Diskeeper 2007 Review | Benchmarks: DK2008 and DK2007