"SPECviewperf parses command lines and data files, sets the rendering state, and converts data sets to a format that can be traversed using OpenGL rendering calls. It renders the data set for a pre-specified amount of time or number of frames with animation between frames. Finally, it outputs the results."

In the section "What SPECviewperf® Does and Doesn't Do" the writers state:
SPECviewperf reports performance in frames per second. Other information about the system under test -- all the rendering states, the time to build display lists (if applicable), and the data set used -- are also output in a standardized report."
"Nearly all benchmarks are designed for a specific purpose. Quite often, however, users broaden that purpose beyond what the benchmark is designed to do, or in some cases they assume the benchmark can't do something that it actually can do. The SPECviewperf® benchmark is no different: it has been both overextended and underappreciated, sometimes reducing its overall value to the user."

Once again the results posted show little, if anything, to do with file fragmentation, and the graphs look quite scientific but mean very little. They certainly don't show any clear difference between the products. I'm not sure what the professor means by
"Although minor points gains in many areas across the board each gained X.X point are in higher circles valuable scoring patterns and mean more sales to the resellers as corporate buying patterns are heavily based upon these results shown."

It sounds like too much caffeine to me. I can't decode it at all.
Also, the results are shown "Without Fragmentation" in Windows, but I think the label meant to read "Without Defragmentation", since there is no mention of using the built-in Windows defragger in competition with Diskeeper, and the previous test refers to tests "... without any hard drive defragmentation tool enabled".
In the other section the professor lists the elapsed times measured when running "SPECapc for 3ds Max™ 9", which
"... measures performance based on the workload of a typical user, including functions such as wireframe modeling, shading, texturing, lighting, blending, inverse kinematics, object creation and manipulation, editing, scene creation, particle tracing, animation and rendering. The benchmark runs under both OpenGL and DX implementations of 3ds Max 9, and tests all the components that come into play when running the application."

The 3D Professor review states these results but doesn't graph or tabulate them. I did the chart above in Excel, and calculated a 3.4% time saving with DK2007 and a 6.4% time saving with DK2008. The professor's conclusion is "A substantial improvements [sic] across the board, biggest improvement overall with Diskeeper 2008 installed on a fresh system."
As I understand it, the professor is saying that compared to a fragmented system, DK 2008's 6% improvement is substantial. What is missing from this analysis is the time the tests would have taken if the fresh system had been defragmented first. So far this is only an argument for using a defragger, not for using a specific defragger.
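For clarity, this is how the time-saving percentages are calculated. Since the review does not publish the raw elapsed times, the baseline figure below is a hypothetical placeholder chosen only to reproduce the 3.4% and 6.4% savings:

```python
def time_saving(baseline_s: float, treated_s: float) -> float:
    """Percent reduction in elapsed time relative to the baseline."""
    return (baseline_s - treated_s) / baseline_s * 100

# Hypothetical SPECapc elapsed times in seconds (not the review's
# unpublished figures), scaled to match the reported savings:
baseline = 1000.0                  # fragmented system, no defragger
dk2007 = baseline * (1 - 0.034)    # reproduces the 3.4% saving
dk2008 = baseline * (1 - 0.064)    # reproduces the 6.4% saving

print(f"DK2007 saving: {time_saving(baseline, dk2007):.1f}%")  # → 3.4%
print(f"DK2008 saving: {time_saving(baseline, dk2008):.1f}%")  # → 6.4%
```

The missing measurement is a third run of the same formula with `treated_s` taken from a freshly defragmented system, which would show how much of the saving any defragger would have delivered.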
I am also extremely suspicious of the results for the Windows 2003 Server. They are exactly the same as the Windows XP results, down to the last second. Each machine has a different processor, different amounts of RAM, and different hard drive speeds. The only aspect in common is the "ATI FireGL V3600 Professional Graphic's [sic] Card". If the results are the same because the same graphics card is used, then why would a defrag product make the graphics card run 6.4% faster? I assume the results obtained are different from those published, and this is a cut and paste error.
Another point that needs clarification: was Diskeeper simply installed and enabled, but the manual defrag not run? The conclusion wording is vague (bold added):
"The amount of data and time needed to be collated and cross checked has been tremendous. At times when we undertake reviews here we just take for granted the overall performance that the products tested produce some remarkable results. Though on reflection we sit back and realise on just how much performance is obtained with no degeneration in system performance with the Diskeeper running on “the fly”!"

From this we can be sure that the background defrag was enabled, but no mention is made of an initial defrag. The "Software & Benchmarks Utilised" section lists the products used, and then states:
... With Diskeeper 2008 we had all the possible functionality enabled during both Windows XP and Server 2003 tests. The intelligent mode in which Diskeeper functions utilising ... Windows API's is spectacular.
... Sincerely for the non-user's of this product, with this sort of performance increase to be gained from a simple effect software tool working away quietly in the background!
"A mixed combination of some 10GBs software and benchmarks - for those who are aware; fragment the hard drive into a complete and utter mess once all of the above is installed. Each set of tests were completed on a fresh installation of all the above software, each time we low level formatted the hard drives to clear the MBR’s etc. Therefore the time to complete each set of results has been substantial as results are triple checked to ensure that no anomalies are seen. This combination mix of benchmarks and software would therefore ensure that the whole system I/O was fully stretched to its limits and ensure that the new software from Diskeeper would function as its fullest workload capacity."

Again, no mention of any manual defrag to fix the "complete and utter mess". This is troubling in several respects:
- The results ignore the built-in Windows Disk Defragmenter, making DK appear better, because the tests compare a well-configured Diskeeper system with a badly configured Windows system.
- The background defrag would stop during the tests, but it may not have completed yet, making the results erratic, and prejudicing DK's true abilities.
- By starting from scratch each time, the I-FAAST file placement system is unable to work, because it needs several days to get data on the usage of the system in order to place the files correctly. Since I-FAAST constitutes 50% of the cost of DK2008 Pro Premier, this is a serious omission.
Then there is an interesting snippet in the latest white paper by Joe Kinsella, "The Impact of Disk Fragmentation", published on diskeeper.com. Figure 5 from that paper is shown below.
According to these figures, a fresh install of Windows XP followed by SP2 would create roughly 636 fragmented files with 3582 fragments. That doesn't include the 75 critical downloads of 185MB that follow SP2. The table above implies that this would generate several hundred more fragmented files.
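As a sanity check on the white paper's numbers, the average severity of that fragmentation works out as follows, using only the two figures quoted above:

```python
# Figures quoted from Figure 5 of "The Impact of Disk Fragmentation":
# a fresh XP + SP2 install reportedly leaves this much fragmentation.
fragmented_files = 636
fragments = 3582

avg = fragments / fragmented_files
print(f"{avg:.1f} fragments per fragmented file")  # → 5.6
```

So the affected files are not merely split in two; on average each is broken into more than five pieces, even before the post-SP2 critical updates are applied.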
For a fair test of Windows XP all these files would need to be defragmented, along with the 10GB of other software installed. Ideally this would be done by the built-in Windows Disk Defragmenter (WDD). After this is done the system could be backed up using Acronis True Image or DriveImage XML, to speed up reconfiguring the machines each time a fresh install was performed. The professor's review does not mention any details of this sort, which is worrying. Hopefully this is an oversight and the comparisons were fair after all. I have made several attempts to contact Professor Brian Robinson, and even searched for him on Google, without success.
Another look at the chart above could explain the 3.4% and 6.4% time savings as follows: DK2008 does appear to have a more effective background defrag capability than DK2007, and this is touted as “‘defragmentation intelligence’ enhancements” in the press release and the review. In that case the only effective conclusion we can reach is that DK2008 is about 3.1% more effective than DK2007: a 6.4% saving leaves 93.6% of the baseline time, against DK2007's 96.6%, and that gap is a further 3.1% reduction. I'm not sure that this was the professor's intention, but it is the only reliable conclusion I can make, based on the data he presents in these tests.
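The 3.1% figure above falls out of the two published savings like this (no assumptions beyond the 3.4% and 6.4% numbers themselves):

```python
# Fraction of the fragmented-baseline time remaining under each version:
dk2007_time = 1 - 0.034   # 0.966 of baseline (3.4% saving)
dk2008_time = 1 - 0.064   # 0.936 of baseline (6.4% saving)

# DK2008's further reduction relative to DK2007, not to the baseline:
relative = (dk2007_time - dk2008_time) / dk2007_time * 100
print(f"{relative:.1f}%")  # → 3.1%
```

Note that the naive subtraction 6.4% − 3.4% = 3.0% is slightly off, because the second saving must be measured against DK2007's already-shortened times.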