The only "extra" feature is this version is I-FAAST, and it costs an extra $50 per workstation, more than the cost of any other defrag utility. According to the web site
"Diskeeper's I-FAAST 2.0 (Intelligent File Access Acceleration Sequencing Technology) accelerates file access times to meet the heavy workloads of file-intensive applications. Utilizing a specially formulated technology, I-FAAST closely monitors file usage and organizes the most commonly accessed files for the fastest possible access, boosting file access and creation to speeds above and beyond the capabilities of your system when it was new, up to 80% faster."

I found an interesting 2005 document entitled "Benchmarking Diskeeper’s I-FAAST" which explains that Diskeeper monitors all files being opened and modified, and uses this statistical information to work out which files are more important.
"I-FAAST was designed specifically for real world environments where data is a heterogeneous mix of often used or “Hot” files, and stale or duplicate and unused data. In real world scenarios, it is also true that data goes through “hot and cold” phases, and that a significant amount of data is, or must be kept, for regulatory or archival purposes; often intermixed on active storage media. All these and other common scenarios are integrated into I-FAAST’s heuristic algorithm."For that reason it takes a few days for the I-FAAST stats to be gathered before any noticeable change happens to your data. So far I have noticed that some of my files have been moved to the slowest part of the volume. Apart from that not much has changed.
In order to accelerate the effect, I created a utility to load several applications that I use on a regular basis. The utility is run at startup, and over the course of several minutes loads Word, Excel, Access, Paint Shop Pro, Outlook, VB6, Firefox, Access97, etc. etc. The timing between each one is set to around 30 seconds, so each app gets a chance to load before the next application starts loading. I didn't want to overwork my poor machine!
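The utility itself is nothing sophisticated. A rough sketch of the idea in Python (not the actual program; the paths are examples only and will differ on other machines) would be:

```python
import subprocess
import time

# Sketch of the app "warm-up" launcher (illustration only, not the real utility).
# The paths are examples - adjust them to whatever is installed locally.
APPS = [
    r"C:\Program Files\Office97\Office\MSACCESS.EXE",
    r"C:\Program Files\OfficeXP\Office10\WINWORD.EXE",
    r"C:\Program Files\OfficeXP\Office10\EXCEL.EXE",
    r"C:\Program Files\OfficeXP\Office10\OUTLOOK.EXE",
    r"C:\Program Files\Mozilla Firefox\firefox.exe",
]

DELAY = 30  # seconds between launches, so each app finishes loading first

for exe in APPS:
    try:
        subprocess.Popen([exe])      # start the app and carry on without waiting
    except OSError as err:
        print(f"Could not start {exe}: {err}")
    time.sleep(DELAY)
```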
To get DK to learn about these common apps, I set the Task Scheduler to do a reboot every 30 minutes, and left my laptop doing reboots for the next 2 nights, effectively running each app around 30 times, in addition to normal use during the day.
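The scheduled reboot is just an ordinary Task Scheduler entry. Something along these lines would set it up from a script (a sketch only; the task name is made up, and the exact schtasks/shutdown switches can vary between Windows versions):

```python
import subprocess

# Sketch: register a task that reboots the machine every 30 minutes.
# "IFAAST-RebootTest" is a made-up task name; switches may differ by Windows version.
subprocess.run([
    "schtasks", "/Create",
    "/TN", "IFAAST-RebootTest",
    "/SC", "MINUTE", "/MO", "30",
    "/TR", "shutdown /r /t 0",
], check=True)
```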
The document about I-FAAST also refers to a freeware utility called readfile, which can measure the speed with which a given file can be opened/read. I have been using it to measure the effect of the I-FAAST placement on my system. So far the results have been inconclusive. Readfile doesn't work in a BartPE environment, so I have had to use Process Explorer from Sysinternals to make sure that no extraneous tasks are running during the measurement process. This is easier said than done.
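Readfile essentially times how long it takes to open and read each file in turn. A rough Python equivalent of the measurement (not the readfile tool itself, just an illustration) looks like this; the timings only mean anything on a cold cache, which is why every run starts from a fresh boot:

```python
import os
import time

def timed_read(path, chunk_size=1024 * 1024):
    """Return the seconds taken to open a file and read it to the end."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass
    return time.perf_counter() - start

def folder_read_times(folder):
    """Time every file directly inside a folder, slowest first."""
    results = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            results.append((timed_read(path), name))
    return sorted(results, reverse=True)

if __name__ == "__main__":
    # Example: one of the folders measured below.
    for seconds, name in folder_read_times(r"C:\Program Files\Office97\Office"):
        print(f"{seconds:8.3f}s  {name}")
```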
Here is the "before" situation, with dozens of programs and processes running.
I shut down as many as possible before using readfile. I had to experiment a few times before getting reliable results. In each case I made sure the laptop booted up from scratch, and that none of the application data being tested was loaded on startup.
Here are my preliminary results (lower values are faster). The column "Access97" refers to the folder
"C:\Program Files\Office97\Office"
which contains the installation of Access97 that I use on a daily basis. The column "OfficeXP" refers to the folder
"C:\Program Files\OfficeXP\Office10"
which contains the bulk of the EXE and DLL files for Word, Excel, Outlook, PowerPoint and Access that are part of Microsoft Office 2002/XP. I usually use only the first three applications.
The column labelled "Data" refers to the folder
"C:\SQL\MSSQL\Data"
This contains 8.19GB of Microsoft SQL Server 2000 data that I use regularly. At the time I made the first DK2008 measurement, one of the files had 2 fragments that DK2008 decided not to defragment, for its own reasons. This was confirmed to me by a senior support person at Diskeeper Europe.
- The blue results are for Diskeeper 2008 Pro Premier, with I-FAAST, automatic defragmentation, directory consolidation and statistics gathering all enabled.
- The purple results were generated after I ran JkDefrag 3.28 several times until it had moved and defragmented all the files on the drive. The total read time is 5.6% faster than the DK2008 result.
- The yellow results were generated after I ran PerfectDisk 8 after the JkDefrag changes. I did a boot time defrag first, and then a SmartPlacement defrag. The resulting total read time is 14.3% faster than the DK2008 result.
This is DK2008's drive map. The large red file is the SQL data file in two segments, as already mentioned.
After JkDefrag's complete defrag, all the files have been reorganised with the directories first, then the tell-tale 1% free space at the beginning of the drive, then the important files, then another 1% space, then the "spacehogs" at the end.
This is the disk arrangement after PerfectDisk 8 did its boot time defrag and SmartPlacement. Notice how the bulk of the free space is at the end, not spread all over the volume.
Each program has its own distinctive pattern. After running DK2008's defrag again, it moved some files to the end of the drive, as shown below.
Further testing will tell if more I-FAAST history and analysis can make a significant performance difference or not.
Update Tuesday 6th Nov: I was wondering whether a readfile scan of the entire directory might prejudice the DK2008 results, because this would include "slow" files as well as "fast" ones, so I totalled up the read times for recently accessed EXE and DLL files only, i.e. the ones I-FAAST is supposed to be optimising. Readfile has a problem calculating times for files larger than 4GB, so the "Data" directory is excluded.
The graph shown above provides the readfile times for EXE and DLL files only, using the same measurements as the first graph. The DK2008 data fared worse, not better: JkDefrag proved to be 13.4% faster, and PerfectDisk 8 was 20.1% faster. I will wait a few more days to see if the file placement improves, and then run the test again for DK2008. I cannot explain the difference in performance, because file placement by itself should not account for more than 15% at the most. Since we are dealing with single contiguous directories, the position of the directory shouldn't make such a difference either.
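For what it is worth, the EXE/DLL-only totals amount to something like the sketch below (not the actual readfile-based procedure; the 14-day window for "recently accessed" is an assumption, and it relies on NTFS last-access timestamps being enabled):

```python
import os
import time

RECENT_DAYS = 14                      # assumed cut-off for "recently accessed"
CUTOFF = time.time() - RECENT_DAYS * 86400

def timed_read(path, chunk_size=1024 * 1024):
    """Seconds taken to open and read a file to the end (cold cache assumed)."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass
    return time.perf_counter() - start

def total_recent_read_time(folder):
    """Sum read times of recently accessed .exe/.dll files under a folder."""
    total = 0.0
    for root, _dirs, files in os.walk(folder):
        for name in files:
            if not name.lower().endswith((".exe", ".dll")):
                continue
            path = os.path.join(root, name)
            if os.path.getatime(path) >= CUTOFF:  # relies on NTFS last-access time
                total += timed_read(path)
    return total

if __name__ == "__main__":
    for folder in (r"C:\Program Files\Office97\Office",
                   r"C:\Program Files\OfficeXP\Office10"):
        print(f"{total_recent_read_time(folder):8.2f}s  {folder}")
```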
Craig Jensen's document on I-FAAST states:
"This new technology adds even greater performance gains than previously available through defragmentation alone, improving file access and creation up to an additional 80%, with typical added gains between 10-20%."My data only refers to read access of defragmented files, so I still can't explain what is happening, and why I-FAAST read access would be 20% slower, not faster. More testing is needed.
2 comments:
Insightful blog! I thought I might share a couple of experiences having used DK2007 and DK2008:
1. I-FAAST is painfully slow to get organised properly - 2-3 weeks minimum on a machine I use 3-4 hours daily. Once there, I found it to do a decent job - particularly noticeable in my boot time on a slower laptop.
2. One significant performance gain not measured here is multitasking performance: Reading a series of files only measures read speed, but when many files are read together (such as when starting all the bloatware we love and hate) there is also latency involved, which can account for more than read speed. Consolidating frequently accessed files reduces latency, which becomes most apparent in real-world apps. Your script would be a neat way to test this, but I would suggest starting all the apps together without the 30s delay and timing this - at least once before and after enabling I-FAAST.
3. Use DiskView from http://technet.microsoft.com/en-us/sysinternals/bb896650.aspx to get an insight into which files are actually where, and then redo the readfile experiment on files that really are at the beginning and end of the disk to see the impact on read speed. In my case DK does do a reasonable job, but as mentioned it takes forever to get there.
Cheers,
Mike
Excuse me for a kind of "necroposting", but this is the only well-argued comparison I have found on the Net. Thank you very much!
But I am still curious about the efficiency of these algorithms on a generic multi-purpose PC. For example, a freelance web developer's laptop with 20, 30 or 100 gigs of lossless music on board: the kind of data that absolutely should not be placed in the fast areas, but is still accessed frequently. Such data can be relocated to the far end of the disk with a defragmenter like MyDefrag.
It would be very interesting to see what these algorithms would do in such a situation.