File fragmentation causes performance problems when reading files, while free space fragmentation causes performance problems when creating and extending files.
Actually, free space fragmentation causes a performance hit only if the drive is so full that newly created files won't fit into any existing contiguous segment of free disk space -- IOW, they must be fragmented to fit into the free spaces between other files. The simplest way to avoid this is to use bigger partitions to begin with. Thus, replacing the single partition on a 750 GB HD with a 300 GB one is a bad idea.
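To make that condition concrete, here's a toy first-fit allocator sketch (purely illustrative -- none of this is real HFS+ code): a new file ends up fragmented only when no single free extent is big enough to hold it, which is exactly the near-full-disk case.

```python
# Toy illustration: a file fragments only when no single free
# extent can hold it. Extents are (offset, length) block ranges.

def allocate(free_extents, size):
    """Return the list of extents a new file of `size` blocks occupies."""
    # First try to place the whole file in one contiguous free extent.
    for i, (off, length) in enumerate(free_extents):
        if length >= size:
            free_extents[i] = (off + size, length - size)
            return [(off, size)]           # one extent: not fragmented
    # Otherwise split the file across several free extents.
    pieces, remaining = [], size
    for i, (off, length) in enumerate(free_extents):
        take = min(length, remaining)
        pieces.append((off, take))
        free_extents[i] = (off + take, length - take)
        remaining -= take
        if remaining == 0:
            break
    return pieces                           # several extents: fragmented

# Plenty of contiguous free space: the file lands in one extent.
print(allocate([(0, 100), (500, 400)], 300))   # [(500, 300)]

# Nearly full disk, only scattered small holes: the file must fragment.
print(allocate([(0, 50), (200, 40), (900, 30)], 100))
# [(0, 50), (200, 40), (900, 10)]
```

A bigger partition simply means more and larger free extents, so the first (non-fragmenting) branch almost always wins.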
HFS+ is not very good at keeping free space contiguous, which can, in turn, lead to large files becoming very fragmented, and can also cause problems for the virtual memory subsystem on Mac OS X.
Actually, OS X intentionally avoids keeping free space completely contiguous on HFS+ volumes to avoid prematurely filling small areas of free space. Together with delayed allocation (introduced in 10.2), this substantially reduces small file fragmentation before it occurs. Also, starting somewhere in 10.4, the VM subsystem was redesigned not to require contiguous free space for its files, so this part of the manual is badly out of date, & in any event it would not apply to the O.P.'s Snow Leopard system.
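The benefit of delayed allocation can be shown with a toy contrast (again, an illustrative sketch, not Apple's implementation): allocating on every small append leaves one extent per write once other files grab the adjacent space, while buffering the writes & allocating once the total size is known yields a single extent.

```python
# Toy contrast between eager and delayed allocation (illustrative only).

def eager_alloc(writes):
    """Allocate blocks as each small write arrives. If other files
    allocate in between, each write becomes its own extent."""
    extents, cursor = [], 0
    for n in writes:
        extents.append((cursor, n))
        cursor += n + 2   # simulate interleaved allocations by other files
    return extents

def delayed_alloc(writes):
    """Buffer the writes in memory, then allocate one contiguous
    extent once the file's final size is known."""
    return [(0, sum(writes))]

appends = [4, 4, 4, 4]                 # four small appends to one file
print(len(eager_alloc(appends)))       # 4 extents -> fragmented
print(len(delayed_alloc(appends)))     # 1 extent  -> contiguous
```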
Whilst HFS+ is good at keeping individual files defragmented, mechanisms like Software Update may result in files that are components of the same piece of software being scattered across the disk, leading to increased start-up times, both for Mac OS X itself and for applications software. This is a form of fragmentation that is typically overlooked.
What the makers of iDefrag don't want you to think about (because after all, they want you to buy their software) is that processes share many of the same component files, so it is impossible to group all of them optimally for every piece of software. Moreover, adaptive hot file clustering constantly optimizes the location of the most critical of these files, & aggressive read-ahead/write-behind caching & the other techniques mentioned above, all built into the system, greatly reduce the theoretical performance-robbing effects of file fragmentation.
Don't get me wrong: iDefrag has its place, especially if you tend to fill up drives with large & often-changing files. But for the average user, it offers very little in terms of real, sustainable performance improvement, & it is highly unlikely that any time saved will offset the downtime required to run it, especially if they avoid filling the startup volume to near its capacity.
Obviously, the larger the volume, the easier this is to do, which is why my advice remains not to partition large drives into multiple volumes without good reason.