It's a yes-and-no answer. Defragmenting is useful in certain circumstances, but it's less of an issue than it was with FAT or plain HFS. All filesystems fragment, but newer ones are more resistant to fragmenting badly.
Speaking for Mac OS X specifically, HFS+ does a decent job of keeping files from becoming fragmented compared to older filesystems, but it still happens, just not on the same scale. Since 10.3 (Panther), the OS also defragments "small" files (20 MB or smaller) on the fly.
Fragmentation still happens, and you can see performance drop because of it, especially in video-editing systems or any workflow that needs to read or write large files to disk quickly. For the standard user it's a near non-issue.
The most popular options for defragmenting a hard drive under OS X that I've used or run across are:
Cloning the hard drive to another drive and back. This can be done with Carbon Copy Cloner or SuperDuper and requires a spare hard drive. If it's done as part of a backup routine the time hit may not be terrible, and it's free to do it this way.
iDefrag, Drive Genius, and a handful of other utilities will all defragment your hard drive as well. Personally, I prefer iDefrag.
Yes, they are. People will give you lies like "UNIX filesystems never fragment." They are liars, and you should listen to me instead. Files like the SQLite databases used by Firefox will quickly fragment, since they receive frequent small writes as you use the browser. At one point my profile had an SQLite database with over three thousand fragments.
These SQLite databases contain the browser history and are used in places that suggest text strings to you, like URL completion and form autofill. If they are fragmented, you will suffer. Some of this may be masked by OS X's decision to implement POSIX fsync() as a no-op (allowed by the standard, but not very nice). So it's not like you need to edit video to trigger bad conditions; a large history database that properly calls fsync() on OS X is enough.
On Ubuntu you can check how fragmented a file is with the filefrag utility, from the e2fsprogs package. It requires root permissions, but it shows you how many non-contiguous regions a file has. As the package name suggests, it is not ext4-aware (yet). Hopefully, ext4's delayed allocation and extent support will reduce fragmentation in the wild.
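For example, a minimal check might look like this (the Firefox profile path is just an illustration; substitute your own file):

    # Show the non-contiguous extents in a file;
    # -v lists each extent individually.
    sudo filefrag -v ~/.mozilla/firefox/*/places.sqlite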
It depends on what filesystem you are using and, most importantly, on how you are using it. Most modern filesystems are less prone to fragmentation, but defragmentation is still useful.
You can use xfs_fsr to defragment XFS filesystems. It has some limitations, but it's better than nothing.
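A minimal sketch of how that might be run (the file path and device name here are placeholders, not from the original answer):

    # Defragment a single file in place:
    sudo xfs_fsr -v /path/to/large/file

    # Or sweep a whole mounted XFS filesystem, stopping after 600 seconds:
    sudo xfs_fsr -t 600 /dev/sdb1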
This is a religious issue. IMO, fragmentation is only an issue for specific workloads, and it hasn't been terribly relevant since NT4.
Exceptions that I've encountered are situations where you have lots of small writes mixed with large writes. One example that comes to mind is a busy Windows file server with users doing stupid things like running active PST files on the server. Another would be a Linux POP3 server with mailboxes in Maildir format.
HFS+ does fragment, all filesystems do. However, it doesn't appear to suffer from it, at least not to the extent that NTFS / FAT32 do.
A caveat: I stopped noticing performance drops from file fragmentation about five years ago, at least as far as local files were concerned. SATA bandwidth and a 7200 RPM HDD make the issue pretty much unnoticeable, IMO.
It is a chapter from the Mac OS X Internals book. It describes the built-in measures Mac OS X takes against fragmentation, presents a fragmentation-checking tool, and includes an analysis of fragmentation on five Apple computers.
As far as I know, Unix file systems like ext or HFS don't suffer from fragmentation like FAT or NTFS do, at least not in the same order of magnitude.
Read more about it here, and also check this Apple support page about disk maintenance.
Mac OS X already defragments files smaller than 20 MB. See this article:
"Panther has automatic defragging?"
ext2/ext3 can have tremendous fragmentation of free space. It can be checked with the e2freefrag utility.
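For instance (the partition name is a placeholder; point it at your own ext2/ext3 device):

    # Summarize free-space fragmentation by extent size:
    sudo e2freefrag /dev/sda1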