The default allocation unit size recommended when formatting a drive in our current setup is 4096 bytes. I understand the basic pros and cons of larger and smaller sizes (performance boost vs. space efficiency), but it seems the strengths of a solid-state drive (seek times massively lower than hard disks) may create a situation where a much smaller allocation size is not detrimental.
Were this the case, it would at least partially help to offset the main disadvantage of SSDs (massively higher price per GB).
Is there a way to determine the 'cost' of smaller allocation sizes specifically related to seek times? Or are there any studies or articles recommending a change from the default based on this newer tech?
(Assume a typical mix of file sizes: program files, OS files, data, MP3s, text files, etc.)
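For concreteness, this is the kind of back-of-the-envelope estimate I have in mind; the file-size mix below is made up purely for illustration, not measured from a real system:

```python
import math

# Hypothetical file-size mix in bytes -- purely illustrative, not measured data.
file_sizes = ([4_000_000_000] * 2        # a couple of very large files (VM images, video)
              + [5_000_000] * 2_000      # MP3s, photos
              + [50_000] * 50_000        # program files, documents
              + [2_000] * 100_000)       # small text and config files

def cost(cluster_size):
    """Slack space wasted, plus the worst-case number of clusters that must be
    touched to read every file once (one request per cluster if fully fragmented)."""
    clusters = [math.ceil(size / cluster_size) for size in file_sizes]
    slack = sum(c * cluster_size - size for c, size in zip(clusters, file_sizes))
    return slack, sum(clusters)

for cs in (512, 1024, 4096, 65536):
    slack, n_clusters = cost(cs)
    print(f"{cs:>6} B clusters: {slack / 2**20:8.1f} MiB slack, "
          f"{n_clusters:,} clusters touched in the worst case")
```

What I cannot estimate this way is how much each extra cluster-sized request actually costs on an SSD, which is the part I am asking about.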
If you are looking for a good article, I recommend
The Hows and Whys of SSDs by Robert Hallock
I linked to page 2, which contains the part that discusses clustering and block size.
I definitely agree with Hallock ("The Hows and Whys of SSDs") when it comes to increased performance as the cluster size approaches the block size. In that situation, you get a minimal number of block reads and minimal overhead per cluster request.
Having a cluster size smaller than the block size is not necessarily a huge performance hit, but it typically entails more overhead, since the SSD reads the whole block and discards the portion that is not part of the requested cluster. This is even worse if the drive is fragmented and adjacent clusters within the same block belong to different files.
In general, increasing the cluster size up to (but not beyond) the block size of the SSD will be beneficial. The trade-off, of course, is that you start to lose space to slack, and as you mention, the $/GB of SSDs is much higher than that of magnetic media.
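To make the overhead concrete, here is a minimal sketch assuming a hypothetical 8 KiB block size (real NAND page and block sizes vary by drive); it simply counts how much data the drive has to read and then throw away to serve one aligned cluster-sized request:

```python
import math

BLOCK_SIZE = 8192  # hypothetical NAND block size, for illustration only

def read_overhead(cluster_size, block_size=BLOCK_SIZE):
    """Blocks the SSD must read to serve one aligned cluster-sized request,
    and how many of those bytes are read only to be discarded."""
    blocks_read = math.ceil(cluster_size / block_size)
    bytes_read = blocks_read * block_size
    discarded = bytes_read - cluster_size
    return blocks_read, discarded

for cs in (512, 2048, 4096, 8192):
    blocks, waste = read_overhead(cs)
    print(f"{cs:>5} B cluster / {BLOCK_SIZE} B block: "
          f"{blocks} block read(s), {waste} bytes read and discarded")
```

The wasted bytes go to zero only when the cluster size matches the block size, which is the effect described above.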
Depending on how much money you have, you can either pay for the extra capacity and use the larger clusters, or stick with smaller clusters to conserve space.
Hope this helped :)
This is only one anecdote, but it suggests no real-world advantage in using a cluster size other than the default 4 KB. You may have more I/O requests to process, but that is going to be negligible in the grand scheme of things.
Both of your answers (and Hallock's) seem to contradict what write amplification reduction is meant to accomplish, namely reducing unnecessary NAND wear and tear. Increasing the cluster size means more wasted writes to disk. Defragmenting an SSD, you say? That is one of the deadly sins of SSD usage.