I guess this is an old question, but all the answers I found on the net were all over the place, so here goes...
I need a dedicated volume for tens of thousands of ~10MiB and ~50MiB files. I suspect that a large NTFS cluster size will speed up the file server, but will a large cluster size do any harm, like slowing things down or wasting a lot of space?
Should I use 64K clusters, 32K clusters, or just the plain default (4K)?
To my knowledge, NTFS itself does not have any performance problems associated with larger cluster sizes.
If you're really looking to eke out all the speed you can, I'd recommend simulation and benchmarking. How your application reads data (4K blocks, 8K blocks, etc.) is going to make a difference, as is the cache hit pattern on the NT cache and the underlying RAID cache. The disk / storage hardware (RAID layout, SAN configuration, etc.) is going to make a difference, too.
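For a quick first pass at that kind of benchmarking, even a crude timing loop tells you something. Here's a minimal Python sketch (the path and block sizes are placeholders, not anything from your setup) that reads one file sequentially with several block sizes:

```python
import os
import time

# Hypothetical test file on the volume under test; adjust the path.
TEST_FILE = r"D:\bench\sample.bin"

def time_sequential_read(path, block_size):
    """Read the whole file in block_size chunks; return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:  # raw, unbuffered file object
        while f.read(block_size):
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    size = os.path.getsize(TEST_FILE)
    for block in (4 * 1024, 8 * 1024, 64 * 1024, 1024 * 1024):
        elapsed = time_sequential_read(TEST_FILE, block)
        print(f"block {block // 1024:>5} KiB: {size / elapsed / 2**20:7.1f} MiB/s")
```

Bear in mind that repeated runs will largely be served from the NT cache rather than the disk, so use files much larger than RAM or clear the cache between runs if the storage itself is what you want to measure.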
Ultimately, the behavior of the application is going to be the biggest determinant of performance. You see "planning guides" for various applications (Exchange, SQL Server, etc.) out on the 'net. All of the serious ones are based on real-world benchmarking with load simulation. You can write "rules of thumb", but with any given system there may be quirks in implementation at lower levels that turn those rules on their ear.
If your workload can be simulated, spin up a test corpus of files and run a representative workload against it under various filesystem / RAID / disk configurations. That's going to be the only way to know for sure.
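As a rough sketch of what that could look like, assuming a Python harness and made-up numbers (directory, file count, read size) that you'd scale toward your real workload: it builds a corpus of ~10 MiB and ~50 MiB files, then times a shuffled whole-file read pass over them. Reformat the volume between runs (the Windows format command's /A: switch sets the cluster size, e.g. /A:64K) and compare.

```python
import os
import random
import time

CORPUS_DIR = r"D:\corpus"               # hypothetical test directory on the target volume
FILE_SIZES = (10 * 2**20, 50 * 2**20)   # ~10 MiB and ~50 MiB, like the real data
FILE_COUNT = 200                        # scale toward your real tens of thousands
READ_BLOCK = 64 * 1024                  # application read size; vary this too

def build_corpus():
    """Create FILE_COUNT files of incompressible data in CORPUS_DIR."""
    os.makedirs(CORPUS_DIR, exist_ok=True)
    for i in range(FILE_COUNT):
        path = os.path.join(CORPUS_DIR, f"file_{i:05d}.bin")
        with open(path, "wb") as f:
            f.write(os.urandom(random.choice(FILE_SIZES)))

def read_workload():
    """Read every file once, in random order, and report overall throughput."""
    paths = [os.path.join(CORPUS_DIR, name) for name in os.listdir(CORPUS_DIR)]
    random.shuffle(paths)
    total = 0
    start = time.perf_counter()
    for path in paths:
        with open(path, "rb") as f:
            while chunk := f.read(READ_BLOCK):
                total += len(chunk)
    elapsed = time.perf_counter() - start
    print(f"{total / 2**20:.0f} MiB in {elapsed:.1f} s "
          f"= {total / elapsed / 2**20:.1f} MiB/s")

if __name__ == "__main__":
    build_corpus()
    read_workload()
```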
(Aside: Does anybody else find it funny to hear a 10MB file called "small"? God, I'm old...)
Easy answer: If the majority of your files will be 10MB or more, then I'd recommend using as large a cluster size as possible, though this may be a little wasteful space-wise.
Complex answer: You should analyse your disk's cache, your disk controller's cache, your OS's cache, and the actual files and their sizes. This will give you a better answer, but it's harder to get right.
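To put a rough number on the "wasteful" part: the waste is only the slack in each file's last cluster, so at worst one cluster per file and about half a cluster on average. For example, 50,000 files losing at most 64 KiB each is around 3 GiB, a small fraction of the hundreds of gigabytes (or more) those files occupy, so at these file sizes the space cost of 64K clusters is usually negligible. If you want an exact figure, here's a small Python sketch (the path is a placeholder; it ignores MFT-resident, compressed, and sparse files) that measures slack for a sample of your real files under each candidate cluster size:

```python
import os

KIB = 1024

def slack_for_tree(root, cluster_size):
    """Total bytes lost to partially filled final clusters under `root`."""
    wasted = 0
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            size = os.path.getsize(os.path.join(dirpath, name))
            remainder = size % cluster_size
            if remainder:
                wasted += cluster_size - remainder
    return wasted

if __name__ == "__main__":
    root = r"D:\sample_of_real_files"   # hypothetical: point at a sample of your data
    for cluster in (4 * KIB, 32 * KIB, 64 * KIB):
        wasted = slack_for_tree(root, cluster)
        print(f"{cluster // KIB:>2} KiB clusters: ~{wasted / 2**20:.1f} MiB of slack")
```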