How would you calculate the size of the working set of storage that would live in an SSD/Flash tier of a tiered/hybrid storage solution? This would be used to gauge the size of the SSD/Flash tier.
By tiered/hybrid I mean something (e.g. a storage array) that presents storage made up of different tiers/types of disks, such as SSD/Flash, SAS, NL-SAS, etc. The 'something' then moves data around between the tiers of disk based on how active it is: more active data moves up to SSD/Flash, and colder data moves down to slower tiers.
I wouldn't calculate it at all, because it's next to impossible to do. The size of the working set varies massively depending on workload, contention and scaling.
You need to understand what's going onto this array before you get even close - database workloads don't cache well as a rule (because databases already cache), whereas user 'junk pile' storage areas cache extremely well, and most other things are somewhere in between.
Likewise contention and scaling - if you have a really huge array, you can get away with a smaller cache pool in percentage terms, because you've fundamentally got more to work with in the first place. This is also true if you have mixed workloads - the cache-efficient 'junk pile' means you have more 'spare' for other applications.
As an extremely rough rule of thumb, I'd start with a 10/30/60 mix (by capacity) of fast/medium/slow, but then expect to add to the tiers as I get a better idea of the workload. Assume approximately a factor of 10 in cost per gigabyte between the tiers, and also a factor of 10 in cost per IOP.
However, this is only a starting point - I cannot predict your workload, and it takes a considerable amount of time and effort to understand one properly.
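To make the rule of thumb above concrete, here's a minimal back-of-the-envelope sketch. The 10/30/60 split and the factor-of-10 cost assumptions come straight from the answer; the 100 TB example array and the relative cost units are made up purely for illustration.

```python
# Rough tier-sizing sketch - the 10/30/60 mix and factor-of-10 cost
# steps are the rule of thumb from above; all other numbers are examples.

def size_tiers(total_tb, mix=(0.10, 0.30, 0.60)):
    """Split total capacity across fast/medium/slow tiers by the given mix."""
    fast, medium, slow = (total_tb * share for share in mix)
    return fast, medium, slow

def blended_cost_per_tb(mix=(0.10, 0.30, 0.60), slow_cost=1.0):
    """Relative blended cost/TB, assuming each faster tier costs ~10x
    the tier below it (slow = 1, medium = 10, fast = 100)."""
    fast_c, medium_c = slow_cost * 100, slow_cost * 10
    f, m, s = mix
    return f * fast_c + m * medium_c + s * slow_cost

fast, medium, slow = size_tiers(100)   # hypothetical 100 TB array
print(fast, medium, slow)              # 10.0 30.0 60.0
print(blended_cost_per_tb())           # ~13.6 (relative units)
```

Note what the blended figure shows: even a modest 10% fast tier dominates the cost (10 of the ~13.6 relative units), which is why you start small and grow the tiers as you learn the workload.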