This is a request for 2 different types of information:
- Is there a formula that can/should be followed to calculate risk vs cost of server storage?
- What factors should one take into consideration when deciding on what kind of server storage to purchase?
By server storage, I mean choosing between SSDs (consumer or enterprise grade), consumer spindles, Raptors, SANs, etc.
No, there isn't. Even a disk's MTBF doesn't mean anything (beyond the warranty to replace the disk once it has failed) if you don't have backups and redundancy.
Speed vs. Capacity vs. Cost are the biggest considerations.
There was a large study published (covering a few thousand hard drives) that showed no statistically significant difference in reliability between enterprise and consumer hard drives: http://www.usenix.org/event/fast07/tech/schroeder/schroeder.pdf
As @gWaldo said, you should weigh speed/performance/capacity/warranty against cost.
EDIT: sorry to add this so late, but I couldn't find it via Google and had to get back to my home PC to dig it up.
Each situation's tradeoffs are potentially different. Different services in the same corporate environment, or identical services in different environments, may have different tradeoffs based on differing priorities.
You have to figure out your own tradeoff between cost, speed, capacity, redundancy, acceptable downtime (both scheduled and unscheduled), etc. That applies not only to server storage, but to many other aspects of computer and other systems.
You can characterize risk, but the "formula" you plug it into is entirely your own. First, define what you are characterizing as a risk. Are you "down" if any application ever loses access to your storage? Or only if you lose more than an hour of production? And how much data are you willing to lose during a failover?
Start by defining your recovery point and time objectives, and then analyze the ability of your storage to meet them.
For example: at work, we have a recovery time objective of 30 minutes, and a recovery point objective of seconds. This means in order to have 99.999% uptime ("five nines availability"), we require one copy of our data in our datacenter, two copies on an identical storage array that we'd switch to in the event of a failure (one golden and one working), and multiple backups per day to tape or tape-like devices.
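For context on what "five nines" buys you, the allowed downtime per year falls straight out of the availability figure, which is why a single long outage can blow the whole annual budget and why multiple redundant copies matter:

```python
# Allowed downtime per year for a given availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def allowed_downtime_minutes(availability):
    """Minutes of downtime per year permitted by an availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR

for label, a in [("three nines", 0.999),
                 ("four nines", 0.9999),
                 ("five nines", 0.99999)]:
    print(f"{label} ({a:.3%}): {allowed_downtime_minutes(a):.1f} min/year")
# five nines works out to roughly 5.3 minutes per year
```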
Somewhere, an actuarial mathematician has a formula that analyzes the results of our disaster recovery tests which trades off against how much the insurance company charges us.