There was some confusion about my question, so to make it simple:
"What kind of storage do big cloud providers use and why?"
As far as I understand, all cloud providers use DAS, unlike typical data centers, although I have not been able to find any official documentation of the storage networking differences between the two.
Even though DAS has many disadvantages compared to SAN or NAS, I want to learn in detail why clouds use DAS, whether for storage or for application purposes.
Any resource or explanation that clears this up would be appreciated.
EDIT: While reading the paper "Networking Challenges and Resultant Approaches for Large Scale Cloud Construction" by David Bernstein and Erik Ludvigson (Cisco), I noticed that they mention:
Curiously we do not see Clouds from the major providers using NAS or SAN. The typical Cloud architecture uses DAS, which is not typical of Datacenter storages approaches.
But here there is a conflict: in my opinion, and as also stated later in the paper, clouds should use SAN or NAS, because DAS is not appropriate when a VM moves to another server yet still needs to access storage on the original server.
What other reasons lead clouds to prefer DAS, NAS, or SAN? What kind of storage do big cloud providers use, and why?
This answer has been edited after the question was clarified.
Where "DAS" means Direct Attached Storage, i.e. SATA or SAS harddisk drives.
Cloud vendors all use DAS because it offers order-of-magnitude improvements in price/performance. It is a case of scaling horizontally.
In short, SATA hard disk drives and SATA controllers are cheap commodities. They are mass-market products and are priced very low. By building a large cluster of cheap PCs with cheap SATA drives, Google, Amazon, and others obtain vast capacity at a very low price point. They then add their own software layer on top. Their software does multi-server replication for performance and reliability, monitoring, re-balancing of replication after hardware failure, and other things.
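To make that software layer concrete, here is a minimal toy sketch of the idea (my own illustration, not any vendor's actual code): store each blob on several commodity nodes, read from whichever replica is alive, and re-replicate after a failure. The Node/Cluster names and the 3-way replication factor are assumptions chosen for the example.

```python
# Toy sketch: replicated blob storage over cheap local (DAS) disks.
import hashlib
import random

REPLICAS = 3  # assumed replication factor for the example

class Node:
    """One commodity server with cheap local disks."""
    def __init__(self, name):
        self.name = name
        self.blobs = {}   # blob_id -> data, standing in for files on local disk
        self.alive = True

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def put(self, data: bytes) -> str:
        """Write one blob to REPLICAS distinct live nodes."""
        blob_id = hashlib.sha256(data).hexdigest()
        live = [n for n in self.nodes if n.alive]
        for node in random.sample(live, REPLICAS):
            node.blobs[blob_id] = data
        return blob_id

    def get(self, blob_id: str) -> bytes:
        """Read from any live replica; a dead node is simply skipped."""
        for node in self.nodes:
            if node.alive and blob_id in node.blobs:
                return node.blobs[blob_id]
        raise KeyError(blob_id)

    def rebalance(self):
        """Re-create lost copies so every blob is back to REPLICAS replicas."""
        live = [n for n in self.nodes if n.alive]
        counts = {}
        for node in live:
            for blob_id in node.blobs:
                counts[blob_id] = counts.get(blob_id, 0) + 1
        for blob_id, count in counts.items():
            missing = REPLICAS - count
            if missing <= 0:
                continue
            source = next(n for n in live if blob_id in n.blobs)
            targets = [n for n in live if blob_id not in n.blobs]
            for node in random.sample(targets, missing):
                node.blobs[blob_id] = source.blobs[blob_id]

cluster = Cluster([Node(f"node{i}") for i in range(6)])
blob_id = cluster.put(b"some important data")
cluster.nodes[0].alive = False     # a cheap disk/server fails...
cluster.rebalance()                # ...and software restores redundancy
assert cluster.get(blob_id) == b"some important data"
```

The point is that reliability lives entirely in software: no single cheap disk matters, which is exactly why the underlying hardware can be commodity DAS.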
You could take a look at MogileFS as a simpler representative of the kind of software that Google, Amazon, and others use for storage. It's a different implementation, of course, but it shares many of the same design goals and solutions as the large-scale systems. If you want to, here is a jumping-off point for learning more about GoogleFS.
There are two reasons why SANs are not used.
1) Price. SANs are hugely expensive at large scale. While they may be the technically "best" solution, they are typically not used in very large scale installations due to the cost.
2) The CAP theorem. Eric Brewer's CAP theorem shows that at very large scale you cannot maintain strong consistency while keeping acceptable availability, fault tolerance, and performance. SANs are an attempt at implementing strong consistency in hardware. That may work nicely for a 5,000-server installation, but it has never been proved to work for Google's 250,000+ servers.
Result: so far the cloud computing vendors have chosen to push the complexity of maintaining server state onto the application developer. Current cloud offerings do not provide consistent state for each virtual machine. Application servers (virtual machines) may crash, and their local data may be lost, at any time.
Each vendor then has their own implementation of persistent storage, which you're supposed to use for important data. Amazon's offerings are nice examples: MySQL, SimpleDB, and Simple Storage Service (S3). These offerings themselves reflect the CAP theorem -- the MySQL instance has strong consistency but limited scalability, while SimpleDB and S3 scale fantastically but are only eventually consistent.
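To illustrate the tradeoff, here is a toy quorum model of my own (not how SimpleDB or S3 is actually implemented): with N replicas, a write acknowledged by W of them, and a read that polls R of them, reads are only guaranteed to see the latest write when W + R > N.

```python
# Toy quorum model of the consistency/availability tradeoff.
import random

N = 3  # replicas per key (assumed for the example)

class Replica:
    def __init__(self):
        self.version, self.value = 0, None

replicas = [Replica() for _ in range(N)]

def write(value, W):
    """Return once W replicas have the write; stragglers catch up later."""
    version = max(r.version for r in replicas) + 1
    for r in random.sample(replicas, W):
        r.version, r.value = version, value

def read(R):
    """Poll R replicas and trust the highest version seen."""
    return max(random.sample(replicas, R), key=lambda r: r.version).value

write("v1", W=1)
print(read(R=1))   # may print None: the read set can miss the write set
# With W + R > N (e.g. W=2, R=2), every read set overlaps every write set,
# so reads always see the newest version -- at the cost of latency and
# availability, since more replicas must respond.
```

Picking W = R = 1 is what makes a store fast and highly available, and it is also exactly what makes it merely eventually consistent.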
"If you use DAS then availability is your problem"
If they use DAS, then availability is their problem. And if they're any good, they'll be using several layers of abstraction to ensure that their problem doesn't become your problem. Rather than getting hung up on how they choose to mount their disks inside their datacentre, the real issue is whether or not the availability they guarantee in their SLA is adequate for your needs. Oh, and there are two real elephants in the room: what do you do if they go out of business (not likely for some providers, perhaps, but you should still consider it), and what do you do if you use this data locally and your interweb connection is unavailable? The latter is substantially more likely than their choice of DAS directly leading to an outage.
Although I do not hold the definitive answer on DAS vs SAN/NAS... there are many things to consider when looking for storage solutions.
The amount of data? If we're talking about gigabytes, fine, a NAS with a backup could do the job. If there are terabytes of data, the price goes up very fast.
I think price is the main factor... if you have a SAN, you need:

- the SAN array itself,
- fibre channel (or iSCSI) switches,
- an HBA for each host,
- someone with the expertise to run it all.
And still, you have no redundancy at all. If you have access to a datacenter, things may be different.
Another thing to consider is accessibility. Are you archiving? If so, accessibility is not a problem: a couple of times per day/week/month you push an archive to your storage solution.
If, on the other hand, you have data that needs to be accessed constantly, you quickly run into bandwidth bottlenecks and hardware limitations (such as I/O). Then again, if you transfer a high volume of data, there is a good chance that an online storage solution will cost you a lot, as the rough sketch below shows.
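As a back-of-envelope sketch (every number here is an illustrative assumption, not a real price quote):

```python
# Back-of-envelope comparison: recurring cloud transfer cost vs. owning disks.
# All numbers below are illustrative assumptions, not real price quotes.
monthly_egress_gb = 5_000            # assumed: 5 TB read back per month
price_per_gb_egress = 0.10           # assumed egress price, $/GB
monthly_transfer_cost = monthly_egress_gb * price_per_gb_egress

local_storage_cost = 2_000           # assumed one-time cost of a small NAS, $
months_to_break_even = local_storage_cost / monthly_transfer_cost

print(f"Cloud egress: ${monthly_transfer_cost:,.0f}/month")              # $500/month
print(f"Local NAS pays for itself in {months_to_break_even:.0f} months") # 4
```

With heavy, constant access the recurring transfer cost can overtake the one-time cost of local hardware surprisingly quickly; with a pure archive workload the numbers flip.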
ROI (return on investment) is what all online storage vendors advertise on, and they are often right, depending on the usage, of course.
Good luck.
(Opinion only, and vastly generalising.)
The difference is the layer of abstraction that you're looking at (generally).
SAN/NAS are usually providing you with a volume, on which you can install a file system. The value of this approach to the end-system is that you've outsourced the details of the physical hardware (e.g. RAID level, physical location, etc).
By contrast, cloud storage is usually providing you with an interface to a filesystem. The advantage here is that you can often get higher-order features for free (e.g. Dropbox does automatic versioning of every file, transparently).
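As a concrete sketch of that difference, here is what "an interface rather than a volume" looks like in practice, using the real boto3 library against S3 (the bucket name is hypothetical, and AWS credentials are assumed to be configured):

```python
# With cloud storage there is no block device to mount and no filesystem to
# format: you just put and get named objects through an API.
# ("example-bucket" is a hypothetical bucket name.)
import boto3

s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="reports/2009.txt", Body=b"hello")
obj = s3.get_object(Bucket="example-bucket", Key="reports/2009.txt")
print(obj["Body"].read())  # b'hello'
```

Everything below that API -- RAID levels, replication, which disks the bytes land on -- is the provider's concern, which is exactly the higher layer of abstraction described above.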