Most current block storage systems allow you to create volumes that span multiple disks, but those disks must be in the same server. Is there any system that can create a volume spanning multiple disks across multiple servers, where the volume can grow by adding more servers to the cluster?
Edit: for example, I need to write a log continuously to a remote file. The file is never closed, so a caching model like those used in object storage systems is not feasible. I'm looking for something similar to a SAN, except that a single volume can scale beyond one server.
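To make the access pattern concrete, here's a minimal Python sketch of what the writer process does. The mount point is hypothetical; the only assumption is that the cluster-wide volume shows up as an ordinary mounted filesystem:

```python
import time

# Hypothetical mount point of the cluster-wide volume; the writer neither
# knows nor cares which servers or disks actually hold the blocks.
LOG_PATH = "/mnt/bigvolume/app.log"

def write_log_forever():
    # The file handle stays open indefinitely and data is appended in small
    # increments, so whole-object put/get (object storage) semantics don't fit.
    with open(LOG_PATH, "a") as log:
        while True:
            log.write("%f some event\n" % time.time())
            log.flush()  # each record should reach the remote volume promptly
            time.sleep(1)

if __name__ == "__main__":
    write_log_forever()
```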
We currently have Brocade 200E Fibre Channel switches connecting 2 EMC CLARiiONs to 4 VMware ESXi hosts. We are looking into new storage options using iSCSI over our existing Ethernet network, including the possibility of gradually upgrading to 10 gigabit. I have been searching for any kind of 10GBASE-T switch that is backwards-compatible with 1 gigabit and also includes the Fibre Channel ports necessary to connect to the Brocades/CLARiiONs.
I am not very experienced with storage administration and Fibre Channel, so I understand this question might have an obvious answer of "no", but it did seem like the Cisco Nexus 5010 with a module (N5K-M1008) might work.
I also thought about using a 10Gb switch (Dell PowerConnect 8024) that has SFP ports for uplinks to other switches. Are these SFP ports capable of connecting to the fibre ports on the Brocade (not necessarily just on this Dell switch, but on any switch like this), or are they designed to work only as uplinks to the same model?
Any insight into the specifics of Fibre Channel switching, and how the ports are classified, would be helpful.
EDIT: I've held off commenting because I was learning a good deal from the answers and wanted to be able to clarify as best I could. I don't necessarily need a simple switch, but rather a single device that can do this (so a Cisco Nexus with the necessary modules could work). It also seems that for this to work, my new storage would need to support FCoE over the 10Gbps links, so that it could then reach my hosts over FC.
I understand that getting the zoning right on the FC switch might be overly complicated, but I want to see if my understanding of the technologies is now correct. So, assuming this could be accomplished, would a Nexus switch that has the 10Gbps ports, plus FC ports from a module connected to the existing FC switches, be able to connect a new storage device (that can speak FCoE) to my existing hosts?
What storage connection method should one prefer for connecting ESX servers to a shared storage server over 10GbE links?
Specifically, I have 2 servers for VMware ESX and one server for shared storage.
The storage server is 2 x Xeon E5504 2GHz, 24GB RAM, 12 x SSD + 12 x SATA and battery-backed RAID. The ESX servers are much the same but with 2 small SAS drives.
All servers have 10GbE adapters connected like so:
I have a licence for ESX 3.5, but for testing purposes I am currently running ESXi 4.1. The storage server is running Windows 7, also just for testing.
I am aware of at least 3 methods:
1. iSCSI
2. NFS
3. FCoE
Which one would you recommend, and why?
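For reference, whichever option is suggested, I'd like to sanity-check it with a simple sequential-write timing test from inside a guest. Here's a minimal Python sketch of what I mean; the test path and sizes are placeholders:

```python
import os
import time

TEST_PATH = "/mnt/test-datastore/throughput.bin"  # placeholder path
BLOCK = b"\0" * (1 << 20)                          # 1 MiB per write
TOTAL_MB = 4096                                    # 4 GiB, to get past caches

def sequential_write_mb_per_s(path=TEST_PATH, total_mb=TOTAL_MB):
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())  # make sure the data really reached the storage
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    print("sequential write: %.1f MB/s" % sequential_write_mb_per_s())
```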
The web application I'm working on will be used to upload/download a large number of smaller files - I'm looking at close to 1B files with a total size of over 10 PB. I'm currently struggling to decide on a scalable architecture that would support such volumes. And here's my question - is there a way of building some sort of storage that would be seen by a Windows server as one huge (10 PB and up) network drive, so I can write all the files to subfolders of that virtual drive? And how would it perform?
Right now I'm trying to understand whether that's even possible, or whether I have to implement software-level sharding - writing files to different drives based on some key (roughly along the lines of the sketch below).
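To clarify what I mean by sharding based on some key, here's a rough Python sketch; the share names, shard count and folder fan-out are made up for illustration:

```python
import hashlib
import os
import shutil

# Hypothetical UNC paths to separate storage back-ends (names are made up).
SHARDS = [r"\\storage01\files", r"\\storage02\files", r"\\storage03\files"]

def shard_for(key):
    """Pick a back-end deterministically from the file's key (e.g. its ID)."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest[:8], 16) % len(SHARDS)]

def store(key, local_path):
    """Copy an uploaded file to its shard, fanning out into subfolders so no
    single directory has to hold hundreds of millions of entries."""
    prefix = hashlib.sha1(key.encode("utf-8")).hexdigest()[:4]
    dest_dir = os.path.join(shard_for(key), prefix[:2], prefix[2:4])
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, key)
    shutil.copy2(local_path, dest)
    return dest
```

The subfolder fan-out would be needed even on one huge drive, since a billion files can't sit in a single directory; the open question is whether the top-level split across shares is something I have to do myself.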
I'm a developer, not a sysadmin, so I apologize if this is a naive question, and thanks in advance for your patience in explaining possibly trivial things to me.
Andrey
NetGear's ReadyNAS 2100 has 4 disk slots and costs $2000 with no disks. That seems a bit too expensive for just 4 disk slots.
Dell has good network storage solutions too. The PowerVault NX3000 has 6 disk slots, so that's an improvement. However, it costs $3500; the NX3100 doubles the number of disks at double the price. Just in case I'm looking at the wrong hardware for lots of storage, the trusty PowerVault MD3000i SAN has a good 15 drive slots, but it starts at $7000.
While you can argue about how serious the support from Dell, Netgear, HP or any other company is, it's still pretty damn expensive to get those drives RAIDed together in a box and served via iSCSI. There's a much cheaper option: build it yourself. Backblaze has built its own box, housing 45 (that's forty-five) SATA drives, for a little under $8000 including the drives themselves. That's at least 10 times cheaper than current offers from Dell, Sun, HP, etc.
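To put rough numbers on that (the "10 times cheaper" figure presumably refers to cost per terabyte once drives are included; even on a raw per-slot basis, using only the list prices quoted above, the gap is already large):

```python
# Rough cost per drive slot, using the list prices quoted above.
# Note: the Backblaze figure includes the drives; the vendor boxes do not.
options = {
    "ReadyNAS 2100 (diskless)":      (2000, 4),
    "PowerVault NX3000 (diskless)":  (3500, 6),
    "PowerVault MD3000i (diskless)": (7000, 15),
    "Backblaze pod (drives incl.)":  (8000, 45),
}

for name, (price, slots) in options.items():
    print("%-32s $%.0f per drive slot" % (name, price / slots))
```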
Why is NAS (or SAN - it's still storage attached to a network) so expensive? After all, its main function is to house a number of HDDs, create a RAID array, and serve it over a protocol like iSCSI; nearly everything else is just colored bubbles (AKA marketing terms).