We have 20+ physical Windows Server 2003 machines that are used for heavy calculation jobs a few times a week. Users log on to them via RDP and run jobs. Once these jobs are complete, users save the results to local hard drives on these servers, which are shared. The resulting files can total a few gigabytes, with an average size of about 100 MB per file. Once the files are ready, a script on the script server connects to each server's share and synchronizes the files to a file share on a Celerra NS20 NAS. Once this syncing is done, the files are sent to customers from the filer via an FTP server.
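To make the workflow concrete, the sync step could look something like the following one-way mirror. This is only a minimal Python sketch with placeholder paths; the real script presumably uses robocopy or similar against the actual UNC shares:

```python
import os
import shutil

def sync_share(src_root, dst_root):
    """Mirror files from a server share to the filer, skipping files
    that already exist at the destination with the same size and an
    mtime at least as new (a crude incremental one-way sync)."""
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        dst_dir = os.path.join(dst_root, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_dir, name)
            if os.path.exists(dst):
                s, d = os.stat(src), os.stat(dst)
                if d.st_size == s.st_size and d.st_mtime >= s.st_mtime:
                    continue  # destination already up to date
            shutil.copy2(src, dst)  # copy2 preserves the mtime
```

On Windows, `robocopy \\server\share \\filer\share /MIR` does the same job with far less fuss; the sketch just shows the logic.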
This setup has been in place for many years, and now that we are virtualizing our infrastructure I am thinking about getting rid of these servers and replacing them with VMs to save on power, space and hardware support. The servers do not need to be in a high-availability setup, but they do need a lot of memory, and the application they run is not multithreaded.
Current infrastructure that can be used:
- vSphere infrastructure on Dell PowerEdge M600 blades. We may buy 2 more blades to accommodate these servers
- CX3-10 Fibre Channel SAN. We may buy an extra disk tray to accommodate these servers. I am inclined to persuade management to go for FC disks
- Celerra NS20 filer connected to SATA disk tray on this SAN
- Cisco Catalyst 3560 gigabit switches
My main concern is how to reorganise storage. As all servers will be on the same SAN, all this fiddling with shares will be gone. I am thinking about mapping drives to a location on the NAS filer and then syncing the files to that same location; however, this seems like duplicating data on the filer.
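One way to avoid holding two copies on the filer is to have the jobs write their results straight to the final share on the NAS (via a mapped drive or UNC path), so there is nothing left to sync. A minimal sketch of the idea; the UNC path and directory layout here are hypothetical, not your actual export:

```python
import os

# Hypothetical CIFS export on the Celerra; on the servers this would
# typically be reached via a mapped drive letter instead.
RESULT_ROOT = r"\\ns20\results"

def result_path(job_id, filename):
    """Build a job's output path directly on the filer, creating the
    per-job directory if needed, so no second sync/copy step exists."""
    out_dir = os.path.join(RESULT_ROOT, job_id)
    os.makedirs(out_dir, exist_ok=True)
    return os.path.join(out_dir, filename)
```

The trade-off is that the calculation jobs then write a few gigabytes over the LAN instead of to local disk, so it is worth checking whether the jobs tolerate that.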
Maybe there is a more elegant way to rearrange storage in this setup, and someone has been in a similar situation?
Are there any major faults in my plan? What pitfalls should I expect?
Replacing 20+ servers that do heavy calculation jobs with two blades might be fine, but you do need to check that the total amount of concurrent processing power is sufficient. The M600 has since been replaced by the M610 - that's a dual-socket Xeon 5500 blade that supports (IIRC) up to 96GB of RAM.
Configured with Xeon 5540s you have 8 real CPU cores per blade and could bank on roughly 20GHz of aggregate CPU power per blade, about 40GHz across two. Hyperthreading gives you some more, but virtualization takes some of that back. You might be able to bump up to the 5560/5570 for another 25% or so, and these are Nehalem-EP parts, so you're getting quite a bit more bang per GHz. In general, consolidating 20 servers onto this sort of kit would be fine, but it all depends on how the numbers look for your systems.
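The back-of-the-envelope math behind those aggregate figures, using nominal Xeon 5540 clocks (aggregate GHz is only a rough sizing proxy, not a real performance metric):

```python
# Rough capacity estimate for the proposed blades; all figures are
# nominal spec-sheet values, not measured throughput.
cores_per_socket = 4    # Xeon 5540 is quad-core
sockets_per_blade = 2   # M610 is a dual-socket blade
clock_ghz = 2.53        # Xeon 5540 base clock
blades = 2

per_blade = cores_per_socket * sockets_per_blade * clock_ghz
total = per_blade * blades
print(f"{per_blade:.1f} GHz per blade, {total:.1f} GHz total")
```

Compare that total against the summed peak CPU demand of the existing 20+ servers (perfmon counters over a few job runs will tell you) before committing to only two blades.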
I'm not 100% clear on your storage changes - even if all of the storage is on the same SAN in your redesign, I don't see precisely what key difference that makes here. You won't be able to share the SAN volumes outside of the VM cluster directly; you'll still have to move the files through the OS across to the filer, from what I can see.
Remember that unless you have a cluster filesystem (e.g. CXFS on Linux or IRIX) backing the storage, you still can't have two machines share a LUN.
While it might appear to work, massive corruption will result.