Our server admin has hatched a plan to move all storage to a central disk cabinet and have every server use it. We have a central SQL Server that is heavily used (200+ concurrent users), and I am concerned that this might seriously impact performance.
Is it safe to use the central disk cabinet to store the SQL Server databases? Will this impact performance, or is it possible to have a fast SQL Server setup that shares a disk cabinet with many other servers?
Performance is not a simple matter of saying, "Hey, I've got a SQL Server with 200 users; how much load do I have?" What do you know about the current performance of the server, specifically disk performance? That's where I would start.
Saying that the server is heavily used because you have 200 users hitting it is also not the best approach to analyzing performance. It's not simply the number of users; it's the number of transactions, the nature of those transactions, and so on. I have a SQL Server with 2,000 users hitting it, but I don't consider it heavily used, based on the performance metrics that I monitor and graph.
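If you're not sure where that baseline should come from, SQL Server keeps cumulative per-file I/O statistics that you can query directly. A minimal sketch (what counts as "good" latency is your call):

    -- Per-file I/O volume and average latency since the last SQL Server restart.
    SELECT DB_NAME(vfs.database_id)                                    AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_ms DESC;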
Database performance (SQL Server in particular, as discussed below) can be slow on SAN equipment. Generally, a direct-attached storage subsystem on a database server will be cheaper and faster. One option is to keep direct-attached storage for the database itself but mount a SAN volume on the server and back the database up onto that volume (sketched after the list below). There are significant pros and cons to using a SAN for a database workload:
Pro: If you want clustered servers with hot failover then you need a shared disk. Typically this is implemented with a SAN, although there are other architectures.
Pro: If you want to use blade servers, SAN storage is the only reasonable option.
Pro: SANs centralise disk storage and backup management.
Pro: SANs (usually) have no single point of failure in the hardware (one of the key design features of fibre channel) so they can be good for systems that have high availability requirements.
Con: Getting decent database performance out of a SAN that is tuned for a general purpose workload can be problematical. Often, vendors will recommend getting a SAN dedicated to the application, which sort of defeats the purpose of having a SAN in the first place.
Con: SAN equipment is expensive. Often, a SAN will cost more than the server equipment that's attached to it, especially when you count running costs and backup.
Con: SANs form yet another control point of the sort that is attractive to empire builders. If you find yourself needing another disk shelf, the up-front cost can also generate internal friction unless there is funding specifically allocated for it.
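To make the backup-onto-a-SAN-volume option above concrete, a minimal sketch; the database name and the S: drive letter for the SAN-mounted volume are hypothetical:

    -- Data files stay on direct-attached disk; only the backup target is the SAN volume.
    BACKUP DATABASE YourDb
    TO DISK = N'S:\SQLBackups\YourDb_full.bak'
    WITH CHECKSUM, STATS = 10;

This gets you the SAN's centralised backup management without putting the hot data files behind the shared array.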
My gut instinct is that you should make the infrastructure people prove that the system will perform adequately on the SAN (i.e. benchmark it) and produce a credible business case before allowing them to move the server onto it.
In particular, don't buy more SLA than you really need. Actually implementing high levels of reliability is very expensive, and just putting your server on a SAN isn't going to achieve this by itself, so it is a spurious argument unless it is backed by a genuine requirement.
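One way to run that benchmark without any extra tooling is to snapshot SQL Server's cumulative file-level I/O counters around a representative test load, once on the current storage and once on the SAN, then compare. A rough sketch (the temp table name is arbitrary, and the workload replay is up to you):

    -- Sample 1: capture cumulative counters before the test load.
    SELECT database_id, file_id, num_of_reads, num_of_writes, io_stall
    INTO #io_before
    FROM sys.dm_io_virtual_file_stats(NULL, NULL);

    -- ... replay a representative workload here ...

    -- Sample 2: the delta is the activity (and total stall time) during the test.
    SELECT b.database_id, b.file_id,
           b.num_of_reads  - a.num_of_reads  AS reads_during_test,
           b.num_of_writes - a.num_of_writes AS writes_during_test,
           b.io_stall      - a.io_stall      AS stall_ms_during_test
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS b
    JOIN #io_before AS a
      ON a.database_id = b.database_id
     AND a.file_id     = b.file_id;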
So long as it's a good enough central SAN box then yes, it could very easily be much safer; you'll want it to have dual controllers and dual disk paths, plus plenty of host ports.
It will certainly impact performance, which may go up or down depending on the SAN configuration but almost certainly won't stay exactly the same. Depending on the config you could benefit from better-quality, faster disks with more cache, in better RAID modes, working over faster links with multiple paths - or it could go the other way too.
Basically it all depends on how much time, money and expertise this guy has available, but there should be nothing whatsoever to fear if he has enough of all three.
It can be done, but it is the kind of thing that needs to be done with eyes open. Someone really needs to do enough performance monitoring of the existing database server to figure out how it is performing now, and then compare those metrics against the centralized storage. You could be just fine. Or you could already be bottlenecked on your existing storage I/O subsystem, in which case the extra spindles in the central array might actually resolve it.
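For the "how is it performing now" part, SQL Server also exposes its own perfmon counters through a DMV. One caveat: the per-second counters are stored as cumulative totals, so take two samples a known number of seconds apart and divide the delta. For example:

    -- Cumulative totals: sample twice and divide the delta by the elapsed seconds.
    SELECT object_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Batch Requests/sec',
                           N'Page reads/sec',
                           N'Page writes/sec');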
To give you some idea of what I'm talking about: if your existing storage is a mirrored pair of 15K drives, it is reasonable to expect 150-500 I/O operations per second depending on the exact access patterns (highly random will be lower, but your DB backup/export processes may be sequential and therefore higher). If the central storage is 24 15K drives, you can expect between 3,000 and 8,000 I/O operations per second (storage network willing). Depending on what else you're competing with, your database may end up with a lot more I/O headroom.
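If it helps to see where figures like that come from, here is the back-of-envelope arithmetic. The ~180 random IOPS per 15K spindle and the 30% write mix are assumptions, and RAID 10 is assumed for the two-physical-I/Os-per-write penalty:

    -- Rough sizing arithmetic, not a benchmark.
    DECLARE @spindles       int   = 24,   -- drives in the shared array
            @iops_per_disk  int   = 180,  -- assumed random IOPS per 15K spindle
            @write_fraction float = 0.3;  -- assumed share of writes in the workload
    -- RAID 10: each logical write costs two physical I/Os.
    SELECT CAST(@spindles * @iops_per_disk
                / ((1.0 - @write_fraction) + 2 * @write_fraction) AS int)
           AS usable_random_iops_estimate;  -- roughly 3,300 with these assumptions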
But you have to figure out your current performance to be certain.