I am in the process of setting up new servers for an organization. What are the standards or best practices for setting up a new environment with Development, Testing, Staging, and Production (or other levels I'm not familiar with; I'm open to suggestions)? Additionally, I've heard of organizations breaking out servers into SQL, Application, Web Server, etc. Where can I find good examples of possible solutions for server setup?
Is virtualizing these environments among a few physical boxes a good practice?
I've searched online for ideas on how other organizations have their environments set up, but I'm not finding anything specifically helpful. I welcome any links you can point me to that discuss building an entire enterprise solution for a small to medium company.
I just found this link: http://dltj.org/article/software-development-practice/ I'd like to find more articles like this if anyone knows of any good ones they can point me to.
Before you down-vote my question, please post a comment so I can try to explain more. I may just not know enough to ask the right questions.
This is a pretty broad question. My general advice is to focus your attention on managing complexity and allow the system to grow organically.
Virtualization
You really want to avoid server sprawl, and these days, everything is virtualized. Pick a platform that will allow you to add virtual servers quickly and manage them efficiently. One trend I've seen is having two (for example) AIX or VMware clusters, one for prod and one for non-prod. The non-prod cluster hosts all the dev, testing, and staging environments. These environments are perfect for web servers or application servers, but I'd try to avoid putting large, growing production databases in a VM (at least on Windows).
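As a rough sketch of what enforcing that split could look like, here's a toy placement check in Python. The cluster names, the VM inventory, and the idea of tagging each VM with an environment are all assumptions for illustration; a real version would pull inventory from your hypervisor's API rather than a hard-coded dict:

```python
# Minimal sketch of a prod/non-prod placement check. All names below are
# hypothetical; in practice this inventory comes from the hypervisor.

CLUSTERS = {
    "cluster-prod": "prod",
    "cluster-nonprod": "nonprod",
}

VMS = {
    "web01":   {"env": "prod",    "cluster": "cluster-prod"},
    "devdb01": {"env": "nonprod", "cluster": "cluster-prod"},     # misplaced
    "test01":  {"env": "nonprod", "cluster": "cluster-nonprod"},
}

def misplaced_vms(vms, clusters):
    """Return VMs whose environment doesn't match their cluster's role."""
    return [name for name, vm in vms.items()
            if clusters[vm["cluster"]] != vm["env"]]

for name in misplaced_vms(VMS, CLUSTERS):
    print(f"WARNING: {name} is on the wrong cluster for its environment")
```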
Databases
These can easily get out of hand whenever they need to share resources with other servers. Always have databases running on a dedicated OS, never shared with an application or web server unless there's a really good reason for it. Whether you use a VM or hardware is the only question.
You want a scalable infrastructure that won't cap you if you ever need to, for example, move to a clustered solution. Many databases are going to be fine in a VM, but for the few that will eventually need more horsepower than is convenient to provide in a VM environment, you'll find yourself wishing you'd put them on raw hardware instead.
If you're not talking about Windows, then some of these guidelines won't be relevant. It's commonly accepted practice to run large, growing databases as LPARs on an AIX hypervisor, for example.
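If it helps to make the VM-versus-hardware call concrete, here's a back-of-the-envelope sketch. The three-year horizon and the size/IOPS cutoffs are purely illustrative assumptions, not vendor guidance; the point is just to project growth before committing a database to a VM:

```python
# Toy heuristic for the VM-vs-hardware decision on a database.
# Thresholds are made-up illustrative numbers.

def projected_size_gb(current_gb, annual_growth_rate, years):
    """Compound the database size forward a few years."""
    return current_gb * (1 + annual_growth_rate) ** years

def placement(current_gb, annual_growth_rate, peak_iops,
              vm_size_limit_gb=2000, vm_iops_limit=20000):
    size = projected_size_gb(current_gb, annual_growth_rate, years=3)
    if size > vm_size_limit_gb or peak_iops > vm_iops_limit:
        return "raw hardware (or LPAR)"
    return "VM is fine"

print(placement(current_gb=400, annual_growth_rate=0.5, peak_iops=30000))
# -> raw hardware (or LPAR)
```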
Storage
You can't have real virtualization (with VM mobility and host clustering) without shared storage. Prod, dev, testing, and QA servers all look the same to your storage, so you might want to invest some time in finding a way to prioritize prod. It is a very bad idea, for example, to have a heavily taxed prod database sharing disks (RAID sets, pools, whatever) with a dev server. Dev can sometimes hit the disks just as hard as prod, and the last thing you need is to be figuring out whether some sort of test is what's slowing your production down.
Have someone who knows your storage sit down and analyze all the potential bottlenecks (ports, cache, controllers, disk, etc.) and do your best to prevent contention between prod and non-prod on as many of these as possible.
That said, sometimes the application people need to run dev benchmarks to help quantify the effects of a new patch or something. In this situation, you might need to be able to offer them similar (or at least quantifiably different) amounts of storage horsepower.
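One cheap way to start that analysis is a contention check over your own inventory. The sketch below flags any disk pool backing both prod and non-prod servers; the pool and server names are hypothetical, and in practice you'd export this data from the array's management tooling:

```python
# Sketch of a shared-storage contention check: flag any disk pool that
# backs both prod and non-prod servers. Inventory below is hypothetical.

SERVER_POOLS = {
    "prod-db01": {"env": "prod",    "pools": {"pool-a", "pool-b"}},
    "dev-app01": {"env": "nonprod", "pools": {"pool-b"}},   # shares pool-b
    "qa-web01":  {"env": "nonprod", "pools": {"pool-c"}},
}

def shared_pools(servers):
    """Return pools used by both prod and non-prod servers."""
    by_env = {"prod": set(), "nonprod": set()}
    for server in servers.values():
        by_env[server["env"]] |= server["pools"]
    return by_env["prod"] & by_env["nonprod"]

for pool in sorted(shared_pools(SERVER_POOLS)):
    print(f"WARNING: {pool} is shared between prod and non-prod")
```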
What do you need this environment for? Vendor software, or your organization doing its own development?
Dunno if this will help, but both HP and Dell would fall all over themselves to come in, assess your current datacenter, and give you a recommendation to revamp or create from scratch. The forum readers can give reasonably close answers, but without seeing what you have and where you are versus what you want and where you need to be, it will be difficult to give you a solid answer. Do yourself a favor and stick with one hardware vendor, for administration reasons.
We have our datacenter geared with this in mind (we have the hardware in place to do it):
VMware environment: HP C7000 blade enclosure with an EMC SAN backend over an 8 Gb fibre connection.
This allows us to limit sprawl, electricity use, and air conditioning costs. It would be used for test machines, proof-of-concept servers, and production servers that do not need hardware unique to the application (USB dongles, fax boards, etc.).
Physical blade server environment: HP C7000 blade enclosure with 16 blades, HBA-connected to the EMC SAN backend via 8 Gb fibre.
These would be for machines that require a large quantity of RAM and CPU but have no unique hardware additions. Virtual machines are fine except when they require a huge amount of CPU or RAM. VMware allows for vMotion, moving a VM to another host machine to balance out hardware usage. The VM real estate is only cost-effective when used to its max, meaning more small machines instead of a few large ones. This also depends on the system you are trying to stand up.
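To see why many small VMs use the real estate better than a few large ones, here's a toy first-fit-decreasing packing comparison. The VM sizes and the 128 GB host capacity are made-up numbers for illustration:

```python
# Toy first-fit-decreasing packing: a few huge VMs strand capacity on
# every host, while many small ones fill hosts close to their limit.

def pack(vm_sizes_gb, host_capacity_gb):
    """Pack VMs onto hosts first-fit-decreasing; return per-host loads."""
    hosts = []
    for size in sorted(vm_sizes_gb, reverse=True):
        for host in hosts:
            if sum(host) + size <= host_capacity_gb:
                host.append(size)
                break
        else:
            hosts.append([size])
    return hosts

few_large = [96, 96, 96]   # three big VMs, 288 GB total
many_small = [16] * 18     # same total RAM as small VMs

for label, vms in [("few large", few_large), ("many small", many_small)]:
    hosts = pack(vms, host_capacity_gb=128)
    print(label, "->", len(hosts), "hosts,",
          [sum(h) for h in hosts], "GB used per host")
# few large strands 32 GB on every host; many small fills two hosts
# completely and leaves the third nearly empty for more workload.
```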
Physical Server (1U to 5U)
HP DL360 through DL5xx servers. Special hardware like 4x 8-core CPUs and 256 GB of RAM, serial cards for telecom interfaces, or high-end fax boards attached to multiple phone lines. Included in this group would be servers for which the vendor requires large local storage.
This is an example, not a complete answer. Seriously, talk to a hardware vendor and let them give you an idea of where you are and how to make it better/more efficient.
What are the standards or best practices for setting up a new environment with Development, Testing, Staging, and Production?
This depends on budget, among other considerations. I'm not sure there is a single standard, but you would want to keep the OS and other software the same on all boxes. Use automation tools like Puppet to automate and standardize your builds.
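As a sketch of what "keep everything the same on all boxes" means in practice, here's a simple drift check across environments. Puppet (or a similar tool) would actually enforce this for you; the package names and version strings below are invented for the example:

```python
# Sketch of a version-drift check across environments against a prod
# baseline. All version data here is made up for illustration.

ENVIRONMENTS = {
    "dev":     {"os": "RHEL 7.9", "java": "1.8.0_292", "nginx": "1.20.1"},
    "test":    {"os": "RHEL 7.9", "java": "1.8.0_292", "nginx": "1.20.1"},
    "staging": {"os": "RHEL 7.9", "java": "1.8.0_292", "nginx": "1.18.0"},
    "prod":    {"os": "RHEL 7.9", "java": "1.8.0_292", "nginx": "1.20.1"},
}

def drift(envs, baseline="prod"):
    """Yield packages whose version differs from the baseline env."""
    base = envs[baseline]
    for env, packages in envs.items():
        for package, version in packages.items():
            if version != base.get(package):
                yield env, package, version, base.get(package)

for env, package, version, expected in drift(ENVIRONMENTS):
    print(f"{env}: {package} is {version}, but {expected} in prod")
```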
Is virtualizing these environments among a few physical boxes a good practice?
Virtualization? Yes, great practice. But you need to validate whether your configurations are okay to run as virtual machines.
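A rough way to frame that validation, borrowing the blockers mentioned elsewhere in this thread (USB dongles, fax boards, unusually large CPU/RAM needs): the criteria, limits, and example inventory in this sketch are illustrative assumptions, not a real support matrix:

```python
# Quick screen for "is this box okay to virtualize?" using illustrative
# criteria: physical hardware dependencies and oversized CPU/RAM.

def virtualizable(server, max_vcpus=16, max_ram_gb=128):
    """Return (ok, reasons) for running this server as a VM."""
    reasons = []
    if server.get("special_hardware"):
        reasons.append("depends on physical hardware: "
                       + ", ".join(server["special_hardware"]))
    if server["cpus"] > max_vcpus:
        reasons.append(f"needs {server['cpus']} CPUs")
    if server["ram_gb"] > max_ram_gb:
        reasons.append(f"needs {server['ram_gb']} GB RAM")
    return (not reasons, reasons)

fax_server = {"cpus": 4, "ram_gb": 16, "special_hardware": ["fax board"]}
ok, reasons = virtualizable(fax_server)
print("virtualize" if ok else "keep physical: " + "; ".join(reasons))
```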
I've heard of organizations breaking out servers into SQL, Application, Web Server, etc. Where can I find good examples of possible solutions for server setup?
Others can probably chime in, but IMHO you would want to install different components on different servers for multiple reasons, among them OS and application upgrades and availability.