I'm head of the IT department at the small business I work for; however, I am primarily a software architect, and all of my system administration experience and knowledge is ancillary to software development. At some point this year or next we will be looking at upgrading our workstation environment to a uniform Windows 7 / Office 2010 environment, as opposed to the hodgepodge of variously OEM-licensed editions of software currently spread across the machines.
It occurred to me that it is probably possible to forgo upgrading each workstation and instead turn each one into a dumb terminal that accesses a virtualization server, with the entire virtual workstation hosted on that server.
Now, I know basically anything is possible, but is this a feasible solution for a small business (25-50 workstations)? Assuming it is, what rough guidelines exist for calculating the server resources required?
How exactly do these solutions handle a user accessing their VM? Do users log on normally to their physical workstation and then use Remote Desktop to reach the VM, or is the connection usually negotiated by a dedicated piece of client software?
What types of software are available for administering and monitoring these VMs, and can this functionality be achieved out of the box with Windows Server 2008? I'm mostly interested in these questions as they relate to Server 2008 with Hyper-V, but feel free to offer insight on VMware's product lineup, especially if there are compelling reasons to choose it over Hyper-V in a Microsoft shop.
Edit: Just to add some more information: the implementation goal would be to upgrade our platform from a Win2k3 / XP environment to a full Windows 2008 / Win7 platform without having to perform all of the associated work on each differently configured workstation.
Also, could anyone offer realistic guidelines for how much hardware is needed to support 25-50 virtual workstations? The majority of the workstations do nothing except Office, Outlook, and web browsing; the only high-demand machines are the development workstations, which would keep everything local.
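For a rough sense of scale, here's the back-of-envelope math I've been sketching; the per-VM figures (roughly 1.5 GB of RAM and a quarter of a core per light-use Windows 7 VM, plus 25% headroom) are my own assumptions, not measured numbers:

    # Back-of-envelope VDI host sizing. The per-VM figures are assumptions
    # for light Office/Outlook/web use, not measurements.
    def size_host(num_vms, ram_per_vm_gb=1.5, vcpu_per_vm=0.25,
                  hypervisor_ram_gb=4.0, headroom=1.25):
        """Return (RAM in GB, physical cores) including headroom."""
        ram_gb = (num_vms * ram_per_vm_gb + hypervisor_ram_gb) * headroom
        cores = num_vms * vcpu_per_vm * headroom
        return ram_gb, cores

    for n in (25, 50):
        ram, cores = size_host(n)
        print(f"{n} VMs: ~{ram:.0f} GB RAM, ~{cores:.0f} physical cores")
    # 25 VMs: ~52 GB RAM, ~8 physical cores
    # 50 VMs: ~99 GB RAM, ~16 physical cores

Does that kind of estimate hold up in practice, or do other dimensions (disk IOPS in particular) dominate?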
This type of solution exists in a continuum.
On one end of the spectrum you have client computers running a "thick" operating system (like Windows or a desktop Linux distribution) and connecting via client software to hosted applications, using RemoteApp shortcuts over the Remote Desktop Protocol (RDP), or Citrix's ICA protocol.
In the middle of the spectrum you have clients connecting via these same protocols to full-blown desktop sessions (rather than a single application), but using a shared operating system installation. This is typically the world of Windows "Terminal Services".
On the far end of the spectrum you have what's typically known as a Virtual Desktop Infrastructure (VDI) where client devices are very stripped down and only host client software to connect to a hosted operating system instance.
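To make that far end concrete: the "client software" on a stripped-down device can amount to little more than a kiosk-style launcher for a remote desktop session. A toy sketch (the broker hostname is hypothetical; mstsc.exe and its /v: and /f switches are standard on Windows):

    # Toy kiosk-style thin-client launcher: the device does nothing but run
    # a full-screen RDP session, reconnecting whenever the session ends.
    # The broker hostname below is hypothetical.
    import subprocess

    VDI_BROKER = "vdi-broker.example.local"

    while True:
        # /v: names the target server; /f forces full screen.
        # run() blocks until the user's session ends, then we reconnect.
        subprocess.run(["mstsc.exe", f"/v:{VDI_BROKER}", "/f"], check=False)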
All of these situations are physically feasible, but you'd do yourself a favor by investigating the licensing costs before you go down the road of spec'ing servers, etc.
The licensing costs in the Microsoft world include either Terminal Services Client Access Licenses or Windows Virtual Enterprise Centralized Desktop (VECD) operating system licenses to contend with for each device or user accessing the VDI solution. Depending on where on the spectrum you fall, licensing for your desktop application software may also differ from what you currently use, and this may necessitate additional license purchases.
It's likely that you're going to find that the acquisition costs of a VDI infrastructure are similar to, if not more expensive than, going down the traditional "thick client" route. Physically and practically, using thin-client devices sounds like a "win", but software licensing expense has traditionally more than made up for any hardware cost savings, which leaves only "soft cost" management and TCO savings as justification.
Edit:
Ryan Bolger hit it right on the head with his answer (and I +1'd him) with respect to "soft cost" savings, which you're right to identify as the place to save money.
Learning how to centrally deploy software, manage user environments, and generally maintain the hell out of your network using Group Policy will build your personal knowledge of the "innards" and operation of a Windows network and will have far fewer "moving parts" than a VDI infrastructure. Even if you had a VDI infrastructure, frankly, I think you'd still be able to leverage immense benefits from Group Policy-fu.
VDI and remote application delivery is a great solution for very task-specific applications, or for delivering applications over slow or unreliable network connections (think "shared Microsoft Access database over a T1-based WAN"). I don't think that desktop virtualization, at least in its current incarnation as an excessive-licensing-fee-based minefield, is "the answer".
I'll even jump out on a limb and say that, with proper "care and feeding", maintenance of very large fleets of client computers running Windows isn't really all that hard, using the built-in tools in Windows Server, WSUS, good knowledge of scripting, and an understanding of how Windows itself and your application software work. Automating your client computer build, removing users' Administrator rights, and getting a handle on your OS and application update deployment infrastructure will take you leaps and bounds ahead.
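As a taste of the scripting I'm talking about, here's a minimal fleet health-check sketch. It assumes the third-party Python "wmi" package and an account with remote WMI/DCOM rights on the workstations; the hostnames are made up:

    # Minimal fleet health check over remote WMI. Assumes the third-party
    # "wmi" package (pip install wmi) and remote WMI/DCOM permissions.
    import wmi

    WORKSTATIONS = ["ws01", "ws02", "ws03"]  # hypothetical machine names

    for host in WORKSTATIONS:
        try:
            conn = wmi.WMI(computer=host)
            os_info = conn.Win32_OperatingSystem()[0]
            free_mb = int(os_info.FreePhysicalMemory) // 1024  # WMI reports KB
            print(f"{host}: {os_info.Caption.strip()}, {free_mb} MB RAM free, "
                  f"last boot {os_info.LastBootUpTime}")
        except Exception as exc:  # host down, firewalled, or access denied
            print(f"{host}: unreachable ({exc})")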
I'd like to build a bit off of Evan's answer regarding the different ways to remotely host applications.
Your primary concern seems to be about reducing the administrative overhead involved with managing a bunch of disparate workstations and their individual software installations. You don't need to move to a remotely hosted application infrastructure to accomplish that goal.
With a single server set up as a domain controller and all of your workstations joined to that domain, you can do just about everything you need right out of the box. The domain itself handles centrally managed user accounts. Group Policy can handle configuring all of the system settings on the workstations, and Group Policy Software Installation can handle your application deployments. The built-in Windows Deployment Services, combined with the free Microsoft Deployment Toolkit, can even give you your OS deployment solution. WSUS is also free and can handle your OS and Microsoft software patching.
There's just a ton of stuff you can do with nothing more than a single server OS license and your workstation OS licenses. It all has a bit of a learning curve, but it's no more difficult than what you'd have to learn for a remotely hosted app or OS solution.
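As one small example of what "out of the box" buys you: once Group Policy points a workstation at your WSUS server, you can query its pending updates through the standard Windows Update Agent COM API. A minimal sketch (requires pywin32):

    # Minimal sketch: list updates that are approved but not yet installed,
    # via the Windows Update Agent COM API (requires pywin32).
    import win32com.client

    session = win32com.client.Dispatch("Microsoft.Update.Session")
    searcher = session.CreateUpdateSearcher()
    result = searcher.Search("IsInstalled=0 and Type='Software'")

    print(f"{result.Updates.Count} update(s) pending:")
    for i in range(result.Updates.Count):
        print(f"  - {result.Updates.Item(i).Title}")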
We are in the mid stages of planning desktop virtualization for a few hundred users, and there are a lot of subtle gotchas. One is the fact that the alleged "dumb terminals" are not so cheap, and of course they need software patches as well (though fewer than a full OS install, sure). The next gotcha is the exec who "has" to have something the dumb terminal can't do and blows the model away. Then remote access. Then VoIP. Then VMware turns out to be more expensive than you thought. Sheesh ...
We have used both XenServer from Citrix and VMware ESX to virtualize workstations. XenServer is free and I believe the ESXi version is as well. Citrix also makes a product called Provisioning Server which makes it very simple to create, modify, and deploy virtual workstations with shared configurations.
As mentioned above, you'll want redundant servers if you go this route to help prevent outages.
Having said all this, it's been my experience that virtualizing workstations is only a good idea when you have a specific reason for doing so - for example, workstations at a remote site where you won't be able to go out and deploy software updates. For general computing, it's more of a hassle than it's really worth, and you won't end up saving that much money. And, especially for a small organization, the KISS principle would generally argue against using thin clients.
I'd take a long look at the Sun Ray desktop boxes. They work quite well (assuming you have enough backend horsepower), even in Windows shops, and they're fairly cheap compared to normal desktops.
The biggest question in my mind is: Can you be OK with the possibility of losing EVERYTHING in one go? Is your boss OK with that?
If you put everyone's work on one server (I'm assuming you'll have proper backups, etc.), it's still possible for that server to fail. Is it OK for the failure of one server to take out the entire company for a day or so while you replace it, rebuild it, and get it back into operation?
I'd never even consider that solution, just because it creates such a wide-acting single point of failure, but your mileage may vary.
Red Hat's RHEV VDI offering is about to come out; it features SPICE (a protocol that beats RDP/ICA) and quite a few other things.
Have a look at http://www.redhat.com/ and of course http://www.redhat.com/virtualization/rhev/desktop/
One of the things most folks don't get when going to VDI is that your administrative costs don't necessarily go down; they go up, as you now get to manage two distinct desktop environments for every user. One of the big cost-saving benefits of VDI is in software management and hardware management, but not because it's virtual: VDI is usually a great way to force IT to manage software deployment better, and you generally get a more locked-down environment (no more developers installing tools as they please on their desktops). If you try to migrate a mismanaged desktop environment to VMs, it's far more likely to be more expensive than buying workstations and properly managing your environment. In addition, there is usually a cost associated with the underlying hypervisor, and that takes additional management skills.
The case study of Largo, Florida, may prove informative. The city migrated a significant number of non-technical users to a Linux-based thin-client network design and realized significant cost savings, as well as increased productivity (due to reduced workstation downtime and improved user data backup) as a result. Slashdot profiled the city several years ago. Since that article, it seems the city has migrated to a Citrix solution.
What you are describing is best served by Terminal Services rather than virtualisation. Regardless, I think that by the time you price the server(s) able to handle such a load, plus the cost of thin clients, you'll find it's a lot cheaper to have separate workstations.
The maintenance of separate machines is no harder and no more work than that of either TS or virtual machines, when done properly. On the other hand, having people able to keep working when the server is down is a huge plus in most cases.