I prefer Debian GNU/Linux as operating system for servers. I generally tend to think stable is the best choice. But I use unstable/testing on the desktop.
Are there real cases of servers using Debian unstable/testing?
Two years ago we needed PHP5 in a production environment. Debian stable didn't have it yet, and for some reason backports was not considered. We let our hosting company install testing, and it worked nicely.
However, now that we need to upgrade to Lenny, we can't just upgrade the production system in place: we have to clone the system, upgrade, test, etc., because the version numbers of many applications have increased greatly over the last two years.
So this now creates work for us (internal work hours) and also extra work for our hosting company, which we have to pay for.
Lesson learned; or at least, next time I'm prepared for what's coming.
If it's production, then don't use unstable. Use stable and backports instead, and testing if you must. Testing is ok for a desktop machine you can afford to break for a day. It's not for production.
Also, Zoredache mentioned apt-pinning. It's a little confusing at first, but worth learning. If you go that route, start with reading the apt_preferences man page. The key to apt-pinning is to keep it simple and start small.
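As a starting point, a minimal /etc/apt/preferences that keeps stable as the default but lets you pull individual packages from testing on request could look roughly like this (the priorities here are illustrative, not a recommendation; check the apt_preferences man page for the exact semantics):

```
# /etc/apt/preferences -- illustrative pinning sketch

# Prefer stable by default
Package: *
Pin: release a=stable
Pin-Priority: 700

# Keep testing available, but at a lower priority than stable
Package: *
Pin: release a=testing
Pin-Priority: 650
```

With both releases in sources.list, you'd then install a specific package from testing explicitly with something like `apt-get -t testing install somepackage` (package name is a placeholder).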
One last thing about the relative stability of the releases. Stable is always rock-solid, and testing is usually as reliable. When there's an impending release, testing gets much more stable and unstable gets a little stagnant. After a release, testing becomes a little less stable and unstable becomes buggy again.
I have pulled in individual packages from unstable/testing, but I don't run testing/unstable wholesale. You can use things like apt pinning or backports, or you can backport the specific packages you need for your environment yourself.
I've used testing on servers before, but these days backports has removed much of the reason for doing that.
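For reference, enabling backports on a Lenny-era system was just a sources.list addition along these lines (the mirror URL and suite name are era-specific examples, so adjust for your release):

```
# /etc/apt/sources.list addition for backports (Lenny-era sketch)
deb http://www.backports.org/debian lenny-backports main

# Backported packages are pinned low by default, so you install
# them explicitly, e.g.:
#   apt-get -t lenny-backports install somepackage
```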
stable + backports is the way to go: we even do custom backports for select packages, using the same sort of versioning scheme as backports.org
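A custom backport of that sort goes roughly like this (a sketch, not a recipe: the package name and version are placeholders, you need a deb-src line for testing, and real backports sometimes need dependency adjustments):

```
# Fetch the source package from testing
apt-get -t testing source somepackage
cd somepackage-1.2.3

# Bump the version with a backports-style local suffix so the real
# version still wins if it ever reaches stable (e.g. 1.2.3-1~bpo50+1)
dch --local '~bpo50+' "Rebuilt for stable"

# Build against the stable libraries installed on this box
dpkg-buildpackage -us -uc
```

The `~bpo` suffix is the important part: the tilde sorts lower than anything, which is what makes this versioning scheme safe across a later stable upgrade.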
Backports can cause problems: they introduce newer applications whose library dependencies can mismatch the rest of the system, causing trouble later down the line. Stable is often outdated, so I quite often use testing on internal machines (ones that can't be accessed externally).
I had a problem with unstable once and I will never run it again in production. After an apt-get upgrade, proftpd went crazy and was unkillable: it survived multiple kill -9s. While we were trying to kill it, one of my colleagues remembered killall5, the most powerful kill command on Linux. He invoked it without any parameters to see the usage help ("the proper syntax would be..."), but the help never came... The fun part is that everything had died, but the proftpd port was still open.
You can have a half-stable, half-unstable system if you configure it well, but as was already mentioned, backports solve most problems quite well. Production boxes are not worth the risk: it usually works fine, but sometimes you can have bad luck.
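One way to set up such a mixed box is to list both distributions in sources.list and fix the default release, so unstable is only ever used when you ask for it (mirror URLs here are illustrative):

```
# /etc/apt/sources.list -- both distributions available
deb http://ftp.debian.org/debian stable main
deb http://ftp.debian.org/debian unstable main

# /etc/apt/apt.conf -- make stable the default release, so
# upgrades track stable and unstable needs an explicit -t
APT::Default-Release "stable";
```

After that, `apt-get upgrade` stays on stable, and a one-off `apt-get -t unstable install somepackage` pulls just that package (and its dependencies, which is where the risk comes in) from unstable.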