It may be safe—or, more accurately, the level of risk may be within your range of comfort. The level of acceptable risk will depend on several factors.
Do you have a good backup system that will allow you to quickly revert if something breaks?
Are you forwarding server logs off to a remote system, so that if the box goes belly up you will still know what happened? (See the sketch after this list.)
Are you willing to accept the possibility that something may break and that you may have to do a quick restore/revert on the system?
Have you manually compiled anything on your own, or did absolutely everything installed on your system come from the official repositories? If you installed something locally, there is a chance that an upstream change may break your locally maintained/installed software.
What is the role of this system? Is it something that would barely be missed if it died (e.g. a secondary DNS server), or is it a core piece of your infrastructure (e.g. an LDAP server or primary file server)?
Do you want to set this up because nobody responsible for the server has the time to maintain the security patches? The potential risk of being compromised by an unpatched vulnerability may be higher than the potential for a bad update.
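On the log-forwarding question above, a minimal sketch with rsyslog, where loghost.example.com is a placeholder for your central log server:

    # /etc/rsyslog.d/90-remote.conf -- copy everything to a remote log host
    # a single @ forwards over UDP, @@ over TCP
    *.* @@loghost.example.com:514

Restart rsyslog after dropping the file in place so the rule takes effect.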
If you really do think you want to do this, I suggest you use one of the tools that are already out there for this purpose, like cron-apt. They have some logic to be safer than just a blind apt-get -y update.
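To illustrate that "safer" default behaviour (a sketch only; exact file names and contents vary by cron-apt version), the stock action files refresh the lists and download pending upgrades without installing anything:

    # /etc/cron-apt/action.d/0-update -- refresh the package lists quietly
    update -o quiet=2

    # /etc/cron-apt/action.d/3-download -- fetch upgrades but do not install (-d = download only)
    autoclean -y
    dist-upgrade -d -y -o APT::Get::Show-Upgraded=true

Because the dist-upgrade runs with -d, the packages sit in the cache until an administrator applies them.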
It is generally safe, but I wouldn't recommend it for a simple reason:
You lose a known state.
In a production environment, you need to know exactly what's on each system, or what's supposed to be on it, and be able to reproduce that state with ease.
Any changes should be done via a change management process, where the company is fully aware of what it is getting into, so it can later analyze what went wrong and so forth.
Nightly updates make this kind of analysis impossible, or at least harder to do.
I might do that on stable, or on Ubuntu, but not on an unstable branch, or even the testing branch.
Though, when I put my sysadmin hat on, I believe that I should be manually applying all updates, so that I can maintain consistency between servers -- and also so that, if one day a service breaks, I know when I last updated that service. That's something I might not check if updates were proceeding automatically.
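On that last point, one low-tech way to see when a given service was last updated, assuming the standard dpkg log and using apache2 purely as an example package name:

    # list every recorded upgrade of apache2 (also matches apache2-*; tighten the pattern if needed)
    grep ' upgrade apache2' /var/log/dpkg.log

Older entries end up in the rotated copies of that log, so check those too if the upgrade was a while ago.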
We use stable and schedule apt-get upgrade for Tuesday evening on most of our Debian systems (this coincides with our Microsoft "Patch Tuesday" updates). It works out well. We also have all upgrade events logged to Nagios, so we can see a history of when upgrades were last performed on any server.
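A minimal sketch of that kind of schedule in root's crontab; the log path is a placeholder, and the Nagios reporting is site-specific and left out:

    DEBIAN_FRONTEND=noninteractive
    # m h dom mon dow  command -- Tuesday at 22:00, output kept for later review
    0 22 * * 2  apt-get update -qq && apt-get -y upgrade >> /var/log/apt-weekly-upgrade.log 2>&1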
When you specify this is a "production" server, does that mean there are development and test servers as well? If so, the patches should be tested on those systems before being installed on the production box.
I wouldn't do it. Bad patches do happen and I wouldn't want a system failing in the middle of the night or while I was otherwise unavailable. They should be pushed in a maintenance window when an administrator is available to monitor the update.
Yes, as long as you are talking about update and not upgrade. Apt will even do it for you if you put the appropriate line in a file under /etc/apt/apt.conf.d/.
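The directive in question is presumably the standard APT periodic option; a minimal sketch, assuming the stock apt cron job and a hypothetical file name:

    // /etc/apt/apt.conf.d/02periodic -- refresh the package lists once a day
    APT::Periodic::Update-Package-Lists "1";

That only runs the equivalent of apt-get update; nothing gets installed, which is what keeps it an "update" rather than an "upgrade".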
I remember doing that in a previous job; I ended up with problems on the production server because an update rewrote a config file automatically.
Therefore, I would advise you to supervise updates.
If the alternative is applying updates irregularly, you don't actively follow security updates, and you are running a vanilla stable Lenny, then auto-updating will probably increase the security of your machine, since known security holes will be patched faster.
Ubuntu Server has a package (unattended-upgrades) that will automatically install security updates, and it allows you to blacklist certain packages as well. The linked documentation also covers apticron, which will email you when there are updates available for your server.
You can find out more about it at the following pages depending on which version of Ubuntu Server you're running.
EDIT: This assumes you're running Ubuntu, although I would bet the same packages and solution are available on Debian.
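For reference, a rough sketch of the knobs involved, assuming the unattended-upgrades package; the origin string and blacklist entry are placeholders that depend on your release:

    // /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    // /etc/apt/apt.conf.d/50unattended-upgrades
    Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}-security";
    };
    Unattended-Upgrade::Package-Blacklist {
            "mysql-server";   // example: never touch the database server automatically
    };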
Take a look at cron-apt. By default it only downloads the package lists and package files, but you can tune it to send mail, or even to upgrade the system.
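A sketch of that tuning, assuming the usual /etc/cron-apt layout; the mail address and the extra action file are placeholders, and the upgrade action only belongs on a box where you have accepted the risks discussed above:

    # /etc/cron-apt/config -- mail a report when packages are actually upgraded
    MAILTO="admin@example.com"
    MAILON="upgrade"

    # /etc/cron-apt/action.d/9-upgrade -- add this only if you want unattended installs
    dist-upgrade -y -o APT::Get::Show-Upgraded=true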