I'm considering adding a cron job that runs yum -qy update
on a regular basis on some machines that do not get regular maintenance. The goal would be to keep the machines up to date with security patches that would otherwise be applied too late. I'm only using the CentOS base repositories.
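Concretely, the sort of crontab entry I have in mind (the schedule and file name are just examples):

    # /etc/cron.d/yum-autoupdate (hypothetical file) -- quiet, non-interactive
    # update every night at 03:30; cron mails any output to root.
    30 3 * * * root /usr/bin/yum -q -y update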
Questions:
- In your experience - how "safe" would this approach be? Should I expect failed updates once in a while? Roughly how often would this approach require reboots?
- Pros/cons or other gotchas with this approach?
- How are you keeping your machines up-to-date using automation?
It Depends
In my experience with CentOS it's pretty safe, since you're only using the CentOS base repositories.
Should you expect failed updates once in a while... yes... on the same level that you should expect a failed hard drive or a failed CPU once in a while. You can never have too many backups. :-)
The nice thing about automated updates is that you get patched (and therefore more secure) faster than doing it manually.
Manual patches always seem to get pushed off or regarded as lower priority than so many other things, so if you're going to go the manual route, SCHEDULE TIME ON YOUR CALENDAR to do it.
I've configured many machines to do automatic yum updates (via cron job) and have rarely had an issue. In fact, I don't recall ever having an issue with the BASE repositories. Every problem I can think of (off the top of my head, in my experience) has been a 3rd-party repository situation.
That being said... I do have several machines that I update MANUALLY. For things like database servers and other EXTREMELY critical systems I like to have a "hands on" approach.
The way I personally figured it out was like this... I think through the "what if" scenario and then try to estimate how long it would take to either rebuild or restore from a backup, and what (if anything) would be lost.
In the case of multiple web servers... or servers whose content doesn't change much... I go ahead and auto-update, because the amount of time to rebuild/restore is minimal.
In the case of critical database servers, etc... I schedule time once a week to look them over and patch them manually... because a rebuild/restore would be much more time-consuming.
Depending on what servers YOU have in your network and how your backup/recovery plan is implemented, your decisions may be different.
Hope this helps.
Pro: Your server is always at the latest patch level, often getting fixes applied before an exploit is circulating widely.
Con: Things running on that server can break without you knowing about it until someone calls with a problem: code that relies on features removed in later versions, configuration files whose syntax changes, and new security "features" that refuse to run code they deem exploitable.
Best practice: Have the server send you an email when it needs to be updated. Back up, or know how to roll back updates.
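As a sketch of the notification half, assuming a stock yum and a working local mailer (the script name is my own example, not anything standard):

    #!/bin/bash
    # mail-pending-updates.sh (hypothetical name) -- run daily from cron.
    # 'yum -q check-update' exits 100 when updates are pending, 0 when none.
    pending=$(/usr/bin/yum -q check-update 2>/dev/null)
    if [ $? -eq 100 ]; then
        echo "$pending" | mail -s "Pending yum updates on $(hostname)" root
    fi

For the roll-back half: recent yum versions keep a transaction history, so yum history and yum history undo <ID> can revert a bad update, but downgrades don't always restore modified configuration files, so keep real backups as well.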
On top of what most people have said here, I'd highly recommend signing up for the CentOS announce mailing list; they post emails about patches and their priorities right before pushing them to the repositories. It's useful to know in advance which packages need to be upgraded.
My setup lets yum update the system automatically once a day, and I have yum send me a mail listing the packages installed or upgraded right afterwards. I also receive a mail when yum hits a conflict and needs manual intervention (it checks every 4 hours).
Until now, everything has been running smoothly (for over 4 years). The only time I got caught off guard was when yum upgraded the regular kernel (my server is virtualized) and changed GRUB so that the regular kernel became the default. Two weeks later, during maintenance, the system got rebooted and all of my virtual servers were gone for a few minutes until I intervened manually.
Other than that, I haven't really had any problems.
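For what it's worth, that particular GRUB surprise can usually be prevented on RHEL/CentOS through /etc/sysconfig/kernel, which controls whether a freshly installed kernel becomes the boot default. A sketch, assuming a Xen host booting the kernel-xen package (adjust DEFAULTKERNEL to whatever your hypervisor actually boots):

    # /etc/sysconfig/kernel
    # Only a newly installed kernel whose package name matches DEFAULTKERNEL
    # may become the GRUB default, so a plain "kernel" update can't steal it.
    UPDATEDEFAULT=yes
    DEFAULTKERNEL=kernel-xen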
As long as you don't have any custom packages, and are using only the base repositories from CentOS, it should be fairly safe.
Also, a better way to achieve this would be to use yum-updatesd with

    do_update = yes

set. I suppose that as long as you have automated backups it wouldn't be too much of a worry, as long as you can live with some downtime on the server.
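For reference, a minimal sketch of the relevant settings in /etc/yum/yum-updatesd.conf (option names as shipped with yum-updatesd; the values here are only examples):

    [main]
    # how often to check for updates, in seconds (example: hourly)
    run_interval = 3600
    # send notifications by mail rather than dbus/syslog
    emit_via = email
    email_to = root
    # download updates plus dependencies, then apply them unattended
    do_download = yes
    do_download_deps = yes
    do_update = yes

Restart the yum-updatesd service after editing for the changes to take effect.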
I haven't tried this; I wouldn't want to personally, because there is a significant risk of breaking something, or of an unusual, obscure issue being introduced by an upstream fix. It's even worse if this is a server that rarely gets attention, because if something goes wrong you may not know about it.
If you can live with the server in question going down for a period of time if/when something breaks, and you have a response plan to restore the system to its previous state, as well as a way of reporting via logs or email when and what was updated (so you know it isn't stuck or waiting for a reply to something that requires intervention), then you can try it out. If it's a critical server or something important... I'd not want to risk it.
My servers aren't yours though :-)