I manage around 30 Ubuntu servers using Puppet. I've seen many references to cron-apt and apticron as approaches to keeping packages up to date, but I haven't been able to find a way to centrally manage the process. With cron-apt/apticron I would still need to log in to each host and run aptitude update to perform the update, not to mention review notifications from all 30 machines whenever a core package is updated.
There has to be a better way. Any suggestions?
A co-worker discovered and has looked briefly into apt-dater, which is a "terminal-based remote package update manager".
You use a curses-based interface to manage updates on all of your hosts, or groups of hosts, and it supports logging of the full apt session, including any errors that may be encountered. It relies on ssh and sudo on the managed machines.
See https://github.com/DE-IBH/apt-dater
Haven't used it myself, so I can't endorse it, but it sounds close to what you're looking for.
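For reference, the access it needs on each managed node would look roughly like this (the user, hostname, and sudoers file are placeholders, not taken from the apt-dater docs):
# Key-based ssh access for the management user:
$ ssh-copy-id admin@server01.example.com
# Password-less sudo for the package commands, e.g. in /etc/sudoers.d/apt-dater:
#   admin ALL=(root) NOPASSWD: /usr/bin/apt-get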
Since you're already using Puppet, the easiest way to do this (and the best for change control/tracking purposes) is to specify the desired version of packages you want installed in the puppet manifest. You keep an eye on the security announcements list, and when something you use comes through you just update Puppet to say "install this new version of this package". Assuming you're using revision control on your manifests, you then know when the "policy" was changed, and the reports from Puppet show you exactly when the change was actually made (so you can correlate that easily against any later log events).
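A minimal manifest sketch of that pattern (the package name and version here are hypothetical):
package { 'openssl':
  # Bump this pinned version when a security announcement lands;
  # Puppet then rolls the upgrade out fleet-wide on its next run.
  ensure => '1.0.1-4ubuntu5.5',
}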
Landscape might be of interest to you. It is the "official" tool for managing large Ubuntu deployments, and Canonical is probably very keen to get your dollars for its use.
RE-EDIT:
First, a disclaimer: I haven't used mirroring for Debian or Ubuntu, so I am not familiar with the software.
Second, it appears that apt-mirror would be "too heavy" a solution; my apologies. The original idea was that you would have a separate test machine (or test environment, probably a virtual machine) to deploy the update on. Once you are satisfied with how the update behaves, you would pull/put the package into your "deploy" mirror (there would be the local mirror of the official sources, and a secondary mirror for just the updates you wish to deploy). The remote machines would then run an update at a pre-set time, pulling it from your "deploy" mirror via a cron job along the lines of the sketch below.
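Something like this in each machine's crontab would do it (the timing and options are illustrative only):
# Pull from the internal "deploy" mirror at 03:00 and apply updates unattended:
0 3 * * * apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -qy upgrade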
Unfortunately, as I began to read through the details, it seems that apt-mirror will pull all kinds of stuff and not just the packages you are after. So I'm going to abandon this idea, although the concept has some merit.
Have a look at clusterssh (apt-get install clusterssh):
$ cssh server1 server2 server3 ...
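cssh opens a terminal for each host and mirrors your keystrokes to all of them, so you type the upgrade once and it runs everywhere, e.g.:
$ sudo apt-get update && sudo apt-get -y upgrade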
Without having really thought about it previously, my first idea would be something similar to what avery has suggested, especially if you already have a test environment.
Basically, you set your production machines to automatically upgrade from your own local repo, and you only update this repo after you've upgraded your test environment to the newest version of whatever you run.
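In practice that just means pointing every production host at the internal repo instead of the public archives; a hypothetical example (the mirror URL and suite are placeholders):
# /etc/apt/sources.list on each production machine:
deb http://apt.internal.example.com/ubuntu trusty main universe
deb http://apt.internal.example.com/ubuntu trusty-security main universe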
Apticron doesn't scale well; it's designed to be run in fairly small environments, but it does have some good points.
Some time ago I wrote a low-level/dirty Fabric automation script (fabfile) to meet similar requirements; you can check it out at:
https://gist.github.com/lgaggini/2be3c5bb47b8ce9267bd
What about a shell script that does it, where you can also set individual packages per node? It can be customised per endpoint or left to run fully automatically. I use it this way to update some cloud web and DB servers; some get special packages, others don't, and it scales. It's similar to what @lgaggini posted; a sketch is below.
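A minimal version of the idea, assuming key-based ssh and hypothetical hostnames and packages:
#!/bin/sh
# Upgrade the common base everywhere:
for host in web01 web02 db01; do
    ssh "$host" 'sudo apt-get update && sudo apt-get -y upgrade'
done
# Per-node extras: only the DB server gets its special packages.
ssh db01 'sudo apt-get -y install postgresql'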