This is probably a simple question for those of you already running configuration management tools. Are configuration management tools such as Puppet or Chef the right approach for keeping installed packages up to date?
Suppose I run a number of servers, mostly based on Debian and Ubuntu. Do configuration management tools make it easier to update packages installed from the repositories when security updates or bug fixes come along?
I currently run "unattended upgrades" to let the systems automatically install security updates, but I still have to connect to the servers and run aptitude update && aptitude safe-upgrade
every so often. Naturally this gets boring, tedious and error-prone the more servers there are.
Are tools such as Puppet or Chef the right approach to keeping installed packages up to date? Do any of you use these tools to avoid manually running aptitude or an equivalent on 15 servers? I am quite certain the answer to these questions is "Yes, of course!"
But where can I find more information about this particular use case? I have not yet had the time to study Puppet or Chef in-depth, and the example cookbooks or classes only show more or less trivial examples of installing one particular package, such as ssh. Do you have any resources to recommend, other than the official documentation (I am, of course, going to study the docs once I know which, if any, of the tools are right for me).
You can do it with Puppet. In a package resource you either set ensure => latest or pin an explicit version string, i.e. to specify the latest or a required version, something like the sketch below.
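A minimal sketch; the package name and version string are illustrative:

    # track whatever version the repository currently offers
    package { 'apache2':
      ensure => latest,
    }

    # or pin an exact, known-good version across all systems
    package { 'apache2':
      ensure => '2.2.14-2ubuntu1',
    }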
This does at least mean you can specify the same version across all systems, as well as preventing servers from (potentially dangerously) automatically upgrading themselves. I've used this method in production on a number of sites, and it works very well.
Running unattended upgrades scares me a bit, especially if they're upgrading mission-critical packages, kernels, mysql libraries, apache, etc. Especially if the install script might want to restart the service!
I think this is probably the wrong question. Certainly using configuration management tools like Puppet and Chef to maintain your infrastructure is a huge leap forward from trying to do it all manually. The issue of keeping your package versions up to date and in sync is not one that any of these tools solves directly. To automate this properly you need to bring the package repositories themselves under your control.
The way I do this is to maintain a dedicated Yum repo (for Redhat/Fedora/CentOS; an APT repository for Debian/Ubuntu) which contains the packages I care about for a particular site. These will generally be the dependencies of the application itself (Ruby, PHP, Apache, Nginx, libraries and so on) and security-critical packages.
Once you have this set up (usually you can just mirror the required packages from the upstream repo to start with) you can use Puppet's "ensure => latest" syntax to make sure that all your machines will be up to date with the repo.
It would be wise to use a 'staging' repo to enable you to test updated versions of packages before rolling them blithely out to production. This is easily done with Puppet without any duplication of code by using repository templates.
Automating your package versioning strongly encourages you to bring all of your production systems into sync, as maintaining multiple repos and packages for different OS distros, versions and machine architectures is very time consuming and likely to lead to all sorts of obscure problems and incompatibilities.
All of this advice applies equally to Ruby gems, Python eggs and other package systems which you may use.
I've written a little Puppet tutorial which should help you get up and running with Puppet quickly. You could deploy a custom repo definition to your machines using Puppet as the first step in bringing package versions under control.
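As a sketch, such a repo definition might look like the following, using Puppet's built-in yumrepo type; the repo name, URLs and the use of $environment to select a staging or production repo are all illustrative:

    # Illustrative only: point every machine at a site-controlled repo.
    # Interpolating an environment variable into the URL gives you a
    # staging/production split without duplicating code.
    yumrepo { 'mysite':
      descr    => 'Site-controlled package repository',
      baseurl  => "http://repo.example.com/${environment}/centos/\$releasever/\$basearch/",
      enabled  => 1,
      gpgcheck => 1,
    }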
Puppet (and I'm pretty sure Chef does too) ties in with your apt-get/yum software repositories. Since those do the heavy lifting of figuring out which packages are available, ensure => latest just works for Ubuntu/CentOS/Debian and the like, as long as you have set up the appropriate files correctly (/etc/apt/sources.list, etc.).
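For reference, entries in /etc/apt/sources.list look like this (release names and mirror URLs are examples):

    deb http://archive.ubuntu.com/ubuntu focal main universe
    deb http://security.ubuntu.com/ubuntu focal-security main universe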
Whilst Puppet/Chef are possible contenders for this functionality, making them keep everything on the system up to date requires either custom types or listing every package (including underlying system libraries like libc6) as resources with ensure => latest. For the specific case of automated package updates, you might want to look into the cron-apt package, which does what you want as well.
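By default cron-apt only downloads updates overnight; a common recipe for actually installing security updates is a sources list limited to the security repo plus a custom action file. The paths and options below are illustrative:

    # /etc/apt/security.sources.list -- only the security repository
    deb http://security.debian.org/ stable/updates main

    # /etc/cron-apt/action.d/5-security -- apt-get arguments, one action per line
    dist-upgrade -y -o Dir::Etc::SourceList=/etc/apt/security.sources.list

    # /etc/cron-apt/config -- mail whenever something was upgraded
    MAILTO="root"
    MAILON="upgrade"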
This question is old, but I thought I'd answer in an up-to-date way, since the answer I'd give now wasn't available back then.
If you are using Puppet or Chef, look into MCollective. It is a very nice tool by the Puppet Labs guys that allows you to send commands to groups of servers: http://docs.puppetlabs.com/mcollective/
It also has an apt plugin, which can be used to do an apt update on any number of servers: http://projects.puppetlabs.com/projects/mcollective-plugins/wiki/AgentApt
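A hedged example of what driving this from the command line might look like; mco ping is standard, while the rpc action name for the apt agent depends on the plugin version you install:

    # confirm which nodes respond over the middleware
    mco ping

    # illustrative: ask every node running the apt agent to refresh
    # its package lists (check the plugin docs for the exact action)
    mco rpc apt update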
I realize this is a bit late for your original question, but here it is in the spirit of "better late than never".
I use Cfengine 3 to do this on several servers. I specify an explicit list of packages for automatic update, rather than updating everything without a little care. It works great, and cfengine 3 is very lightweight.
Here's a promise snippet along the lines of what I use in my cfengine configuration (the package names below are illustrative):
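    # sketch assuming the stock 'apt' package_method body from the
    # CFEngine standard library
    bundle agent security_updates
    {
    vars:
        # explicit whitelist -- only these packages are auto-updated
        "pkgs" slist => { "openssh-server", "openssl", "libc6" };

    packages:
        "$(pkgs)"
            package_policy => "update",
            package_method => apt;
    }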
Hope this helps.
I agree with Jonathan. The Cfengine 3 approach is nice because you can control all aspects of package management without having to recode at a low level.
We use puppet + apt-dater.
You can also use package management tools such as Canonical's Landscape, which is designed to manage and monitor Ubuntu/Debian systems. It manages multiple systems, allows you to update them simultaneously and provides some basic monitoring capabilities.
Security updates
Generally I think it's simplest to use Ansible or similar to set up the robust unattended-upgrades package for Ubuntu/Debian (or yum-cron for RHEL/CentOS). You can use Puppet, Chef or other tools, but I will discuss Ansible here.
unattended-upgrades takes care of auto updates every day, and is normally constrained to security updates only (to increase stability). It can also make non-security updates at the same time if you prefer, which is much easier than running a command via Ansible every day. If the server needs a reboot after the update, this tool can auto-reboot at a certain time.
If your reboots are more complex, unattended-upgrades can email you, and it also creates /var/run/reboot-required, so that Ansible (or similar) can manage the reboots at a suitable time (e.g. rolling reboots of a cluster of web or DB servers to avoid downtime, waiting for each server to become available on a certain TCP port before continuing).
You can use Ansible roles such as jnv.unattended-upgrades for Ubuntu/Debian systems, or the simple but effective geerlingguy.security, which also covers RHEL/CentOS (and hardens SSH config); a minimal playbook sketch follows.
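A minimal playbook sketch, assuming the jnv.unattended-upgrades role has been installed from Ansible Galaxy; the variable names follow that role's README and may differ between role versions:

    # playbook.yml -- illustrative only
    - hosts: all
      become: yes
      roles:
        - role: jnv.unattended-upgrades
          # assumed role variables: enable auto-reboot at a quiet hour
          unattended_automatic_reboot: true
          unattended_automatic_reboot_time: "02:00"

Run it with something like ansible-playbook -i inventory playbook.yml.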
If rapid security updates are less important, you could put them through a test process on less important servers first, and run the unattended-upgrade command once tests show there are no problems - however it's quite rare for server-oriented security fixes to cause problems, in my experience.
General updates
Updates other than security should go through a normal continuous integration and testing process, to ensure things don't break.
I have seen aptitude safe-upgrade cause serious problems on servers in the past, so it's best not to run this automatically, whereas security updates are generally quite safe.