We have many servers and want to update them all. The current approach is that one of the sysadmins goes from server to server and runs aptitude update && aptitude upgrade - it's still not cool.
I am now looking for a solution that is better and smarter. Can Puppet do this job? How do you do it?
You can use the exec type, such as in the sketch below. To be honest, I did not try it myself, but I think you just need to create a new module that includes such an exec definition.
The apt-get upgrade command is interactive. To make it run quietly, you can add the option -q=2, as shown in the sketch below.
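A minimal sketch of such a resource (untried; the resource title and the use of the shell provider to allow command chaining are illustrative choices):

    exec { 'apt-upgrade':
      # Quiet, non-interactive update and upgrade; -q=2 keeps apt-get from prompting.
      command  => 'apt-get -q=2 update && apt-get -q=2 -y upgrade',
      path     => ['/usr/bin', '/usr/sbin', '/bin'],
      provider => shell,
    }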
If all your hosts are Debian, you can try the unattended-upgrades package:
http://packages.debian.org/sid/unattended-upgrades
Here we have been using Puppet to manage our Debian virtual machines; with Puppet we are able to enable and manage unattended-upgrades configs on all servers.
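For reference, the manual equivalent of what such a module typically enforces on Debian/Ubuntu (the generated file path is the stock one):

    apt-get install unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades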
Recently our team has been testing the mcollective tool to run commands on all servers, but Ruby skills are needed to use mcollective.
[s] Guto
I would recommend going for Puppet, facter and mCollective.
mCollective is a very nice framework where you can run commands over a series of hosts (in parallel) using facter as a filter.
Add to that a local proxy / cache and you'd be well set for server management.
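As a rough illustration of the fact-filter idea (the fact name is just an example; -F is mcollective's with-fact filter):

    # Which hosts would be targeted? Filter on a facter fact:
    mco ping -F operatingsystem=Debian
    # With an agent plugin such as the package agent installed, the same
    # filter restricts whatever action you run across those hosts.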
Use a tool that is made to run a single command on multiple servers. And by that I do not mean having a kazillion terminals open with Terminator or ClusterSSH, but instead having a single terminal to a management server running a tool suitable for the job.
I would recommend func, Salt or mCollective in this context. If you already have Puppet, go for mCollective (it integrates nicely with Puppet). If you don't, and you have an old Python on your machines, you might enjoy func. If your Python is new, try Salt. All these tools run the command specified at the command line asynchronously, which is a lot more fun than a sequential ssh loop or even doing the same aptitude commands in umpteen Terminator windows to umpteen servers.
You'll definitely love Salt.
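For instance, with Salt (assuming the minions are already accepted; the targeting expressions are illustrative):

    # Refresh package lists everywhere, then upgrade only hosts whose 'os' grain is Debian:
    salt '*' pkg.refresh_db
    salt -G 'os:Debian' pkg.upgrade
    # Arbitrary shell commands work too:
    salt -G 'os:Debian' cmd.run 'aptitude update && aptitude -y safe-upgrade'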
So I guess there are many things which contribute to a good solution:
Bandwidth: basically, two alternatives to save bandwidth come to mind, for example a caching proxy or a local mirror of the repositories.
Administration: I would configure a parallel shell like PDSH, PSSH or GNU Parallel and issue the command on all clients, after first testing it on an example machine. Then it is not very likely to fail on all the others. Alternatively, you could consider a cron job on all clients, but then it may fail unattended, so I would prefer the first solution (see the sketch below).
If you are concerned about upgrades running simultaneously, you could schedule the commands with at.
Logging: as parallel shells let you redirect output, I would combine stderr and stdout and write them to a logfile.
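A sketch of that workflow with pdsh (the host file, schedule and log path are placeholders):

    # Run the already-tested command on all clients in parallel, logging stdout and stderr together:
    pdsh -w ^hosts.txt 'apt-get update && apt-get -y upgrade' 2>&1 | tee upgrade-$(date +%F).log

    # Or queue the run for a quiet hour with at:
    echo "pdsh -w ^hosts.txt 'apt-get -y upgrade' >> /var/log/upgrade.log 2>&1" | at 02:00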
My own parallel ssh wrapper: classh is an alternative to the various Parallel and cluster ssh tools out there.
You might like it better or you might hate it. There are only a couple of reasons I'm mentioning it here:
It uses Python's subprocess.communicate() method, so you can only capture about 64K of stdout and, separately, up to 64K of stderr; also, any remote process which attempts to read from its stdin will simply stall until the local ssh subprocess is killed (automatically, by classh's timeout handling).
It's extremely simple to write a custom script in Python that uses classh.py as a module, so it's very easy to write something like the sketch below.
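A sketch of such a driver script. The class and method names below (SSHJobMan, start, poll, done, results) are assumptions about classh's interface rather than a verified API, so treat this as an outline:

    #!/usr/bin/env python
    # Hypothetical driver around classh; names are assumed, not verified against the real module.
    import classh

    hosts = ['web01', 'web02', 'db01']            # placeholder host list
    cmd = 'apt-get update && apt-get -y upgrade'

    job = classh.SSHJobMan(hosts, cmd)            # assumed constructor
    job.start()
    while not job.done():
        for host in job.poll():                   # the "nested completed loop"
            print(host)                           # e.g. queue follow-up work here

    for host, result in job.results.items():      # assumed: results keyed by hostname
        print('%s: %s' % (host, result))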
That's all there is to it. For example, in the nested completed loop you can gather a list of all those which returned some particular exit status, or scan for specific error messages, and set up follow-up jobs to handle those. (The jobs will be run concurrently, by default 100 jobs at any time, until each is completed; so a simple command on a few hundred hosts usually completes in a few seconds, and a very complex shell script in a single long command string, say fifty lines or so, can complete over a few thousand hosts in about 10 minutes: about 10K hosts per hour in my environment, with many of those located intercontinentally.)
So this might be something you can use as an ad hoc measure until you have your Puppet configuration implemented and well tested, and it's also quite handy for performing little ad hoc surveys of your hosts to see which ones are deviating from your standards in various little ways.
The answer using exec is pretty helpful.
However, according to the apt-get manual, it's not a good idea to use -q=2 this way (though I have used it for years without problems).
I have used a script myself for years, running apt-get the following way:
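A sketch of that kind of invocation (not the author's actual script; these are the standard options for a quiet, non-interactive run, and the Dpkg options keep existing config files on conflicts):

    #!/bin/sh
    # Quiet, non-interactive update and upgrade.
    export DEBIAN_FRONTEND=noninteractive
    apt-get update -q
    apt-get upgrade -q -y \
      -o Dpkg::Options::="--force-confdef" \
      -o Dpkg::Options::="--force-confold"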
Things like Puppet and the other tools people mentioned may certainly work, but it seems like overkill for what is basically just mimicking a few commands typed by a human. I believe in using the simplest tool for a specific job; in this case, a bash script is about as simple as it gets without losing functionality.
For years I've been happily upgrading and installing packages using apt-dater. It is a lightweight and effective tool for remote package management. It uses screen, sudo and ssh.
For package management, apt-dater may be an easier solution than configuration management tools.
apt-dater is handy for centralised package management on different GNU/Linux flavours such as Debian and CentOS.
You can use Fabric. Fabric is a Python (2.5-2.7) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
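A hedged fabfile sketch using the Fabric 1.x API (host names are placeholders):

    # fabfile.py -- run with: fab -P upgrade   (-P runs the hosts in parallel)
    from fabric.api import env, sudo

    env.hosts = ['web01.example.com', 'web02.example.com']   # placeholder hosts

    def upgrade():
        """Refresh package lists and apply pending upgrades."""
        sudo('apt-get update -q')
        sudo('apt-get upgrade -q -y')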
Use Webmin and its cluster feature, in which you can add all systems to one Webmin console and issue any command to them or control all of them from one place.
Or use Cluster SSH.
Or use PSSH.
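For example, with PSSH (the host file and user are placeholders):

    # Run the upgrade on every host listed in hosts.txt, showing output inline as hosts finish.
    parallel-ssh -h hosts.txt -l root -t 0 -i 'apt-get update && apt-get -y upgrade'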