I have multiple servers in a web cluster (identical configuration on all of them, apart from the IP).
How do you deploy config changes to multiple servers?
I make the new config, then create a copy for every server (inserting the correct IP), and next:
- upload them to every server, replacing the old ones (rsync over ssh)
- set up a job on every server that reloads the webserver at the same time (the servers are synced with NTP); this is done by issuing the commands from a script, to save the time spent logging in
- before adding the reload job, run a checksum test of the config on each server, with a notification in case of failure
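As a minimal sketch of the generate-and-checksum steps above (the template placeholder, server IPs, and paths are all invented for illustration):

```shell
#!/bin/sh
# Sketch: render one config per server from a template, record checksums.
# The @IP@ placeholder, server list, and /tmp working dir are hypothetical.
set -e

WORKDIR=/tmp/webcluster-demo
rm -rf "$WORKDIR"
mkdir -p "$WORKDIR"

# Hypothetical webserver config template
cat > "$WORKDIR/nginx.conf.tmpl" <<'EOF'
listen @IP@:80;
server_name example.com;
EOF

SERVERS="10.0.0.1 10.0.0.2 10.0.0.3"

for ip in $SERVERS; do
    out="$WORKDIR/nginx.conf.$ip"
    # Substitute the per-server IP into the template
    sed "s/@IP@/$ip/" "$WORKDIR/nginx.conf.tmpl" > "$out"
    # Record a checksum so the copy on the remote side can be verified
    sha256sum "$out" >> "$WORKDIR/checksums.txt"
    # The upload step would then be something like:
    #   rsync -e ssh "$out" "root@$ip:/etc/nginx/nginx.conf"
done
```

The remote check then boils down to running `sha256sum -c` against the recorded line for that host before scheduling the reload.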
What do you think of this method? What would be the "professional" way? :) (I'm not saying my way doesn't work... it works, and saves me the time of logging in to every webserver.)
Regards,
You can use any of the modern change automation tools (Puppet, Chef, cfengine, bcfg2, and so forth) for this. Any of them can deploy files, and restart services when files they manage are modified.
I've had great success with Puppet over the last few years in several environments.
Once you start using the tool for everything, it has the added benefit of documenting both your process and infrastructure.
Back it with a version control tool such as git or svn, and now you have... a versioned infrastructure.
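To make the "deploy files, restart services" point concrete, a Puppet manifest for this is only a few lines. A hedged sketch, assuming an nginx webserver and a module path that are purely illustrative:

```puppet
# Deploy the config and reload nginx whenever the file changes.
# The source path and service name are hypothetical examples.
file { '/etc/nginx/nginx.conf':
  ensure => file,
  source => 'puppet:///modules/nginx/nginx.conf',
  notify => Service['nginx'],
}

service { 'nginx':
  ensure => running,
  enable => true,
}
```

The `notify` relationship is what replaces the hand-rolled "schedule a reload job" step: the service is restarted only on runs where the managed file actually changed.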
I generally agree with bdha's answer - use a config management tool to manage your changes. Another point I want to make is that you should strive to use your system's package management tool as much as possible for everything that isn't a configuration file. It is much easier to manage a system that has a collection of packages installed than a system with a bunch of manual file edits (or a system with a bunch of automated file edits via puppet).
If you have configuration files that never change, those are also candidates for inclusion in system packages. Learn how to build packages in your system's package tool, and how to stage them in a centralized repository so you can then use tools like yum to manage and install them.
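For reference, packaging a never-changing config file can be as small as a skeleton `.spec` like the following (package name, file, and paths are invented); build it with `rpmbuild -bb`, drop the result into a `createrepo`-managed directory, and yum can install it everywhere:

```
# Hypothetical skeleton .spec for shipping a static config file as an RPM
Name:           mycorp-webserver-config
Version:        1.0
Release:        1
Summary:        Shared static webserver configuration
License:        Proprietary
BuildArch:      noarch
Source0:        mime.types

%description
Static configuration files that are identical on every host.

%install
mkdir -p %{buildroot}/etc/nginx
install -m 0644 %{SOURCE0} %{buildroot}/etc/nginx/mime.types

%files
/etc/nginx/mime.types
```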
Also consider your software push system carefully. A lot of people use puppet or cfengine to do this, but again there are some more specialized tools that may scale better as your environment gets larger. Examples of these types of tools include Capistrano and Pogo.
If you have a large number of servers you should definitely look at puppet or chef; they're the best solutions for taking care of all your requirements, and they'll even reload the server config as soon as a new one is picked up.
If you find that a bit overkill, you could just write a script that uses ssh keys to push the config from a central location. If I were you, I would use mercurial or bazaar on that central repo to track changes and be able to roll back easily in case things go bad.
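A sketch of what such a central push script could look like (host names, paths, and the mercurial commit step are hypothetical; `DRY_RUN=1` just prints the commands instead of running them):

```shell
#!/bin/sh
# Sketch: commit the config change, then push it to each host over ssh.
# Hosts and remote paths are invented; set DRY_RUN=0 to actually run.
set -e

HOSTS="web1 web2 web3"
CONFIG=nginx.conf
DRY_RUN=1

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Track the change first so a bad push can be rolled back
run hg commit -m "update $CONFIG"

for h in $HOSTS; do
    run rsync -e ssh "$CONFIG" "root@$h:/etc/nginx/$CONFIG"
    # Validate before reloading so a broken config never takes a host down
    run ssh "root@$h" "nginx -t && service nginx reload"
done
```

Running `nginx -t` before the reload gives you roughly the same safety net as the checksum-plus-notification step in the question, but checks syntax rather than just transfer integrity.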
Actually if you have a large number of servers, Cfengine is definitely the way to go. It runs every 5 minutes (as opposed to every hour!) I've heard over and over again from others that Puppet doesn't scale very well. You run into a trade-off between managing a large number of machines and understanding their state accurately. This is unfortunately due to its architecture, so it's hard to get around. I haven't really played around with Chef, so I'm not familiar with its full potential.