In our environment we have a variety of scripts that we use on 30+ servers. Currently, we copy the scripts onto each server when the OS is installed. However, this means any change to a script requires a manual redeployment to every server.
I'm considering setting up an NFS export, but this has some drawbacks:
- I'm under the impression that NFS exports consume network resources even when not in use.
- When I mount the /scripts directory off NFS, it will hide any local scripts.
- Permissions. These machines all have local (file-based) users and groups, so UIDs and GIDs may not match across servers.
- If the NFS server goes down, the scripts are unavailable.
Other options I have considered are Subversion (or any source control), rsync, and RPMs. The benefit of svn is version control of the scripts. Rsync is simple and allows local scripts. I don't think RPM would work because of our Solaris servers.
We have Solaris, Red Hat Enterprise Linux, and SUSE Linux servers, and we only have a small number (~10) of small scripts to deploy, so the simpler the better.
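For what the rsync option could look like: a minimal sketch that pushes a canonical copy from one build host to each server. The source directory, target path, and host list are hypothetical stand-ins.

```shell
#!/bin/sh
# Sketch of an rsync push; paths and hostnames are hypothetical.
# Without --delete, scripts that exist only on the target host are
# left in place, so local scripts survive a deploy.
push_scripts() {
    host="$1"
    src="${2:-/opt/scripts-master/}"   # trailing slash: copy contents, not the dir itself
    rsync -a "$src" "root@${host}:/usr/local/scripts/"
}

# Typical use: loop over a one-host-per-line file.
# while read -r h; do push_scripts "$h"; done < hosts.txt
```

Run from cron on the build host (or by hand after a change), this keeps the manual step down to editing one copy.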
Consider: Puppet, Bcfg2, Cfengine.
We use Subversion to develop our local scripts and then deploy them as RPMs: a simple build script (also maintained as part of our script package) pulls an SVN export and builds from the RPM spec file it finds there. The result is then deployed to our local yum repository, from which all the machines can pull it, either via automatic updates or manually, depending on the type of machine (development, staging, production, etc.).
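The export-and-build step can be sketched roughly like this; the repository URL and spec file name are hypothetical, and the details of your spec file will differ:

```shell
#!/bin/sh
# Sketch of the SVN-export-then-rpmbuild step; repo URL and spec
# file name are hypothetical stand-ins.
build_release() {
    repo="$1"                    # e.g. http://svn.example.com/scripts/trunk
    work=$(mktemp -d)
    # 'svn export' gives a clean tree with no .svn metadata, which is
    # what you want inside a package.
    svn export "$repo" "$work/scripts"
    # Build the binary RPM from the spec file kept in the repo.
    rpmbuild -bb "$work/scripts/scripts.spec"
}
```

The resulting RPM is then copied into the yum repository directory and `createrepo` is re-run so clients pick it up.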
In my current company we only use RHEL/CentOS servers, so a single yum repo is all we need, but at a previous company I built a similar setup that produced RPMs for both RHEL (yum repo) and Mandriva (urpmi repo), self-extracting tgzs for Solaris, and even executable installers for Windows (NSIS can build installer packages on the same Linux server where all your other builds are made).
Once you have a self-extracting tgz, deploying it automatically to your Solaris machines is a simple matter of an SSH call.
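That SSH step might look like the sketch below; the host name, installer path, and the assumption that the archive is runnable with `sh` are all illustrative:

```shell
#!/bin/sh
# Sketch of pushing and running a self-extracting archive on one
# Solaris host; names and paths are hypothetical.
deploy_installer() {
    host="$1"
    installer="$2"                 # e.g. /build/scripts-1.0.sh
    name=$(basename "$installer")
    scp "$installer" "root@${host}:/tmp/"
    # Run the installer remotely, then clean up the copy.
    ssh "root@${host}" "sh /tmp/${name} && rm -f /tmp/${name}"
}
```

With key-based SSH auth in place, a loop over your Solaris host list makes the whole deploy unattended.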
If you have such a small setup, then Subversion might be a bit much. At the same time, it also provides change management, one of the pillars of good systems administration.
Subversion sounds like a great idea, and it is how we handle similar deployments, although we don't use quite as many machines. If your machines are spread across multiple locations, you may want to consider one of the distributed VCSs (Git, etc.): a cron job on each machine can pull from a nearby mirror, and a 'central' mirror in each data center can itself update at regular intervals.
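The per-machine cron job can be as small as this sketch; the checkout path, remote, and branch names are hypothetical:

```shell
#!/bin/sh
# Sketch of the per-machine update job; path and remote/branch names
# are hypothetical. A crontab entry such as
#   */15 * * * * /usr/local/sbin/update-scripts
# would run it every 15 minutes.
update_scripts() {
    dir="$1"                       # e.g. /usr/local/scripts (a git checkout)
    cd "$dir" || return 1
    # --ff-only refuses to create merges, so a machine can never
    # silently diverge from its upstream mirror.
    git pull --ff-only origin master
}
```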
If you do like the idea of RPMs, Solaris has its own packaging system (SVR4, a.k.a. sysv packages), and you could generate a Solaris package alongside the RPM. I assume this process will be automated, so it shouldn't be much extra trouble.
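A bare-bones SVR4 build boils down to a `pkginfo` file, a `prototype` file, and a `pkgmk`/`pkgtrans` pair. In this sketch the package name `ACMEscr`, the file list, and the staging paths are all hypothetical:

```shell
#!/bin/sh
# Sketch of building an SVR4 (sysv) package; the package name ACMEscr
# and the file list are hypothetical stand-ins.
make_sysv_pkg() {
    stage="$1"                     # directory to build metadata in
    cd "$stage" || return 1
    cat > pkginfo <<'EOF'
PKG=ACMEscr
NAME=local admin scripts
VERSION=1.0
ARCH=sparc
CATEGORY=application
EOF
    # prototype lists the pkginfo itself plus every file to install.
    cat > prototype <<'EOF'
i pkginfo
f none /usr/local/scripts/backup.sh 0755 root bin
EOF
    pkgmk -o -d /var/spool/pkg -f prototype
    # Convert to datastream format: one file you can scp to each host
    # and install with pkgadd.
    pkgtrans -s /var/spool/pkg /tmp/ACMEscr.pkg ACMEscr
}
```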
The biggest drawback to NFS is that it makes your entire architecture depend on a single file server. If that goes down, you lose all your scripts. Depending on what the scripts do, this may or may not be acceptable to you.
We have a similar scenario: around 30 servers with 15-25 scripts to distribute, among other packages. We have had very good results with SVN in our labs, yum for distribution, and RPM for package maintenance on the actual servers.
The tricky part is automating the RPM build, as it imposes a time tax: each time you find a bug you need to fix it locally and then cut a new RPM release to install it, otherwise you lose all the magic.
While Puppet and Bcfg2 seem like good choices, we discarded them because they would add complexity to the system. Keep it simple.