I have database servers, web servers, SVN servers, etc. Oftentimes I ssh among them... or they auto-ssh.
How do I manage which servers get to log into which others?
I use Puppet and have a class defined for every key, then classes that include those classes to define the "groups" of keys I have. For us the groups are people (L1 techs, L2 techs, managers, developers), but you could just as well group db servers, file servers, SVN servers, etc. The various machine types then have their own manifests that define which of those groups get access to that type of machine: development boxes get L1, L2, and developers; prod servers get L1 and L2; sensitive servers get only L2, that sort of thing. Adding a new machine is just a matter of deciding which classes it belongs to and adding a few lines here and there, which we have documented in our new-machine commissioning procedures.
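A rough sketch of what that layout might look like in Puppet (all class names and the key material below are invented for illustration):

# One class per public key:
class ssh_keys::alice {
  ssh_authorized_key { 'alice@laptop':
    ensure => present,
    user   => 'deploy',
    type   => 'ssh-rsa',
    key    => 'AAAAB3Nza...', # public key material, truncated here
  }
}

# A "group" class that simply includes its members' key classes:
class ssh_keys::developers {
  include ssh_keys::alice
  # include ssh_keys::bob, etc.
}

# The manifest for a machine type decides which groups get access:
node /^dev\d+/ {
  include ssh_keys::developers
  # plus ssh_keys::l1_techs, ssh_keys::l2_techs, etc.
}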
We are using OpenLDAP for storing our users' account data.
It's stable, easy to expand, and has a lot of documentation and integration features, so I recommend it.
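For example, with the commonly used openssh-lpk schema, a user entry carrying an SSH public key might look like this (everything below is invented for illustration):

dn: uid=alice,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: ldapPublicKey
uid: alice
cn: Alice Example
sn: Example
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/alice
sshPublicKey: ssh-ed25519 AAAAC3Nza... alice@laptop

sshd can then be pointed at the directory with an AuthorizedKeysCommand helper, so keys are looked up centrally instead of living in per-host authorized_keys files.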
Kerberos is a secure, privilege-oriented, ticket-based system that understands the concepts of users, realms, roles, machines, etc. You first authenticate to the KDC and receive a ticket-granting ticket (TGT), which is then used to obtain service tickets for you when you request a particular privilege/login on a machine. You can control which principals can log into which accounts with centrally managed ~/.k5login files, distributed via Puppet/cfengine perhaps. The configuration-management machines can have their own principals/keys so they can update the other systems automatically. You would also need to secure the KDC physically and lock it down to maybe one or two critical users, as this is where you revoke principals and so on.
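For illustration, a ~/.k5login file is just a list of principals allowed to log into that account, one per line (the realm and names below are invented):

alice@EXAMPLE.COM
svc-backup/db01.example.com@EXAMPLE.COM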
Your implementation will vary wildly, I'm sure, but the above is a very broad-strokes overview of the setup we used to run.
It is a complex system, and it may require drastic changes on your end as far as implementation and security procedures go. In my experience, however, once the system is in place and configured correctly, it becomes transparent and is much less of a burden on the various teams involved.
You can read more about it here:
MIT Kerberos User's Guide
This can simplify your task a bit: use
ssh-copy-id [-i [identity_file]] [user@]master-host
to connect from each of these servers to one of them. Use clusterssh to issue this command on all servers at once. Then copy the authorized_keys file from master-host to all the other hosts, e.g. with clusterssh and scp. That'll do the trick: simple, fast, nice and ... done only once. :)
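If you prefer a plain loop over clusterssh for the distribution step, something like this works; hosts.txt and the paths are assumptions for this sketch:

# Push the collected authorized_keys from master-host to every other host.
while read -r host; do
  scp ~/.ssh/authorized_keys "$host":.ssh/authorized_keys
done < hosts.txt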
You might be interested in the ssh-agent concept, where you can let an SSH client connection ask "back" through the SSH connection(s) used to get to the current location for credentials.
It is very handy.
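A minimal illustration of that concept (hostnames invented): load a key into an agent, then forward the agent with -A so the intermediate host can authenticate onward without storing any private key of its own.

eval "$(ssh-agent)"          # start an agent for this shell
ssh-add ~/.ssh/id_ed25519    # load your private key into the agent
ssh -A bastion.example.com   # -A forwards the agent over this connection
# on bastion, "ssh db01.example.com" now authenticates via the forwarded agent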