I have a few lab environments where the computers get rebuilt on a periodic basis but need to keep the same ssh host keys, so that the people who connect to the lab computers (often from their own systems, not under my administration) don't get "host key changed" errors every time we upgrade the lab systems' OSes.
(This is similar to the question Smoothest workflow to handle SSH host verification errors? but in this case we've already decided that maintaining the same keys across system rebuilds is the best solution for us.)
Right now, I have a Puppet module named `ssh` that has the following clause in it:

```puppet
file { "/etc/ssh/":
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  source  => "puppet:///modules/ssh/$fqdn",
  recurse => true,
  require => Package['openssh-server'],
  notify  => Service['sshd'],
}
```
On the puppet master, each host has its own directory that contains all of the host key files (`ssh_host_key`, `ssh_host_dsa_key`, `ssh_host_rsa_key`, etc.), as implied by the `file` resource definition. My problem is that this feels like mixing code and data (since all of the host keys live inside a module directory), and I have to make another commit to the Puppet VCS every time I add a new set of hosts. Is there a better way to manage these things?
---

You can adjust your `fileserver.conf` to share out a separate directory used just for SSH configuration, which keeps those keys out of your Puppet module tree. With a mount point defined there, your manifest's `source` would reference the mount instead of a module, and files would be retrieved from `/srv/puppet/sshconfig/$fqdn/`.
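As a sketch, the mount point in `fileserver.conf` might look like this (the mount name `sshconfig` is an assumption; tighten the `allow` rule to match your site's policy):

```
# /etc/puppet/fileserver.conf
[sshconfig]
  path /srv/puppet/sshconfig
  allow *
```

The `file` resource's `source` would then change to `"puppet:///sshconfig/$fqdn"`, while the rest of the resource stays the same.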
---

My suggestion is to create a module or a script that uploads each machine's known hosts to an external server, runs `sort -u` on the combined file to remove duplicate entries, and then pushes the resulting file back out to all your machines, taking care, of course, with any new entries in that `ssh_known_hosts` file.
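The merge-and-dedupe step with `sort -u` could be sketched like this (the directory layout and file names are hypothetical; it assumes you have already collected one known_hosts file per machine into a single directory):

```shell
# Hypothetical layout: one known_hosts file collected from each machine.
mkdir -p collected
printf 'hostA ssh-rsa AAAA1\nhostB ssh-rsa AAAA2\n' > collected/hostA.known_hosts
printf 'hostB ssh-rsa AAAA2\nhostC ssh-rsa AAAA3\n' > collected/hostB.known_hosts

# Merge every collected file and drop duplicate entries with sort -u,
# producing one global ssh_known_hosts to push back out to all machines.
cat collected/*.known_hosts | sort -u > ssh_known_hosts
```

The resulting `ssh_known_hosts` here would contain three unique entries (the duplicate `hostB` line appears only once); distributing it, and reviewing genuinely new entries before trusting them, is left to your deployment tooling.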