Here's my situation: I'm setting up a test harness that will, from a central client, launch a number of virtual machine instances and then execute commands on them via ssh. The virtual machines will have previously unused hostnames and IP addresses, so they won't be in the ~/.ssh/known_hosts file on the central client.
The problem I'm having is that the first ssh command run against a new virtual instance always comes up with an interactive prompt:
The authenticity of host '[hostname] ([IP address])' can't be established.
RSA key fingerprint is [key fingerprint].
Are you sure you want to continue connecting (yes/no)?
Is there a way that I can bypass this and get the new host to be already known to the client machine, maybe by using a public key that's already baked into the virtual machine image? I'd really like to avoid having to use Expect or whatever to answer the interactive prompt if I can.
IMO, the best way to do this is the following:
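A sketch of that approach; the hostname and address below are placeholders for each new VM, and the `|| true` guards just keep the sketch runnable before ~/.ssh/known_hosts exists:

```shell
host=vm.example.test   # placeholder hostname
ip=192.0.2.10          # placeholder address
mkdir -p ~/.ssh

# Remove any stale entries first so nothing ends up duplicated
ssh-keygen -R "$host" 2>/dev/null || true
ssh-keygen -R "$ip"   2>/dev/null || true

# Append freshly scanned keys, hashed (-H), for both the name and the IP
ssh-keyscan -H "$host" "$ip" >> ~/.ssh/known_hosts 2>/dev/null || true
```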
That will make sure there are no duplicate entries, that you are covered for both the hostname and IP address, and will also hash the output as an extra security measure.
Set the StrictHostKeyChecking option to no, either in the config file or via -o:

ssh -o StrictHostKeyChecking=no [email protected]
For the lazy ones:
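Presumably a single append along these lines (hostname is a placeholder):

```shell
ssh-keyscan -H vm.example.test >> ~/.ssh/known_hosts
```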
-H hashes the hostname / IP address
As mentioned, using ssh-keyscan would be the right & unobtrusive way to do it.
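One way to do that, merging new scan results into the existing file without duplicating lines (hostname is a placeholder):

```shell
host=vm.example.test   # placeholder hostname
mkdir -p ~/.ssh && touch ~/.ssh/known_hosts

# Merge freshly scanned keys with the existing file, dropping exact
# duplicate lines, then swap the result into place
ssh-keyscan -t rsa,ecdsa,ed25519 "$host" 2>/dev/null \
  | sort -u - ~/.ssh/known_hosts > ~/.ssh/tmp_hosts
mv ~/.ssh/tmp_hosts ~/.ssh/known_hosts
```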
The above will do the trick to add a host, ONLY if it has not yet been added. It is also not concurrency safe; you must not execute the snippet on the same origin machine more than once at the same time, as the tmp_hosts file can get clobbered, ultimately leading to the known_hosts file becoming bloated...
You could use the ssh-keyscan command to grab the public key and append that to your known_hosts file. This is how you can incorporate ssh-keyscan into your play:
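The underlying step is just a keyscan append, which a shell task in the play could run for each new host; the hostnames below are placeholders, and the `|| true` only keeps the sketch runnable without live VMs:

```shell
mkdir -p ~/.ssh
# Loop a provisioning play's shell task might effectively perform
for h in vm1.example.test vm2.example.test; do
  ssh-keyscan -H "$h" >> ~/.ssh/known_hosts 2>/dev/null || true
done
```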
This would be a complete solution, accepting the host key for the first time only:
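That behaviour matches OpenSSH's accept-new policy (available since OpenSSH 7.6), which trusts a key on first contact but still rejects a changed key later; the user and host below are placeholders, and the `|| true` only keeps the sketch runnable without a live VM:

```shell
# Accept an unseen host key automatically on first connect
ssh -o StrictHostKeyChecking=accept-new \
    -o ConnectTimeout=5 user@vm.example.test true || true
```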
To do this properly, what you really want to do is collect the host public keys of the VMs as you create them and drop them into a file in known_hosts format. You can then use the -o GlobalKnownHostsFile=... option, pointing to that file, to ensure that you're connecting to the host you believe you should be connecting to. How you do this depends on how you're setting up the virtual machines, but reading the keys off the virtual filesystem, if possible, or even getting the host to print the contents of /etc/ssh/ssh_host_rsa_key.pub during configuration, may do the trick.

That said, this may not be worthwhile, depending on what sort of environment you're working in and who your anticipated adversaries are. Doing a simple "store on first connect" (via a scan or simply during the first "real" connection) as described in several other answers above may be considerably easier and still provide some modicum of security. However, if you do this I strongly suggest you change the user known hosts file (-o UserKnownHostsFile=...) to a file specific to this particular test installation; this will avoid polluting your personal known hosts file with test information and make it easy to clean up the now-useless public keys when you delete your VMs.

I use a one-liner script, a bit long but useful for doing this for hosts with multiple IPs, using dig and bash:
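A condensed sketch of the idea (the hostname is a placeholder; dig +short expands it to all of its A records, and the `|| true` only keeps the sketch runnable without DNS):

```shell
host=vm.example.test   # placeholder hostname
mkdir -p ~/.ssh
# Scan the name itself plus every address it resolves to
ssh-keyscan -H "$host" $(dig +short "$host" 2>/dev/null) \
  >> ~/.ssh/known_hosts 2>/dev/null || true
```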
The following avoids duplicate entries in ~/.ssh/known_hosts:
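For example (placeholder hostname; ssh-keygen -F exits zero when the host already has a known_hosts entry, so the scan only runs for hosts not yet recorded):

```shell
host=vm.example.test   # placeholder hostname
mkdir -p ~/.ssh
# Only scan and append if the host is not already known
ssh-keygen -F "$host" >/dev/null 2>&1 \
  || ssh-keyscan -H "$host" >> ~/.ssh/known_hosts 2>/dev/null || true
```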