I'm running Ubuntu on an EBS-backed EC2 instance.
In order to change the security group of my instance, I followed the instructions here for moving the EBS volumes to a new instance. Then I reassigned my Elastic IP to the new instance.
Now ssh complains that the RSA key has changed, but I don't see any mention of RSA key generation in the console log. Why does it do this? How can I get the "new" host RSA fingerprint or restore the "old" one?
Update: The procedure I detailed below is much more involved than necessary. The easiest way to manage ssh keys on an Ubuntu EC2 server is to specify them at instance launch with user data.
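For example, here's a minimal sketch of that approach, assuming boto3 for the launch call and the `ssh_keys` cloud-config directive that cloud-init documents for seeding host keys; the AMI ID, key pair name, and key material are placeholders:

```python
# Hedged sketch: pre-seed the SSH host key via cloud-init user data at launch.
# The AMI ID, key pair name, and key material below are placeholders.
import boto3

USER_DATA = """#cloud-config
ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    ...private host key material here...
    -----END RSA PRIVATE KEY-----
  rsa_public: ssh-rsa AAAA... root@host
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder Ubuntu AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",        # placeholder EC2 key pair for login access
    UserData=USER_DATA,          # cloud-init reads this on first boot
)
```

Because the host key is supplied up front, its fingerprint is known before the first connection, so ssh has no surprise to complain about.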
Here's how I was able to get the new server RSA fingerprint (a scripted sketch of the same volume shuffling follows the list):

1. Run a new EBS-backed instance and record its new temporary RSA fingerprint from the console log.
2. Stop the new instance.
3. Detach the EBS volume from the new instance.
4. Attach the old volume to `/dev/sda1` on the new instance.
5. Start the new instance with the old volume attached. This is when, as Michael Lowman points out, the `ssh_host_rsa_key` was (silently) regenerated. If I had skipped straight to step 7, I should have seen the host RSA key from the old instance.
6. Stop the new instance.
7. Detach the old volume from `/dev/sda1` and re-attach it to `/dev/sdb`.
8. Re-attach the new instance's original EBS boot volume to `/dev/sda1`.
9. Start the new instance and connect via SSH (the RSA fingerprint should match the temporary one noted in step 1).
10. Copy the new `ssh_host_rsa_key.pub` from the old EBS volume (now mounted on `/dev/sdb`) into my local `known_hosts` file.
11. Stop the new instance, detach the new volume from `/dev/sda1`, and delete it.
12. Detach and re-attach the old volume to `/dev/sda1`.
13. Bring up the new instance.
14. ssh doesn't complain about the host RSA fingerprint.
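For reference, a rough boto3 sketch of the stop/detach/attach shuffle used above; the instance and volume IDs are placeholders, and error handling is omitted:

```python
# Rough sketch of the volume shuffling above with boto3; IDs are placeholders.
import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE = "i-0123456789abcdef0"   # the new instance
OLD_VOL = "vol-0aaaaaaaaaaaaaaaa"  # boot volume from the old instance
NEW_VOL = "vol-0bbbbbbbbbbbbbbbb"  # boot volume the new instance launched with

# Step 1: the temporary host key fingerprints appear in the console output
# (the API returns it base64-encoded).
out = ec2.get_console_output(InstanceId=INSTANCE).get("Output", "")
print(base64.b64decode(out).decode("utf-8", "replace"))

# Steps 2-5: stop, swap the root volume, and start again.
ec2.stop_instances(InstanceIds=[INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE])

ec2.detach_volume(VolumeId=NEW_VOL, InstanceId=INSTANCE)
ec2.get_waiter("volume_available").wait(VolumeIds=[NEW_VOL])

ec2.attach_volume(VolumeId=OLD_VOL, InstanceId=INSTANCE, Device="/dev/sda1")
ec2.get_waiter("volume_in_use").wait(VolumeIds=[OLD_VOL])

ec2.start_instances(InstanceIds=[INSTANCE])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE])
```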
The question still remains: why did it change?
The host key is generated on the first boot of any instance. Init scripts that access the instance metadata are run at every boot. The init script saves the instance ID in a particular file; if that file is absent or contains a different ID, the system initialization stuff is run.

That includes generating the host key (stored at `/etc/ssh/ssh_host_{rsa,dsa}_key`), downloading the user's public key from the metadata and storing it in the `authorized_keys` file, setting the hostname, and performing any other system-specific initialization. Since the determining factor is not the hard disk but the (unique to each instance) instance ID, these things will always be done when you boot an EBS volume attached to a new instance.
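As a rough illustration of that check (not cloud-init's actual code), assuming IMDSv1-style unauthenticated metadata access and a hypothetical cache file path:

```python
# Illustration of the "run once per instance" logic described above.
# The metadata URL is the standard EC2 endpoint (unauthenticated, IMDSv1-style);
# the cache file path is hypothetical, not cloud-init's real location.
import os
import urllib.request

CACHE = "/var/lib/mytool/instance-id"   # hypothetical state file
METADATA = "http://169.254.169.254/latest/meta-data/instance-id"

def needs_per_instance_init() -> bool:
    current = urllib.request.urlopen(METADATA, timeout=2).read().decode()
    previous = None
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            previous = f.read().strip()
    if previous == current:
        return False                     # same instance ID: skip first-boot work
    os.makedirs(os.path.dirname(CACHE), exist_ok=True)
    with open(CACHE, "w") as f:          # remember which instance we ran on
        f.write(current)
    return True                          # new or changed ID: regenerate host keys, etc.

if __name__ == "__main__":
    print("per-instance init needed:", needs_per_instance_init())
```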
Edit:
I looked deeper into Ubuntu specifically and installed an Ubuntu AMI (3ffb3f56). I'm not a big Ubuntu guy (I usually prefer Debian), so this was getting a little deeper into the Ubuntu upstart-based init sequence than I usually go. It seems what you're looking at are the `/etc/init/cloud*.conf` jobs. These run `/usr/bin/cloud-init` and friends. All the code's in python, so it's pretty readable. The base is provided by the package `cloud-init` and the backend for the scripts is provided by `cloud-tools`.
You could look and see how it determines "once-per-instance" and trick it that way, or work around your problem with some other solution. Best of luck!
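If you do go digging, one hedged starting point is to compare what's cached on disk with the live metadata; the `/var/lib/cloud` layout differs between cloud-init versions, so the paths below are only a guess at where to look:

```python
# Hedged inspection sketch: show the instance ID the metadata service reports
# and list state files under /var/lib/cloud (layout varies by cloud-init version).
import os
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data/instance-id"
STATE_DIR = "/var/lib/cloud"             # cloud-init's state directory

current = urllib.request.urlopen(METADATA, timeout=2).read().decode()
print("instance ID from metadata:", current)

for root, _dirs, files in os.walk(STATE_DIR):
    for name in files:
        # "sem" (semaphore) files and cached instance IDs are what mark
        # work as already done for a given instance.
        if "sem" in root or "instance" in name:
            print(os.path.join(root, name))
```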
(As far as I know) EC2 images are initially accessible via the key-pair that you associate with them, regardless of the keys set up on the machine. Consider the scenario where you launch a public AMI: you don't have the private/public keys to access it, so you generate a key-pair, associate it, and use the private key from the key-pair. Moreover, if you have an instance to which you have lost access, reloading it on another instance will typically let you access it by setting a new key-pair.

It would stand to reason, therefore, that at least one key (root) is set based on the key-pair at the time the image is launched.
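A sketch of that flow, assuming boto3; the key name and AMI ID are placeholders:

```python
# Sketch: create a key pair, keep the private half locally, and launch an AMI
# with that key name so the public half ends up in authorized_keys at first boot.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

kp = ec2.create_key_pair(KeyName="recovery-key")   # placeholder key name
with open("recovery-key.pem", "w") as f:
    f.write(kp["KeyMaterial"])                     # private key; AWS keeps only the public half

ec2.run_instances(
    ImageId="ami-xxxxxxxx",                        # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="recovery-key",                        # public key delivered via instance metadata
)
```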
(A side note: 'fingerprint' usually means the server's host-key signature. It varies on a per-'virtual'-machine basis, regardless of other factors, and is present to provide some assurance that you are connecting to the server you believe you are connecting to.)
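For comparing fingerprints by hand, here's a small sketch that computes the legacy MD5 fingerprint from a public host key file (the path is the usual default; newer OpenSSH's `ssh-keygen -l -E md5 -f <file>` prints the same value):

```python
# Compute the legacy MD5 fingerprint of an OpenSSH public key, colon-separated,
# for comparison with what appears in the EC2 console log or an old known_hosts.
import base64
import hashlib

def md5_fingerprint(pubkey_path: str = "/etc/ssh/ssh_host_rsa_key.pub") -> str:
    with open(pubkey_path) as f:
        blob = f.read().split()[1]       # second field is the base64-encoded key blob
    digest = hashlib.md5(base64.b64decode(blob)).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

if __name__ == "__main__":
    print(md5_fingerprint())
```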