As the title says, I have an SELinux instance on EC2 that I haven't used for a while. I have been unable to access it via ssh since firing it back up. I have accessed it from this machine in the past, and the security group on AWS is set up to allow ssh access.
I can't log in to the machine at all, so all my fixes have been limited to what I can do by mounting the root volume on a new EC2 instance and working on it from there.
Here's what I've tried so far (a rough sketch of the corresponding offline commands follows the list):
1) Tried copying my ssh public key into the user's authorized_keys file. A password prompt is still presented when I try to SSH.
2) Verified that sshd_config has PasswordAuthentication no
3) Used chroot to set the mounted volume as root and then reset the password of the user I'm trying to log in as. There's no error from this, but the new password doesn't work at the prompt over ssh.
4) There's no passphrase on the ssh key I'm using, so that's not what I'm being prompted for.
5) Again using chroot, I ran the restorecon command on the .ssh folder for the user in question.
6) Ensured the user's .ssh directory has permissions set to 700 and the .ssh/authorized_keys file has them set to 600.
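For reference, the offline steps above were done roughly like this from the rescue instance (paths and names simplified: /dev/xvdf1, /mnt/rescue, and ec2-user stand in for the actual device, mount point, and user):

    # Attach the old root volume to the rescue instance, then mount it
    sudo mount /dev/xvdf1 /mnt/rescue

    # Copy the ssh public key into the user's authorized_keys
    sudo mkdir -p /mnt/rescue/home/ec2-user/.ssh
    sudo cp my-key.pub /mnt/rescue/home/ec2-user/.ssh/authorized_keys

    # Fix ownership and permissions from inside a chroot
    sudo chroot /mnt/rescue chown -R ec2-user:ec2-user /home/ec2-user/.ssh
    sudo chroot /mnt/rescue chmod 700 /home/ec2-user/.ssh
    sudo chroot /mnt/rescue chmod 600 /home/ec2-user/.ssh/authorized_keys

    # Reset the password and restore SELinux contexts, also inside the chroot
    sudo chroot /mnt/rescue passwd ec2-user
    sudo chroot /mnt/rescue restorecon -R /home/ec2-user/.ssh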
So far, still no joy. What else can I try?
One gotcha: sshd will not let someone in via authorized_keys unless both authorized_keys and the .ssh directory have the correct permissions:
(Replace username with the user name of the SSH user in the commands below.)
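Something like the following should set them (a sketch, assuming the user's home directory is /home/username):

    chown -R username:username /home/username/.ssh
    chmod 700 /home/username/.ssh
    chmod 600 /home/username/.ssh/authorized_keys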
Also, ssh login errors are logged to /var/log/secure (possibly a different file if you're using something besides a RedHat clone), so checking that file for the actual failure reason will help you troubleshoot and resolve this.
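For example, something like this on the affected system (the path assumes a RedHat-family layout):

    sudo tail -n 50 /var/log/secure | grep sshd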
Turns out the SELinux attributes were bad. I had to mount the drive on a different instance, switch SELinux out of enforcing mode in /etc/selinux/config, and then remount and boot without SELinux enforcing. After relabeling a bunch of files manually and still finding more, I decided to relabel the whole file system by doing

    touch /.autorelabel

and then another reboot.
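For anyone hitting the same thing, the offline part of that fix looks roughly like this (a sketch: /mnt/rescue is an assumed mount point on the rescue instance, and SELINUX=permissive is the usual way to stop enforcement via that file):

    # With the broken root volume mounted at /mnt/rescue on the rescue instance:
    sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /mnt/rescue/etc/selinux/config

    # Request a full filesystem relabel on the next boot of the original instance
    sudo touch /mnt/rescue/.autorelabel

After the relabel completes and ssh works again, SELINUX=enforcing can be turned back on in the same file.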