nvm requires that a user log out and back in after installation for the changes to take effect. How can I allow for this in an Ansible task running via Vagrant? Here's what I tried:
```yaml
- name: Install nvm
  shell: curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.30.1/install.sh | bash
  failed_when: false
  register: nvm_installed

- name: Kill open ssh sessions - ansible should log back in on next task
  shell: "ps -ef | grep sshd | grep `whoami` | awk '{print \"kill -9\", $2}' | sh"
  when: nvm_installed | changed
  failed_when: false

- name: Install Node.js v 4.2.x
  command: nvm install v4.2
```
But I get the error:
```
fatal: [default] => SSH Error: ssh_exchange_identification: Connection closed by remote host
    while connecting to 127.0.0.1:2222
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.

TASK: [check if rpmforge installed] *******************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
```
The command `vagrant ssh` also now fails with the error:

```
ssh_exchange_identification: Connection closed by remote host
```
I based this on the answer given here - https://stackoverflow.com/questions/26677064/create-and-use-group-without-restart
I think maybe the kill command is killing the sshd daemon itself?
```
$ ps -ef | grep sshd | grep `whoami`
root      2621  1247  0 11:30 ?        00:00:00 sshd: vagrant [priv]
vagrant   2625  2621  0 11:30 ?        00:00:00 sshd: vagrant@notty
root      3232  1247  4 11:34 ?        00:00:00 sshd: vagrant [priv]
vagrant   3235  3232  0 11:34 ?        00:00:00 sshd: vagrant@pts/0
vagrant   3252  3236  0 11:34 pts/0    00:00:00 grep sshd
```
UPDATE
I also tried the following:
```yaml
- name: Install nvm
  shell: "curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.30.1/install.sh | bash"
  register: nvm_installed
  failed_when: false

- name: source bash profiles
  shell: source /home/vagrant/.bashrc
  when: nvm_installed
  register: sourced

- name: Install Node.js v 4.2.x
  command: nvm install v4.2
  when: sourced
```
but get the following error:
```
TASK: [Install Node.js v 4.2.x] ***********************************************
failed: [default] => {"cmd": "nvm install v4.2", "failed": true, "rc": 2}
msg: [Errno 2] No such file or directory

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/Users/lukemackenzie/playbook.retry

default                    : ok=10   changed=3    unreachable=0    failed=1

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
```
If I run the Install nvm step manually on the managed machine, it says that the following has been appended to `.bashrc`:

```shell
export NVM_DIR="/home/vagrant/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"  # This loads nvm
```
As already pointed out in the comments, sourcing `.profile` should be sufficient for installing nvm. Just replace the sshd restart task with this task:
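A minimal sketch of such a task, sourcing the profile in the same shell invocation so the nvm function is available (the path and the `executable` override are assumptions for a Vagrant box with the default `vagrant` user):

```yaml
# Sketch: source the profile first so the nvm shell function is defined,
# then run the install in that same bash process.
- name: Install Node.js v 4.2.x
  shell: source /home/vagrant/.profile && nvm install v4.2
  args:
    executable: /bin/bash
```

Running both commands in a single `shell` task matters: each task gets a fresh shell, so sourcing the profile in a separate task has no effect on later tasks.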
You might also want to take a look at this vagrantfile.
If you need to restart the ssh server (for whatever other reason) you can try this approach as documented in this Ansible blog post:
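A sketch of that approach, assuming sshd runs as a system service (the task names and delay are illustrative):

```yaml
# Sketch: restart sshd in the background (async, no polling) so the task
# does not block on its own dying connection, then wait from the control
# machine until the SSH port accepts connections again.
- name: Restart sshd
  service:
    name: sshd
    state: restarted
  async: 1
  poll: 0

- name: Wait for sshd to come back up
  local_action: wait_for host={{ inventory_hostname }} port=22 delay=10 state=started
```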
I created a Galaxy role which will work with Ansible 2: ssh-reconnect
Usage:
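A sketch of how such a Galaxy role would typically be pulled into a play (the role name as installed from Galaxy is an assumption):

```yaml
# Sketch: list the role so it runs before tasks that need a fresh SSH session.
- hosts: all
  roles:
    - ssh-reconnect
```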
Here's what eventually worked:
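A sketch of the kind of task list that works here, loading `nvm.sh` directly in the same shell invocation (the paths follow the installer output quoted above; the `creates` guard and `executable` override are assumptions):

```yaml
# Sketch: install nvm once, then source nvm.sh inline before calling nvm,
# since Ansible's non-interactive shell never reads .bashrc.
- name: Install nvm
  shell: curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.30.1/install.sh | bash
  args:
    creates: /home/vagrant/.nvm/nvm.sh

- name: Install Node.js v 4.2.x
  shell: . /home/vagrant/.nvm/nvm.sh && nvm install v4.2
  args:
    executable: /bin/bash
```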
It may not be necessary to include the `source` command, as it seems Ansible logs in with a non-interactive shell, so the contents of `.bashrc` are never picked up. See also https://stackoverflow.com/questions/22256884/not-possible-to-source-bashrc-with-ansible