I have the following problem: I use Ansible to run an installation script on remote servers, and the script reboots the server at the end. As a result, the task ends with a "Failed to connect to the host via ssh: Connection closed" failure. I use ignore_unreachable: true
to prevent the task from failing, but my main problem is that I would like to be able to see the stdout of the task. For some reason, neither stderr nor stdout seems to be present for tasks that ended with the connection being closed by the server.
To make things worse, the install script runs as a non-privileged user, but at some point asks for the sudo password, and for various reasons I can't use the NOPASSWD option in the sudoers config for this user. To work around this, I eventually decided to use the expect module to run the script:
- name: Launch installer script
  expect:
    command: "./install.sh"
    chdir: "{{ installer_directory }}"
    creates: "{{ product_base_directory }}/uninstall.sh"
    responses: "{{ sudo_password_prompt | items2dict }}"
    timeout: 1200 # 20 minutes
  ignore_errors: true
  ignore_unreachable: true
  register: install_script_output

- name: Display installer script output
  debug:
    msg:
      stdout: "{{ install_script_output.stdout_lines | default([]) }}"
      stderr: "{{ install_script_output.stderr_lines | default([]) }}"
  failed_when: not install_script_output.msg.startswith('Failed to connect')
If the install script fails, the next task is perfectly able to access the stdout_lines and stderr_lines fields of the registered variable. However, if the script succeeds and the remote server goes offline, the variable contains nothing but two fields, failed and msg:
{
"failed": true,
"msg": "Failed to connect to the host via ssh: System is going down. Unprivileged users are not permitted to log in anymore. For technical details, see pam_nologin(8).\n\nConnection closed by 10.10.10.10 port 22"
}
Is there a way to view the stdout of such tasks that end with the connection to the remote host being closed?
That's correct and the expected behavior. Ansible isn't able to transfer the information back from the Remote Node to the Control Node, since a reboot was triggered and the remotely executed code was therefore terminated externally.
It depends on how one looks at the problem. Out of the box and without implementing workarounds, no. This is because stdout was held in memory only on the Remote Node and not logged anywhere. Known workarounds are to let the script log on the Remote Node, which is useful anyway when developing and debugging scripts or Custom Modules, or to run the task via async and capture its output at runtime, as sketched below.
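A minimal sketch of that async-based workaround, assuming the script's output is redirected into an illustrative log file /tmp/install.log; note that a plain shell call like this does not answer the interactive sudo prompt, it only illustrates the background-job polling technique:

# Minimal async sketch; log path and retry counts are illustrative
- name: Launch installer script in the background
  ansible.builtin.shell: ./install.sh > /tmp/install.log 2>&1
  args:
    chdir: "{{ installer_directory }}"
  async: 1200   # keep the background job alive for up to 20 minutes
  poll: 0       # do not wait here, return immediately
  register: install_job

- name: Wait for the installer to finish
  ansible.builtin.async_status:
    jid: "{{ install_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 120
  delay: 10
  ignore_unreachable: true   # the last poll may coincide with the reboot

- name: Show what the script has written so far
  ansible.builtin.command: cat /tmp/install.log
  register: install_log
  ignore_unreachable: true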
How to proceed further?

Since it seems that you have access to the installer script, the easiest way to deal with this use case is to change the behavior of the installer script itself. While doing so, it might also be possible to implement logging, if not already available: from the script's perspective, log locally into, for example, /var/log/script.stdout.log and /var/log/script.stderr.log, which could then be fetched or slurped to the Control Node if necessary.

One could also simply move the reboot out of the script and let the script run to the end before rebooting. By doing this, the requested information will be collected. It would only be necessary to implement a single additional Ansible task to reboot the Remote Node afterwards; see the sketch below.
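A minimal sketch of that reworked flow, assuming install.sh has been changed to write its output into /var/log/script.stdout.log and to no longer reboot on its own; the expect task parameters are carried over from the question:

- name: Launch installer script (reboot removed from the script)
  ansible.builtin.expect:
    command: "./install.sh"
    chdir: "{{ installer_directory }}"
    creates: "{{ product_base_directory }}/uninstall.sh"
    responses: "{{ sudo_password_prompt | items2dict }}"
    timeout: 1200
  register: install_script_output

- name: Collect the log the script has written
  ansible.builtin.slurp:
    src: /var/log/script.stdout.log
  register: script_log

- name: Display installer script output
  ansible.builtin.debug:
    msg: "{{ script_log.content | b64decode }}"

- name: Reboot the Remote Node afterwards
  ansible.builtin.reboot:
    reboot_timeout: 600

The fetch module would work similarly if the whole log file should be copied to the Control Node instead of being displayed inline.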