I'll add another helpful suggestion to the thread, something I just learned a few minutes ago:
If you have already started a long-running process (a tar restore in my case) and forgot to put nohup in front of it, you can still prevent it from terminating on logoff.
Here are the steps:
Press Ctrl + Z - this will suspend the job
Enter bg %x, where x is the job number shown when you suspended it - this resumes the job in the background
Enter disown -h %x - this tells the shell not to pass SIGHUP on to that job when you log out
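In bash, the whole sequence looks roughly like this (backup.tar and job number 1 are just placeholders; use whatever number jobs reports):

    $ tar xf backup.tar            # long-running job started without nohup
    ^Z                             # Ctrl+Z suspends it
    [1]+  Stopped                  tar xf backup.tar
    $ bg %1                        # resume the job in the background
    $ disown -h %1                 # tell the shell not to send it SIGHUP
    $ exit                         # tar keeps running after you disconnect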
As Warner says above, the child of the ssh daemon (that is, the login shell and its children) will get SIGHUP, but it won't get it instantly. There will be a delay, sometimes of several minutes, before the ssh daemon on the server side gives up on your connection. During that time the process will continue to run.
As Warner also said, the process can choose to ignore SIGHUP, in which case it will continue to run until it tries to read input and finds that STDIN has been closed.
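As a small illustration of that point, a script can explicitly ignore SIGHUP with trap (a sketch, not from the thread; the log path is arbitrary):

    #!/bin/bash
    # Ignore SIGHUP: the hangup sent when the login shell dies is discarded,
    # so this loop keeps running after the SSH session is gone.
    trap '' HUP
    while true; do
        date >> /tmp/still-running.log   # placeholder work
        sleep 60
    done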
Not generally, although without knowing what shell or even what OS you're running, it's tough to say. If you were running it under screen, for example, I think the behavior is to detach and continue running. Maybe some other shells have that as a configurable option.
Edit for 2016:
This Q&A predates the systemd v230 debacle. As of systemd v230, the new default is to kill all children of a terminating login session, regardless of what historically valid precautions were taken to prevent this. The behavior can be changed by setting KillUserProcesses=no in /etc/systemd/logind.conf, or circumvented using the systemd-specific mechanisms for starting a daemon in userspace. Those mechanisms are outside the scope of this question.
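For reference, that setting lives in logind.conf and looks like this (a sketch; relevant only to systemd v230 and later):

    # /etc/systemd/logind.conf
    [Login]
    KillUserProcesses=no
    # restart systemd-logind (or reboot) for the change to take effect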
The text below describes how things have traditionally worked in UNIX design space for longer than Linux has existed.
They will get killed, but not necessarily immediately. It depends on how long it takes for the SSH daemon to decide that your connection is dead. What follows is a longer explanation that will help you understand how it actually works.
When you logged in, the SSH daemon allocated a pseudo-terminal for you and attached it to your user's configured login shell. This is called the controlling terminal. Every program you start normally at that point, no matter how many layers of shells deep, will ultimately "trace its ancestry" back to that shell. You can observe this with the pstree command.
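For instance, from inside an SSH session (output abridged and purely illustrative; the PIDs and shell will differ on your system):

    $ pstree -p -s $$
    systemd(1)───sshd(912)───sshd(23817)───bash(23820)───pstree(23919)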
When the SSH daemon process associated with your connection decides that your connection is dead, it sends a hangup signal (SIGHUP) to the login shell. This notifies the shell that you've vanished and that it should begin cleaning up after itself; what happens at this point is shell specific (search its documentation page for "HUP"), but for the most part it will start sending SIGHUP to running jobs associated with it before terminating. Each of those processes, in turn, will do whatever they're configured to do on receipt of that signal. Usually that means terminating. If those jobs have jobs of their own, the signal will often get passed along as well.
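If you want to watch this chain of events without waiting for the daemon, you can deliver the hangup by hand (note: this ends your session, just as a dropped connection would):

    # Run from the login shell itself. The shell forwards SIGHUP to its jobs
    # (stopped jobs also get SIGCONT) and then exits, logging you out.
    kill -HUP $$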
The processes that survive a hangup of the controlling terminal are ones that either disassociated themselves from having a terminal (daemon processes that you started inside of it), or ones that were invoked with a prefixed nohup command. (i.e. "don't hang up on this")
Terminal multiplexers are a common way of keeping your shell environment intact between disconnections. They allow you to detach from your shell processes in a way that you can reattach to them later, regardless of whether that disconnection was accidental or deliberate. tmux and screen are the more popular ones; syntax for using them is beyond the scope of your question, but they're worth looking into.
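A minimal tmux round trip, as an illustration (the session name "work" is arbitrary):

    $ tmux new -s work         # start a named session and run your job inside it
    #   detach with Ctrl+B then D, or just lose the connection
    $ tmux attach -t work      # reattach later from a fresh SSH login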
It was requested that I elaborate on "how long it takes for the SSH daemon to decide that your connection is dead". This is behavior specific to every implementation of an SSH daemon, but you can count on all of them to terminate when either side resets the TCP connection. This will happen quickly if either side attempts to write to the socket and the TCP packets are not acknowledged, or slowly if neither side is attempting to write to the PTY.
In this particular context, the factors most likely to trigger a write are:
A process (typically the one in the foreground) attempting to write to the PTY on the server side (server->client).
The user attempting to write to the PTY on the client side (client->server).
Keepalives of any sort. These are usually not enabled by default, either by the client or the server, and there are typically two flavors: application level and TCP based (i.e. SO_KEEPALIVE). Keepalives amount to either the server or the client infrequently sending packets to the other side, even when nothing would otherwise have a reason to write to the socket. While this is typically intended to skirt firewalls that time out connections too quickly, it has the added side effect of causing the sender to notice much more quickly when the other side stops responding.
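For concreteness, OpenSSH's application-level keepalives can be switched on like this (the intervals are arbitrary example values):

    # Client side, e.g. in ~/.ssh/config:
    Host *
        ServerAliveInterval 30    # send a probe after 30 seconds of silence
        ServerAliveCountMax 3     # give up after 3 unanswered probes

    # Server side, in /etc/ssh/sshd_config:
    ClientAliveInterval 30
    ClientAliveCountMax 3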
The usual rules for TCP sessions apply here: if there is an interruption in connectivity between the client and server, but neither side attempts to send a packet during the problem, the connection will survive provided that both sides are responsive afterwards and receiving the expected TCP sequence numbers.
If one side has decided that the socket is dead, the effects are typically immediate: the sshd process will send HUP and self-terminate (as described earlier), or the client will notify the user of the detected problem. It's worth noting that just because one side thinks the other is dead does not mean the other side has been notified of this; its end of the connection will typically remain open until it either attempts to write to the connection or receives the TCP reset from the other side (if connectivity was available at the time).
Warner's answer is spot on. Alternatively, you could also execute the command in the background by appending an ampersand (&) to the end of the command.
In most cases, no. Processes will be sent a SIGHUP on loss of terminal. You can prefix a command with 'nohup' to ignore the signal. See:
http://en.wikipedia.org/wiki/Nohup
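For example (the command and log file name are just placeholders):

    # nohup makes the process ignore SIGHUP; & runs it in the background;
    # the redirect keeps output in output.log instead of the default nohup.out
    $ nohup some_long_command > output.log 2>&1 &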
I found the best answer at the link below:
Does getting disconnected from an SSH session kill your programs?