Say that I have an application running on one PC that is sending commands via SSH to another PC on the network (both machines running Linux).
For example, every time something happens on #1, I want to run a task on #2. In this setup, I have to create an SSH connection for every single command.
Is there any simple way to do this with basic Unix tools, without programming a custom client/server application? Basically, all I want is to establish a connection over SSH and then send one command after another.
Automatic Persistence Using OpenSSH
You can also use the ControlMaster feature of OpenSSH, which opens a unix domain socket for the first connection and reuses that connection in all subsequent calls.
To enable the feature, you can either pass -M as a command line switch or enable the ControlMaster option in your ~/.ssh/ssh_config, e.g.:
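    # "auto" reuses an existing master connection if one is available,
    # and falls back to a normal connection otherwise
    ControlMaster auto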
Additionally, you should set the ControlPath using the following lines in your ~/.ssh/ssh_config:
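    Host *
        # %r, %h and %p expand to the remote user, host and port
        ControlPath ~/.ssh/master-%r@%h:%p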
To maintain a persistent connection to a host, e.g. if you want to run a script which needs to establish many ssh connections to the host, none of which persists over the whole lifetime of the script, you can start a silent connection in advance using:
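    # -M: master mode, -N: no remote command, -f: go to background after authentication
    ssh -MNf <host>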
Cheerio, nesono
Not sure if it can be used in production, but you can do something like this:
Create a file on #1:

    touch /tmp/commands
Then run this command:

    tail -f /tmp/commands | ssh user@x.x.x.x
That will open /tmp/commands and start sending its contents to server x.x.x.x (#2), where they are executed line by line.
Now, every time something happens on #1, do:

    echo "ls -l" >> /tmp/commands
or
    echo "reboot" >> /tmp/commands
Whatever you append to /tmp/commands will be sent to #2 and executed. Just make sure you do not run anything interactive, or deal with it somehow.
In /etc/ssh/ssh_config add:
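    # presumably the same connection-sharing options shown above,
    # applied system-wide, e.g.:
    Host *
        ControlMaster auto
        ControlPath ~/.ssh/master-%r@%h:%p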
If you run into this sort of thing a lot, try GNU Parallel. It is like dsh (distributed shell), but it has some neat features like counting semaphores and it is actively maintained.
From the documentation:
EXAMPLE: GNU Parallel as queue system/batch manager
GNU Parallel can work as a simple job queue system or batch manager. The idea is to put the jobs into a file and have GNU Parallel read from that continuously. As GNU Parallel will stop at the end of the file, we use tail to continue reading:
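    # create/truncate the queue file, then keep feeding new lines to parallel
    true >jobqueue; tail -n+0 -f jobqueue | parallel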
To submit your jobs to the queue:
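    # my_command and my_arg are placeholders for your own job
    echo my_command my_arg >> jobqueue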
You can of course use -S to distribute the jobs to remote computers:
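    # ".." reads the list of remote hosts from ~/.parallel/sshloginfile
    true >jobqueue; tail -n+0 -f jobqueue | parallel -S ..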
There are many great examples in the documentation that just scratch the surface. Here is a cool one:
EXAMPLE: Distributing work to local and remote computers
Convert *.mp3 to *.ogg running one process per CPU core on local computer and server2:
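    parallel --trc {.}.ogg -S server2,: \
        'mpg321 -w - {} | oggenc -q0 - -o {.}.ogg' ::: *.mp3

Here --trc transfers each input file to the remote machine, returns the resulting .ogg, and cleans up the temporary copies, while -S server2,: spreads the jobs across server2 and the local machine (:).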
Yes, it is possible with a pipe:
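    # send the command string to the remote shell via ssh's stdin
    echo 'echo "hi"' | ssh host2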
This will execute the command echo "hi" on host2.
You just have to write a program that prints out the commands (don't forget the ;) and then pipe its output to ssh.
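A minimal sketch, with the generator replaced by two hard-coded echo calls (host2 is a placeholder):

    { echo 'date;'; echo 'uptime;'; } | ssh host2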
You might want to use a program like dsh (Distributed SHell) that is made to do just that :). After configuring it with host names and setting up public key auth, you can use it to run commands on multiple machines either in series ("run on machine a, then run on machine b") or in parallel ("run on all machines at the same time"). Or just write a script; see the sketch below.
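A minimal sketch of such a script, assuming a hard-coded host list (the hostnames are placeholders):

    #!/bin/sh
    # run the given command on each host in turn
    for host in machine-a machine-b; do
        ssh "$host" "$@"
    done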
DISCLAIMER: I can't test this right now, as I'm on a Windows machine with bash but without ssh.
In a bash script, you can do something like this:
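    # open file descriptor 4 as the stdin of a long-running ssh session
    # (host2 is a placeholder; -T disables pseudo-terminal allocation)
    exec 4> >(ssh -T host2)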
And then to send commands, write them to FD 4.
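For example (assuming the FD 4 setup above):

    echo "ls -l" >&4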
If you do it this way, you can't access the results of the commands easily, but based on your question it doesn't seem like you need to.