I have a strange requirement that I thought would be easy, but it turns out to be more complicated than I expected.
I run a heavily firewalled HPC scientific cluster. A user wants a Windows 7 VM to run a specific app. I created the VM on my Mac and then copied it to a CentOS 7 server in the cluster. VirtualBox runs it fine headless and reports that it is listening for RDP on port 3389.
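For reference, the VM is started on the CentOS box roughly like this (the VM name is a placeholder, and the VirtualBox extension pack that provides the VRDE/RDP server is installed):

    VBoxManage modifyvm "Win7" --vrde on --vrdeport 3389   # enable the built-in RDP server on 3389
    VBoxHeadless --startvm "Win7"                          # run the VM with no display attached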
To get to the cluster you first have to SSH to a login node, and from there you can reach the rest of the cluster.
So I have to create an SSH tunnel, right? Except the tunnel isn't to the login node; it's to another server you can only reach after you're on that node. The VM is not running on the login server (which is itself a VM). So do I have to do double forwarding?
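By double forwarding I mean chaining two tunnels, something like this (hostnames invented):

    # On my Mac: forward local 3389 to port 3389 on the login node
    ssh -L 3389:localhost:3389 user@login.node
    # Then, on the login node: forward its 3389 on to the VM host
    ssh -L 3389:localhost:3389 user@vmhost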
I tried using LocalForward in my ~/.ssh/config, but it doesn't quite work.
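What I have is something along these lines (hostnames invented), with the RDP client pointed at localhost:3389:

    Host cluster-vm
        HostName login.node
        # forward local 3389 to localhost:3389 as seen from the far end
        LocalForward 3389 localhost:3389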
Now I'm thinking that I have to configure the login node itself to forward all traffic on port 3389 to the server running the VM.
Am I moving in the right direction or barking up the wrong tree?
Given that you could make the login server forward traffic to the VirtualBox instance, I'm guessing that the VirtualBox host is only one hop away from the login server, and that the login server can reach the VirtualBox instance on port 3389.
In this case, you don't need double forwarding. SSH can forward a local port to any remote computer the SSH server itself can reach, so a single forward from your Mac is enough.
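Something like this, where user@login.node is a placeholder for your account on the login node:

    ssh -L 3389:host.which.runs.virtualbox:3389 user@login.node

Then point your RDP client at localhost:3389 on your Mac; SSH carries the traffic through the login node to the VirtualBox host.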
It may be worth noting that the host.which.runs.virtualbox:3389 part in the example above is resolved relative to the computer you log in to. So if the login node knows the host running VirtualBox as vboxrunner.local, you can use that name in the forwarding, even if your local computer (the one you run ssh on) knows nothing about that name.
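Since you asked about LocalForward: the same forward can live in your ~/.ssh/config. A minimal sketch, assuming login.node is a placeholder for your login host and using the vboxrunner.local name from above:

    Host cluster-vm
        HostName login.node
        # local port 3389 -> vboxrunner.local:3389, resolved by the login node
        LocalForward 3389 vboxrunner.local:3389

After ssh cluster-vm, an RDP client pointed at localhost:3389 reaches the VM.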