I create a virtual machine and copy my Dockerfile to it. The last line of the Dockerfile calls a shell script that runs TensorFlow and TensorBoard, which visualises the TensorFlow results on port 6006:
tensorboard --logdir=/tmp/vae &
I SSH into the machine, build a Docker image, and run the container, mapping the container port to the virtual machine port:
docker run -it -p 6006:6006 imageID
And I see that TensorBoard is running:
TensorBoard 1.12.0 at http://2e4a59c22f1d:6006 (Press CTRL+C to quit)
I add a new inbound security rule for port 6006, so that my local computer's IP can reach port 6006 of the virtual machine. With the Python package portping I confirm that port 6006 of the virtual machine is open to my local machine, and that port 6006 of the Docker container is open to the virtual machine.
Yet, when I point a browser to the IP of the Azure virtual machine with the suffix :6006, I see nothing!
How can I view the TensorBoard running on the virtual machine?
The problem is that you are asking a virtual machine to do the job of a web app, which Azure does not allow.
One solution is to create a web app instead of creating a virtual machine. Another is to separate TensorFlow from TensorBoard, running the first on the virtual machine and the second on your local machine, periodically copying back the files that TensorBoard uses.
Web app solution
Build an image and push it to Docker Hub or a similar registry. These commands require a Docker Hub account and that you are authenticated:
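A sketch of the tag-and-push step, where `imageID` is your locally built image and `docker_username/image_name:image_tag` is a placeholder repository name (substitute your own):

```shell
# Tag the locally built image with your Docker Hub repository name
# (docker_username, image_name and image_tag are placeholders).
docker tag imageID docker_username/image_name:image_tag

# Authenticate against Docker Hub, then push the tagged image.
docker login
docker push docker_username/image_name:image_tag
```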
Create a web app on the Azure portal by clicking on "Create a resource", searching for "web app", and choosing a provider, such as "Web App" by Microsoft. Click "Create"; then, in the new blade that appears at the left, under "Publish" switch the toggle to "Docker Image", and under "Single container", in the field "Image and optional tag", write `docker_username/image_name:image_tag`. In this solution, take note that Azure web sites do not have port 6006 open, which TensorBoard uses, so see this thread for how to redirect port 6006 to port 80.
Virtual machine + local machine solution
The advantage of this solution is that you have more control over the virtual machine; for example, you can specify a GPU-enabled virtual machine to speed up TensorFlow, which you cannot yet do with an App Service plan.
Create the virtual machine as you did, and run TensorFlow on it. Then periodically copy the result files from the container to the virtual machine, and from the virtual machine to your local machine.
Assuming that the Azure IP is 0.0.0.0, that you have only one container running, and that your results are in `/tmp/vae` (for variational auto-encoder), copy the files back in two steps: first from the container to the virtual machine, then from the virtual machine to your local machine. You can also chain the two into a single command line for an easy "arrow-up" repetition.
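A sketch of those copy commands; `$(docker ps -q)` resolves to the ID of the single running container, and `azureuser` is a hypothetical VM login name (use your own):

```shell
# On the virtual machine: copy the results out of the running container.
docker cp "$(docker ps -q)":/tmp/vae /tmp/vae

# On the local machine: copy the results down from the virtual machine.
scp -r azureuser@0.0.0.0:/tmp/vae /tmp/

# Or chain the two from the local machine in one line, for easy
# arrow-up repetition:
ssh azureuser@0.0.0.0 'docker cp "$(docker ps -q)":/tmp/vae /tmp/vae' \
    && scp -r azureuser@0.0.0.0:/tmp/vae /tmp/
```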
Then launch TensorBoard from your local machine in another shell:
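Assuming the files were copied to `/tmp/vae` locally, the launch command mirrors the one in the Dockerfile:

```shell
# TensorBoard reads the copied logs and serves them at
# http://localhost:6006; re-run the copy commands to refresh the data.
tensorboard --logdir=/tmp/vae
```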