I'd like to use CoreOS as my base OS going forward.
I run a lot of apps in multiple data centers, and I fully believe that I should containerize the lot. This has raised a lot of questions, notably around controlling and accessing resources.
My dream is to have a cluster that runs my apps across multiple hosts and scales as required. When I (or the team) want to make a change to the cluster, we set a flag or variable in etcd, which triggers a script that updates the cluster. I feel this should be possible.
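To make that concrete, here's roughly what I have in mind, as a minimal sketch using the etcd v3 Go client (the key name and the update-script path are just placeholders for whatever we'd actually use). Each host would run a small watcher like this, perhaps as a systemd unit, and kick off the update script whenever the flag changes:

```go
package main

import (
	"context"
	"log"
	"os/exec"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to the local etcd member on this host.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Watch the deploy flag; every change triggers the local update script.
	// Key and script path are placeholders, not anything CoreOS-specific.
	for resp := range cli.Watch(context.Background(), "/cluster/deploy-flag") {
		for _, ev := range resp.Events {
			log.Printf("%s changed to %s, running update", ev.Kv.Key, ev.Kv.Value)
			out, err := exec.Command("/opt/bin/update-cluster.sh", string(ev.Kv.Value)).CombinedOutput()
			if err != nil {
				log.Printf("update failed: %v: %s", err, out)
				continue
			}
			log.Printf("update finished: %s", out)
		}
	}
}
```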
My fear comes when I realise that we have a lot of non-routable networks where data is stored, and multiple locations across the world. Am I going to have to make all of these networks routable to each other if I want to use the distributed key-value store (etcd)?
If so, that kills my dream of running a local CoreOS instance, connecting it to the cluster, and having access to all the information without having to actually log into a production cluster member.
I hope this makes sense -- essentially, I'd like to control my cluster by sending requests to an endpoint, rather than having to be locally present on the cluster to make changes. That gives an easy integration path for our existing control scripts and automated systems, which I really don't want to have to rebuild!
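On the sending side, this is about all I'd want a control script to have to do (again just a sketch with the etcd v3 Go client; the endpoint address and flag value are placeholders):

```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect from a local machine to whatever etcd endpoint is reachable
	// (over a VPN, a TLS-terminated address, etc. -- placeholder below).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd.example.internal:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Setting the flag is the whole "change request"; the watchers on the
	// cluster members do the actual work.
	if _, err := cli.Put(ctx, "/cluster/deploy-flag", "v2.1.0"); err != nil {
		log.Fatal(err)
	}
	log.Println("deploy flag set")
}
```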
You could set up a VPN across the cluster to use as your private network for etcd and the like.
Another option is to build a small app that acts as the endpoint and talks to the various data centers for you (rough sketch below). This might scale better, but it depends on your needs.
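A minimal sketch of that second option, assuming the etcd v3 Go client and made-up data center names and addresses: a tiny HTTP service that takes one request and writes the flag into each data center's etcd for you, so your existing scripts only ever talk to this one endpoint.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// One set of etcd endpoints per data center; names and addresses are placeholders.
var dataCenters = map[string][]string{
	"us-east": {"https://etcd.us-east.internal:2379"},
	"eu-west": {"https://etcd.eu-west.internal:2379"},
}

// setFlag writes the deploy flag into one data center's etcd cluster.
func setFlag(endpoints []string, key, value string) error {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		return err
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	_, err = cli.Put(ctx, key, value)
	return err
}

func main() {
	// POST /deploy?version=v2.1.0 pushes the flag into every data center.
	http.HandleFunc("/deploy", func(w http.ResponseWriter, r *http.Request) {
		version := r.URL.Query().Get("version")
		if version == "" {
			http.Error(w, "missing version", http.StatusBadRequest)
			return
		}
		for dc, endpoints := range dataCenters {
			if err := setFlag(endpoints, "/cluster/deploy-flag", version); err != nil {
				http.Error(w, fmt.Sprintf("%s: %v", dc, err), http.StatusBadGateway)
				return
			}
		}
		fmt.Fprintln(w, "deploy flag set in all data centers")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Run it somewhere your control scripts can reach, and they only need to hit one URL instead of knowing about every data center.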