I would like to build a Docker image and then run it in GKE after mounting some directories from GCE persistent disks (PDs). For instance, I'd like the application's (read-write) configuration files in /etc/<application>/ to outlive its pods (which may restart at any time).
The regular build puts default configuration files into /etc/<application>/, and these must somehow be copied once from the image's ephemeral filesystem onto the PD, so that the application can start in its expected environment.
Is there a best practice for making this happen? For instance, would I have to also mount the PDs in my Dockerfile, or can I somehow request that PDs be "synced" with files from another directory/volume/disk when they are first mounted by a VM instance during deployment?
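For concreteness, this is roughly the pod shape I have in mind; all names below (image, disk, paths) are placeholders:

```sh
# Hypothetical pod: a pre-created GCE PD is mounted read-write over
# /etc/myapp ("myapp" standing in for <application>), so whatever the PD
# holds shadows the defaults baked into the image. That is exactly the
# bootstrapping problem described above.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: gcr.io/my-project/myapp
    volumeMounts:
    - name: config
      mountPath: /etc/myapp
  volumes:
  - name: config
    gcePersistentDisk:
      pdName: app-config-disk
      fsType: ext4
EOF
```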
The obvious answer is to populate each persistent disk immediately after creating it.
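A rough sketch of that one-time population pass, with placeholder names. It assumes a scratch VM already exists with the default configs sitting on it under ~/default-configs, and the mkfs assumes the disk is brand new (it wipes it):

```sh
# Create the PD, attach it to a helper VM, format it, copy the defaults in,
# then release it so GKE pods can mount it.
gcloud compute disks create app-config-disk --size=10GB --zone=us-central1-a
gcloud compute instances attach-disk scratch-vm --zone=us-central1-a \
    --disk=app-config-disk --device-name=app-config-disk
gcloud compute ssh scratch-vm --zone=us-central1-a -- '
  sudo mkfs.ext4 -F /dev/disk/by-id/google-app-config-disk &&
  sudo mount /dev/disk/by-id/google-app-config-disk /mnt &&
  sudo cp -a ~/default-configs/. /mnt/ &&
  sudo umount /mnt'
gcloud compute instances detach-disk scratch-vm --zone=us-central1-a \
    --disk=app-config-disk
```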
If the application configs change from build to build, and they must match the running build, then there's an unresolved problem about what to do if multiple app versions share the same PD and conflict over what should be stored there.
If you don't need to worry about cross-version PD sharing, then you can initialize the contents of the PD using a job running in the application's pod. Kubernetes has a feature called init containers that is designed to make this easier, but it is still alpha at the time of writing.
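A sketch of that approach, written with the `spec.initContainers` syntax the feature later stabilized on (the alpha releases current at the time of writing expressed init containers through a pod annotation instead); all names are placeholders, and the image is assumed to contain a shell:

```sh
# The init container mounts the PD at a scratch path so the image's own
# /etc/myapp defaults stay visible, seeds the PD once (guarded by a marker
# file so restarts do not clobber edited configs), and only then does the
# main container start with the PD mounted over /etc/myapp.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  initContainers:
  - name: seed-config
    image: gcr.io/my-project/myapp
    command: ["sh", "-c",
              "[ -e /pd/.seeded ] || { cp -a /etc/myapp/. /pd/ && touch /pd/.seeded; }"]
    volumeMounts:
    - name: config
      mountPath: /pd
  containers:
  - name: myapp
    image: gcr.io/my-project/myapp
    volumeMounts:
    - name: config
      mountPath: /etc/myapp
  volumes:
  - name: config
    gcePersistentDisk:
      pdName: app-config-disk
      fsType: ext4
EOF
```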
I have not heard of a best practice, so this is what I have adopted for now (the whole sequence is sketched as a script after this list):

1. `docker build` the image with a `Dockerfile` that also tars e.g. `/etc/<application>/` into `<application>.tar` after it has done its other build steps
2. `docker run` the image and `scp` the tar files off the running image
3. `scp` the tar files to the VM instance
4. `gcloud compute ssh` into it, mount the PD, and untar the needed files below the mount point
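Spelled out as a sketch: "myapp" stands in for <application>, all image/VM/disk names are placeholders, I use `docker cp` where step 2 says `scp`, and the PD is assumed to already be attached to the VM under the device name app-config-disk:

```sh
# The Dockerfile is assumed to leave /myapp.tar at the image root.
docker build -t myapp-image .            # step 1: build also creates /myapp.tar

cid=$(docker run -d myapp-image)         # step 2: copy the tarball out of the
docker cp "$cid:/myapp.tar" .            # container (docker cp in place of scp)
docker rm -f "$cid"

scp myapp.tar my-vm:~/                   # step 3: push it to the VM

# step 4: mount the PD on the VM and untar below the mount point
gcloud compute ssh my-vm --zone=us-central1-a -- '
  sudo mount /dev/disk/by-id/google-app-config-disk /mnt &&
  sudo tar -C /mnt -xf ~/myapp.tar'
```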