We have a rather extensive set of yaml configuration files that we use to define the deployments, stateful sets, namespaces, services, etc. resources that should be created in the kubernetes API on a cluster.
We have experimented with a few tools like terraform and ansible for deploying the underlying compute and k8s cluster, and they work fine for applying configuration management at that level.
What I haven't found is a good way to intelligently automate the deployment and updating of these resources. We use source control to manage changes to these resource definitions and feed those changes into the test and production clusters with kubectl apply -f.
Often the change is something simple, like updating the image tag for a pod in a deployment. In that case, a simple patch of the image property on the deployment is all that is needed.
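As a point of comparison, that kind of update can be done as a one-off with kubectl itself (the deployment name my-app and container name app below are placeholders for this sketch, not names from the question):

```shell
# Bump only the image tag of one container in a deployment
kubectl set image deployment/my-app app=registry.example.com/my-app:v1.2.3

# Equivalent strategic-merge patch, if you prefer to spell it out
kubectl patch deployment my-app --type strategic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"registry.example.com/my-app:v1.2.3"}]}}}}'
```

Both forms only touch the image field and trigger a normal rolling update, which is exactly the "simple patch" behavior described above.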
For some of the canonical resources I've played a bit with the Kubernetes Terraform provider. It's rather cool: it is property-aware and can do things like decide between a full teardown/rebuild of a resource and a simple in-place patch.
Where it falls down, however, is its speed of development. It is hard to do anything beyond vanilla-release Kubernetes, which makes the tool useless for custom resources like those used with operators. There are similar providers that will apply your k8s YAML definitions by shelling out to kubectl, but they are not property-aware.
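For anyone unfamiliar with what "property-aware" means here, a rough sketch of a Terraform resource for the same deployment (names are illustrative): changing the image string shows up in the plan as an in-place update, while changing an immutable field like metadata.name forces a destroy-and-recreate.

```hcl
resource "kubernetes_deployment" "app" {
  metadata {
    name      = "my-app"     # immutable: changing this forces a replace
    namespace = "default"
  }

  spec {
    replicas = 2

    selector {
      match_labels = { app = "my-app" }
    }

    template {
      metadata {
        labels = { app = "my-app" }
      }
      spec {
        container {
          name  = "app"
          # mutable: a tag bump here plans as a simple in-place patch
          image = "registry.example.com/my-app:v1.2.3"
        }
      }
    }
  }
}
```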
Any pointers on solutions would be appreciated before I start applying some bash-fu.
This sounds like a pretty standard use case for Helm. It's best known as a package manager for Kubernetes, but my team has had a ton of success using it to manage the Kubernetes API objects in our cluster. You would put your manifests into one or more "Charts", and Helm will use the Chart to install and/or update all of your Deployments, ConfigMaps, etc. for you. You can even specify a different values file per environment. We have a values.prod.yaml, values.staging.yaml, and so on that contain all of our environment-specific configuration, so we can reuse the same manifest files across all environments.
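To make that concrete, here is a minimal sketch (the chart layout, value names, and image repository are made up for illustration). The manifest becomes a template that pulls the environment-specific bits from values:

```yaml
# templates/deployment.yaml (excerpt) -- the image is templated from values
spec:
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.prod.yaml -- everything environment-specific lives here
image:
  repository: registry.example.com/my-app
  tag: v1.2.3
```

Rolling out to an environment is then a single command, e.g. helm upgrade --install my-app ./my-chart -f values.prod.yaml, and the image-tag bump from the question becomes a one-line change to the values file rather than a kubectl patch.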