We are running containers on Kubernetes on AWS. The cluster was created with the kube-up scripts, and everything was provisioned correctly and working fine. We ran into a snag, however: our fairly large servers (c4.xlarge) are only allowed to run 40 pods each. This is a small number for us, since we run many small pods, some rarely used. Is there a way to raise this limit from the Salt master or the launch configuration? What is the best route to go about doing this?
Thanks.
Fixed it! I think I made a pretty good fix too, from what I can tell. I'm new to Salt, but feel I have a pretty decent grasp on it now. It's much simpler and less intimidating than I thought, a really neat tool. Anyway, onto the fix:
Kubernetes provisions the master and the minions with Salt. Salt's config files live under `/srv/salt`. After looking at the `top.sls` file, I found the `kubelet` folder (we need to change the flags passed to the kubelet). Poking through that folder, we find the `init.sls` file, which points to the `kubelet.service` file, which in turn uses `salt://kubelet/default` for its config. Perfect: that's just the `/srv/salt/kubelet/default` file. Yuck. Lots of mega conditionals, but it all boils down to the last line: `DAEMON_ARGS=...`
If we want to do this the right way, we modify that last line to pull in one more variable, `max_pods`.
This config file has access to our pillar files, the "config" files of Salt. These are located right next to the Salt configs, in `/srv/pillar`. After verifying these pillars are applied to all hosts, we can modify one. I thought the setting best fit in `cluster-params.sls`. With that in place, I killed off my old nodes, and now each host allows a max of 80 pods instead of the default 40.
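For reference, the pillar side is a single key. A sketch, assuming the kubelet template reads a pillar value named `max_pods` (the key name must match whatever the template looks up):

```yaml
# /srv/pillar/cluster-params.sls (hypothetical addition)
max_pods: 80
```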