I am currently doing some research for my company concerning Kubernetes. We want to evaluate hosting our own Kubernetes cluster on our bare-metal servers inside our private computing centre. I have only ever worked with managed Kubernetes (Google Cloud etc.) and, of course, minikube for local testing.
Has anyone worked with a self-hosted, self-installed Kubernetes cluster and can give me an estimate of the know-how and time needed to configure and administer such a cluster?
I have run a non-trivial number of clusters from before GKE and EKS were a thing, although I'm thankful I was doing so in an IaaS setup, so I didn't have to rack things, and if something went toes up I could just ask the cloud provider to kill it. There are two separate parts to your question, and they involve distinct amounts of work: configuring and administering.
Configuring a cluster can happen in as little as 30 minutes once you have the machines in a shape where they will boot up and read their user-data (I presume even bare metal has a corresponding cloud-init scheme, even if with less emphasis on the "cloud" part), thanks to the utterly magic kubeadm and its friend etcdadm.
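For a sense of scale, the kubeadm side of "configuring" can be as small as one config file plus `kubeadm init` and `kubeadm join`. A minimal sketch, where the version, endpoint, and subnet values are placeholders and not recommendations:

```yaml
# kubeadm-config.yaml -- fed to `kubeadm init --config kubeadm-config.yaml`
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock  # assumes containerd as the runtime
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0                # placeholder; pin to whatever you actually test against
controlPlaneEndpoint: "k8s-api.example.internal:6443"  # a stable DNS name/VIP helps if you want HA later
networking:
  podSubnet: "10.244.0.0/16"              # must agree with what your CNI plugin expects
  serviceSubnet: "10.96.0.0/12"
etcd:
  local:
    dataDir: /var/lib/etcd                # "stacked" etcd on the control-plane nodes; external etcd is its own adventure
```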
However, after kubernetes is up-and-running, that's when the real work starts -- often characterized as "day two" operations, and it's a thick book of things that need monitoring and things that can go toes up.
For absolute clarity, I don't mean to dissuade you: when the cluster(s) (is|are) in good shape, it's like magic and a startling number of things Just Work™. But, like many things magical, when they get angry, if you aren't already familiar with the warning signs, or don't recognize the sound of their gunfire, it can make a frustrating situation even more frustrating.
It's that last part that will be the killer hurdle to overcome, IMHO, since -- like many pieces of software -- once you understand how things are glued together, troubleshooting them is generally a tedious but tractable problem. However, managing kubernetes itself is only one part of the skillset required to keep a kubernetes cluster alive; at a minimum you will also want a working understanding of:
- `kubelet` and `containerd`
- the core API objects: `Node`, `Pod`, `Deployment`, `Service`, `ConfigMap`, `Secret` (including the 4 major sub-types of `Secret`); `StatefulSet`s are optional, but handy to understand why they exist
- RBAC: `Role`, `RoleBindings`, `ClusterRole`, `ClusterRoleBindings`, and how auth makes it from an HTTPS request down into the apiserver's handler to get translated into a `Subject` that can be evaluated against those policies, which, like all good things related to "security", is its own bottomless well of standards to know and tools to troubleshoot (a minimal sketch follows this list)
- and while it might not affect you with a bare-metal setup, usually kubernetes interacts with the outside world via something like MetalLB and the CNI IPAM solution, and troubleshooting those requires knowing what kubernetes expects them to do, and then reconciling that with what they are actually doing (see the second sketch below)
I personally have not yet taken the CKA but it may behoove you to at least go through the curriculum to get a sense for what kinds of topics the CNCF considers essential knowledge. If you're not spooked, then, hey, maybe you can get your CKA out of this exercise, too :-)
Good hunting!