I need to deploy an application that works as a CCM (cloud controller manager), so it needs access to the master servers.
I have a K8S cluster that was set up by Kubespray; all my nodes run kubelet, which takes its configuration from /etc/kubernetes/kubelet.conf. The kubelet.conf is shown below:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ***
    server: https://localhost:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
This configuration file and the certificates are provided to the CCM service; I added the following volumes and mount points to the deployment YAML:
containers:
- name: cloud-controller-manager
  image: swisstxt/cloudstack-cloud-controller-manager:v0.0.1
  # Command line arguments: https://kubernetes.io/docs/reference/command-line-tools-reference/cloud-controller-manager/
  command:
  - /root/cloudstack-ccm
  - --cloud-provider=external-cloudstack
  - --cloud-config=/config/cloud-config
  - --kubeconfig=/var/lib/kubelet/kubelet.conf # Connection Params
  - --v=4
  volumeMounts:
  - name: config-volume
    mountPath: /config
  - name: kubeconfig-config-file
    mountPath: /var/lib/kubelet/kubelet.conf
  - name: kubernetes-pki-volume
    mountPath: /var/lib/kubelet/pki
  - name: kubernetes-config-volume
    mountPath: /var/lib/kubernetes
volumes:
- name: config-volume
  configMap:
    name: cloud-controller-manager-config
- name: kubeconfig-config-file
  hostPath:
    path: /etc/kubernetes/kubelet.conf
- name: kubernetes-pki-volume
  hostPath:
    path: /var/lib/kubelet/pki
- name: kubernetes-config-volume
  hostPath:
    path: /var/lib/kubernetes
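For completeness, the cloud-controller-manager-config ConfigMap referenced by config-volume looks roughly like this (secrets masked; the [Global] key names are what I understand the CloudStack cloud-config format to expect, so treat them as illustrative rather than exact):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-controller-manager-config
data:
  # Mounted at /config/cloud-config (see --cloud-config above)
  cloud-config: |
    [Global]
    api-url    = ***
    api-key    = ***
    secret-key = ***
    zone       = ***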
So far, so good.
My problem is that kubelet.conf contains the following setting: .clusters.cluster.server: https://localhost:6443. In other words, kubelet is configured to talk to the master servers via a proxy that Kubespray sets up on each node to distribute connections between the masters. So when the CCM application reads kubelet.conf, it concludes that it should reach the masters via https://localhost:6443, but inside the application's pod nothing is listening on localhost:6443, because that proxy listens only on the node itself. As a result, the CCM can't use localhost:6443 to communicate with the master servers.
Here's the question: is there a way to make the node's localhost:6443 accessible from the pod? The only idea I have at the moment is to set up an SSH tunnel between the pod and the node it runs on, but I don't like it, because (1) it requires propagating an SSH key to all the nodes and adding it on every new node, and (2) I'm not sure how to find out the node's IP address from inside the container (possibly via the Downward API, as sketched below, but I haven't tried it).
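If the Downward API works the way I think it does (status.hostIP giving the IP of the node the pod is scheduled on), exposing it to the container would look something like this; NODE_IP is just an illustrative variable name:

containers:
- name: cloud-controller-manager
  # ... image, command, volumeMounts as above ...
  env:
  # Downward API: expose the IP of the node this pod runs on.
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP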
Thanks for reading this rant. I'll be very grateful for any ideas and clues.