For testing purposes, I installed Ubuntu 21 on a VMware ESXi server. On that machine, I spun up a Kubernetes cluster using LXC containers, following this repository. The LXC containers are up and running:
adminuser@testing:~/Desktop$ lxc list
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| kmaster | RUNNING | 10.8.0.217 (eth0) | fd42:666f:471d:3d53:216:3eff:fe54:dce6 (eth0) | CONTAINER | 0 |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| kworker1 | RUNNING | 10.8.0.91 (eth0) | fd42:666f:471d:3d53:216:3eff:fee4:480e (eth0) | CONTAINER | 0 |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
| kworker2 | RUNNING | 10.8.0.124 (eth0) | fd42:666f:471d:3d53:216:3eff:fede:3c9d (eth0) | CONTAINER | 0 |
+----------+---------+-------------------+-----------------------------------------------+-----------+-----------+
Then I started deploying MetalLB on this cluster using the steps mentioned in this link, and applied this ConfigMap for routing (k8s-metallb-configmap.yaml):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.8.0.240-10.8.0.250
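For reference, this is roughly how the ConfigMap would be applied and verified (MetalLB v0.10.x still reads its configuration from a ConfigMap named `config` in `metallb-system`; later releases switched to CRDs):

```shell
# Apply the address-pool configuration
kubectl apply -f k8s-metallb-configmap.yaml

# Verify it landed under the expected name and namespace
kubectl get configmap config -n metallb-system -o yaml
```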
But the metallb pods are not running.
kubectl get pods -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-6b78bff7d9-cxf2z 0/1 ContainerCreating 0 38m
speaker-fpvjt 0/1 CreateContainerConfigError 0 38m
speaker-mbz7b 0/1 CreateContainerConfigError 0 38m
speaker-zgz4d 0/1 CreateContainerConfigError 0 38m
I checked the controller pod's events:
kubectl describe pod controller-6b78bff7d9-cxf2z -n metallb-system
Name: controller-6b78bff7d9-cxf2z
Namespace: metallb-system
Priority: 0
Node: kworker1/10.8.0.91
Start Time: Wed, 14 Jul 2021 20:52:10 +0530
Labels: app=metallb
component=controller
pod-template-hash=6b78bff7d9
Annotations: prometheus.io/port: 7472
prometheus.io/scrape: true
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/controller-6b78bff7d9
Containers:
controller:
Container ID:
Image: quay.io/metallb/controller:v0.10.2
Image ID:
Port: 7472/TCP
Host Port: 0/TCP
Args:
--port=7472
--config=config
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
METALLB_ML_SECRET_NAME: memberlist
METALLB_DEPLOYMENT: controller
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j76kg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-j76kg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 32m default-scheduler Successfully assigned metallb-system/controller-6b78bff7d9-cxf2z to kworker1
Warning FailedCreatePodSandBox 32m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a8a6fa54086b9e65c42c8a0478dcac0769e8b278eeafe11eafb9ad5be40d48eb": open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 31m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "264ee423734139b712395c0570c888cff0b7b526e5154da0b7ccbdafe5bd9ba3": open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 31m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1a3cb9e20a2a015adc7b4924ed21e0b50604ee9f9fae52170c03298dff0d6a78": open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 31m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "56dd906cdadc8ef50db3cc725d988090539a0818c2579738d575140cebbec71a": open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 31m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8ddcfa704da9867c3a68030f0dc59f7c0d04bdc3a0b598c98a71aa8787585ca6": open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 30m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "50431bbc89188799562c48847be90e243bbf49a2c5401eb2219a0c4745cfcfb6": open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 30m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "da9ad1d418d3aded668c53f5e3f98ddfac14af638ed7e8142b904e12a99bfd77": open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 30m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4dc6109c696ee410c58a0894ac70e5165a56bab99468ee42ffe88b2f5e33ef2f": open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 30m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a8f1cad2ce9f8c278c07c924106a1b6b321a80124504737a574bceea983a0026": open /run/flannel/subnet.env: no such file or directory
Warning FailedCreatePodSandBox 2m (x131 over 29m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f5e93b893275afe5309eddd9686c0ecfeb01e91141259164082cb99c1e2c1902": open /run/flannel/subnet.env: no such file or directory
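The repeated sandbox failures point at flannel rather than MetalLB itself: `/run/flannel/subnet.env` is written by the flannel daemon on each node once it starts successfully, so a missing file usually means the flannel pod on that node is unhealthy. A diagnostic sketch (the `app=flannel` label selector is an assumption based on the upstream kube-flannel manifest):

```shell
# Are the flannel pods themselves running on every node?
kubectl get pods -n kube-system -l app=flannel -o wide

# Does the file exist on the node the failing pod was scheduled to?
# (run inside the kworker1 container)
ls -l /run/flannel/subnet.env
```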
And the speaker pod:
kubectl describe pod speaker-zgz4d -n metallb-system
Name: speaker-zgz4d
Namespace: metallb-system
Priority: 0
Node: kmaster/10.8.0.217
Start Time: Wed, 14 Jul 2021 20:52:10 +0530
Labels: app=metallb
component=speaker
controller-revision-hash=7668c5cdf6
pod-template-generation=1
Annotations: prometheus.io/port: 7472
prometheus.io/scrape: true
Status: Pending
IP: 10.8.0.217
IPs:
IP: 10.8.0.217
Controlled By: DaemonSet/speaker
Containers:
speaker:
Container ID:
Image: quay.io/metallb/speaker:v0.10.2
Image ID:
Ports: 7472/TCP, 7946/TCP, 7946/UDP
Host Ports: 7472/TCP, 7946/TCP, 7946/UDP
Args:
--port=7472
--config=config
State: Waiting
Reason: CreateContainerConfigError
Ready: False
Restart Count: 0
Environment:
METALLB_NODE_NAME: (v1:spec.nodeName)
METALLB_HOST: (v1:status.hostIP)
METALLB_ML_BIND_ADDR: (v1:status.podIP)
METALLB_ML_LABELS: app=metallb,component=speaker
METALLB_ML_SECRET_KEY: <set to the key 'secretkey' in secret 'memberlist'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l2gzm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-l2gzm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node-role.kubernetes.io/master:NoSchedule op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 41m default-scheduler Successfully assigned metallb-system/speaker-zgz4d to kmaster
Warning FailedMount 41m kubelet MountVolume.SetUp failed for volume "kube-api-access-l2gzm" : failed to sync configmap cache: timed out waiting for the condition
Warning Failed 39m (x12 over 41m) kubelet Error: secret "memberlist" not found
Normal Pulled 78s (x185 over 41m) kubelet Container image "quay.io/metallb/speaker:v0.10.2" already present on machine
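The speaker pods are stuck in `CreateContainerConfigError` for a different reason than the controller: their environment references a key in the `memberlist` secret, which does not exist yet. A quick check (hypothetical invocation, assuming the default `metallb-system` namespace):

```shell
# If this returns NotFound, the speakers cannot start
kubectl get secret memberlist -n metallb-system
```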
Pod status after setting the value from null to 0:
kube-apiserver-kmaster 1/1 Running 0 27m
kube-controller-manager-kmaster 1/1 Running 0 27m
kube-flannel-ds-7f5b7 0/1 CrashLoopBackOff 1 76s
kube-flannel-ds-bs9h5 0/1 Error 1 72s
kube-flannel-ds-t9rpf 0/1 Error 1 71s
kube-proxy-ht5fk 0/1 CrashLoopBackOff 3 76s
kube-proxy-ldhhc 0/1 CrashLoopBackOff 3 75s
kube-proxy-mwrkc 0/1 CrashLoopBackOff 3 76s
kube-scheduler-kmaster 1/1 Running 0 2
I don't have access to the VMware toolset, but I tried to replicate your setup as closely as possible.
In my case the kube-proxy-* and kube-flannel-ds-* pods were in CrashLoopBackOff status, which prevented the metallb pods from starting.
To make it work I edited the kube-proxy ConfigMap and changed the value from null to 0.
Then I deleted all kube-proxy and kube-flannel-ds pods, which were immediately recreated by their DaemonSets. Then I deleted all metallb pods, which were also recreated.
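The answer's before/after snippets were lost in extraction; based on the "null to 0" change mentioned above, the field involved is most likely `conntrack.maxPerCore` in the kube-proxy configuration (an assumption here, this field name is not in the original answer). Setting it to 0 makes kube-proxy skip the nf_conntrack sysctls, which cannot be set inside unprivileged LXC containers:

```yaml
# kubectl edit configmap kube-proxy -n kube-system
# inside the config.conf key:
conntrack:
  maxPerCore: 0   # was: null; 0 skips setting the conntrack sysctls
```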
Now everything seems to work.
I also created the /run/flannel/subnet.env file manually, but it may not be necessary.
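The file's contents were lost in extraction; typical contents written by flannel look like the fragment below. The CIDRs are the kubeadm/flannel defaults and are assumptions here; they must match your cluster's actual pod network:

```shell
# /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```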
I solved it by manually creating the secret with the correct name, memberlist, instead of metallb-memberlist.
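The answer's command was lost in extraction; the MetalLB v0.10 installation docs create this secret roughly as follows:

```shell
# Create the memberlist secret that the speaker pods expect
kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"
```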
I installed metallb before installing ingress-nginx.
I just ignored that error. After I installed ingress-nginx, the error disappeared.
-bino-