I have a Kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:
Kubernetes service address range:
--service-cluster-ip-range=172.16.0.1/16
flannel network config:
etcdctl get /test.lan/network/config
{"Network":"172.17.0.0/16"}
Docker subnet setting:
--bip=10.0.0.1/24
Host node IP:
192.168.4.57
I've got the nginx service running and I've tried to expose it like so:
[root@kubemaster ~]# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-px6uy   1/1       Running   0          4m
[root@kubemaster ~]# kubectl get services
NAME         LABELS                                    SELECTOR    IP(S)           PORT(S)    AGE
kubernetes   component=apiserver,provider=kubernetes   <none>      172.16.0.1      443/TCP    31m
nginx        run=nginx                                 run=nginx   172.16.84.166   9000/TCP   3m
and then I exposed the service like this:
kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME    LABELS      SELECTOR    IP(S)   PORT(S)    AGE
nginx   run=nginx   run=nginx           9000/TCP   292y
I'm expecting now to be able to get to the nginx container on the host node's IP (192.168.4.57) - have I misunderstood the networking? If I have, an explanation would be appreciated :(
Note: This is on physical hardware with no cloud provider provided load balancer, so NodePort is the only option I have, I think?
You don't have to use NodePort and you don't have to use an external load balancer. Just dedicate some of your cluster nodes to be loadbalancer nodes. You put them in a different node group and give them a label:
mynodelabel/ingress: nginx
, and then you host an nginx ingress DaemonSet on that node group. The most important options are hostNetwork: true, so nginx binds directly to the node's network interfaces, and a nodeSelector that matches the label above.
Optionally, you can taint your loadbalancer nodes so that regular pods aren't scheduled on them and don't slow nginx down.
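A minimal sketch of what that could look like, assuming apps/v1, a hypothetical node name (kubenode1), a hypothetical taint (dedicated=ingress:NoSchedule), and a placeholder image in place of a real ingress controller:

kubectl label node kubenode1 mynodelabel/ingress=nginx
kubectl taint node kubenode1 dedicated=ingress:NoSchedule
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      nodeSelector:
        mynodelabel/ingress: nginx   # run only on the dedicated loadbalancer nodes
      hostNetwork: true              # bind directly to the node's network interfaces
      tolerations:                   # allow scheduling onto the tainted nodes
      - key: dedicated
        value: ingress
        effect: NoSchedule
      containers:
      - name: nginx-ingress
        image: nginx                 # placeholder; substitute your ingress controller image
        ports:
        - containerPort: 80
        - containerPort: 443
EOF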
Expect to reach the pod on hostIP:NodePort; you can find the node port of a service with:
kubectl get svc echoheaders --template '{{range .spec.ports}}{{.nodePort}}{{end}}'
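For the setup in the question, that would look something like this (a sketch; it assumes the service is named nginx and uses the host node IP 192.168.4.57 from the question - NodePorts are allocated from the 30000-32767 range by default):

NODE_PORT=$(kubectl get svc nginx --template '{{range .spec.ports}}{{.nodePort}}{{end}}')
curl http://192.168.4.57:${NODE_PORT}/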
You can deploy an ingress controller such as: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx or https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
A NodePort service is the most common solution for a small/local bare-metal cluster. The same port is opened on every node running kube-proxy (i.e. probably not your master, but all of your worker nodes). There is also some contrib (and not obvious) code that acts like a LoadBalancer for smaller networks, so if you want to use type: LoadBalancer locally as well as in the cloud, you can get roughly equivalent mechanics if that's important.
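As a sketch, the declarative equivalent of the kubectl expose command from the question would be roughly the following (it assumes the run=nginx selector shown in the question's output; the nodePort value is hypothetical and can be omitted, in which case one is auto-assigned from 30000-32767):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 9000        # cluster-internal service port
    targetPort: 9000  # container port on the pod
    nodePort: 30090   # hypothetical; must fall in the node-port range
EOF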
Ingress controllers become significantly more useful than NodePorts when you want to mix and match services (specifically HTTP services) exposed from your cluster on port 80 or 443: they are built specifically to support more than one service through a single endpoint (and potentially a single port), mapped to separate URI paths or the like. Ingress controllers don't help much when the access you want is not HTTP-based - for example, a socket-based service such as Redis or MongoDB, or something custom you are doing.
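For illustration, here is a sketch of an Ingress that multiplexes two hypothetical HTTP services (web and api) through a single endpoint via separate URI paths (networking.k8s.io/v1 syntax; the host, service names, and paths are made up):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.local          # hypothetical hostname
    http:
      paths:
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web            # hypothetical HTTP service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api            # hypothetical HTTP service
            port:
              number: 80
EOF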
If you're integrating this into an internal IT project, many commercial load balancer vendors recommend fronting the NodePort configuration with their own load balancing technology, referencing the pool of all worker nodes in that setup. F5 has a reasonable example of this in their documentation.