I'm running a bare-metal Kubernetes install and I'm trying to make my test nginx application (created simply with kubectl create deployment nginx --image=nginx) visible remotely from all nodes. The idea is that I can then use a bare-metal HAProxy installation to route the traffic appropriately.
From everything I've read, this configuration should work and allow access via the port across nodes. Additionally, running netstat does seem to show that the NodePort is listening on all nodes -
user@kube2:~$ netstat -an | grep :30196
tcp6 0 0 :::30196 :::* LISTEN
My service.yaml file -
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    app: nginx
My node networking configuration -
kube1 - 192.168.1.130 (master)
kube2 - 192.168.1.131
kube3 - 192.168.1.132
My service running -
user@kube1:~$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18m <none>
test-svc NodePort 10.103.126.143 <none> 80:30196/TCP,443:32580/TCP 14m app=nginx
However, despite all the above, my service is only accessible on the node it is running on (kube3/192.168.1.132). Any ideas why this would be, or am I just misunderstanding Kubernetes?
I've had a look at load balancers and Ingress, but what doesn't make sense is this: if I routed all traffic to my master (kube1) to distribute, what happens if kube1 goes down? Surely I'd need a load balancer to target my load balancer?!
Hope someone can help!
Thanks, Chris.
If you want to expose a service outside the cluster, use service type LoadBalancer or an Ingress. However, the LoadBalancer approach has its own limitations: you cannot configure a LoadBalancer to terminate HTTPS traffic, or to do virtual hosts or path-based routing. In Kubernetes 1.2 a separate resource called Ingress was introduced for this purpose. Here is an example of a LoadBalancer.
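A minimal sketch, assuming the same app: nginx selector as in the question (the nginx-lb name is hypothetical). Note that on bare metal a LoadBalancer Service stays in Pending unless something like MetalLB is installed to assign the external IP:

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb   # hypothetical name for illustration
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP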
After that, test the URL.
In order to access your local Kubernetes cluster Pods, a NodePort needs to be created. The NodePort will publish your service on every node using its public IP and a port. Then you can access the service using any of the cluster node IPs and the assigned port. Defining a NodePort in Kubernetes:
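A minimal sketch, assuming the same app: nginx selector as in the question (the nginx-nodeport name and the 30080 port are assumptions for illustration; if nodePort is omitted, Kubernetes picks one from the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport   # hypothetical name for illustration
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080    # must fall within the service-node-port-range (default 30000-32767)
      protocol: TCP

You can then reach it on any node's IP, e.g. curl http://192.168.1.131:30080.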
See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The nginx Ingress controller can also be replaced with Istio, if you want to benefit from a service mesh architecture.
See Installing Istio in Kubernetes under VirtualBox (without Minikube).
Yet another option is to expose the Nginx Ingress controller over a NodePort (although this is not recommended for production clusters). The NodePort type still gives you load-balancing capabilities, and you control which specific Pod (backing the Service endpoints) the traffic is sent to with service.spec.sessionAffinity and container probes.
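A minimal sketch of sticky sessions on a NodePort Service, again assuming the app: nginx selector (the nginx-sticky name is hypothetical). ClientIP affinity pins each client to a single backing Pod for the timeout period:

apiVersion: v1
kind: Service
metadata:
  name: nginx-sticky   # hypothetical name for illustration
spec:
  type: NodePort
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # 3 hours, which is also the default
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80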
If you had more than one replica of the nginx Pod in your Deployment spec (example here), you could control pod-to-node assignment via the pod affinity and anti-affinity feature, as sketched below.
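A minimal sketch of spreading replicas across nodes with podAntiAffinity, assuming the app: nginx label from the question; requiredDuringSchedulingIgnoredDuringExecution forbids scheduling two matching Pods onto the same hostname:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: nginx
              topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
        - name: nginx
          image: nginx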