Currently I'm working on a small hobby project which I'll make open source once it's ready. This service is running on Google Container Engine. I chose Container Engine to avoid configuration hassle, because the costs are affordable, and to learn new stuff.
My pods are running fine, and I created a service of type LoadBalancer to expose the service on ports 80 and 443. This works perfectly.
However, I discovered that for each LoadBalancer service, a new Google Compute Engine load balancer is created. This load balancer is pretty expensive and really overkill for a hobby project on a single instance.
To cut the costs I'm looking for a way to expose the ports without the load balancer.
What I've tried so far:
- Deployed a NodePort service. Unfortunately, exposing a port below 30000 is disallowed (see the sketch after this list).
- Deployed an Ingress, but this also creates a load balancer.
- Tried to disable HttpLoadBalancing (https://cloud.google.com/container-engine/reference/rest/v1/projects.zones.clusters#HttpLoadBalancing), but it still creates a load balancer.
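For reference, a manifest along these lines (the service name and ports are assumptions) is what gets rejected, because nodePort values must fall within the cluster's node port range, 30000-32767 by default:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 80   # rejected: outside the default 30000-32767 range
```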
Is there a way to expose port 80 and 443 for a single instance on Google Container Engine without a load balancer?
Yep, through externalIPs on the service. Example service I've used:
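A sketch along those lines (the service name, selector, and IP are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
  externalIPs:
    - 10.128.0.2   # placeholder: must be the node's internal IP on GCE
```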
Please be aware that the IPs listed in the config file must be internal IPs on GCE.
In addition to ConnorJC's great and working solution: The same solution is also described in this question: Kubernetes - can I avoid using the GCE Load Balancer to reduce cost?
The "internalIp" refers to the compute instance's (a.k.a. the node's) internal ip (as seen on Google Cloud Platform -> Google Compute Engine -> VM Instances)
This comment gives a hint at why the internal and not the external ip should be configured.
Furthermore, after having configured the service for ports 80 and 443, I had to create a firewall rule allowing traffic to my instance node:
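A sketch of such a rule (the rule name is an assumption; 0.0.0.0/0 opens the ports to the whole internet):

```sh
gcloud compute firewall-rules create allow-http-https \
  --allow tcp:80,tcp:443 \
  --source-ranges 0.0.0.0/0
```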
After this setup, I could access my service through http(s)://externalIp
If you have exactly one pod, you can use hostNetwork: true to achieve this:
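A minimal sketch of such a deployment (the image name and the static backend service are assumptions for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1   # exactly one pod, since it binds the node's ports directly
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      hostNetwork: true   # share the node's network namespace; ports 80/443 bind on the node itself
      containers:
        - name: frontend
          image: my-frontend:latest   # placeholder image
          ports:
            - containerPort: 80
            - containerPort: 443
          env:
            - name: BACKEND_URL
              value: http://static   # hypothetical backend; won't resolve, see the note below
```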
Note that by doing this your pod will inherit the host's DNS resolver, not Kubernetes', so you can no longer resolve cluster services by DNS name. For example, in the sketch above you cannot reach the static service at http://static. You can still reach services by their cluster IPs, which are injected into the pod as environment variables (e.g. STATIC_SERVICE_HOST for a service named static).

This solution is better than using the service's externalIPs because it bypasses kube-proxy, so you receive the correct source IP.
To synthesize @ConnorJC's and @derMikey's answers into exactly what worked for me:
Given a cluster pool running on the Compute Engine Instance:
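The relevant instance details looked something like this (the name and internal IP are placeholders; the external IP is the one referenced below):

```
gce vm name:  gke-my-app-cluster-pool-xxxx   # a node in the cluster's node pool
internal ip:  10.123.0.1                     # used in the service below
external ip:  34.56.7.001                    # publicly accessible
```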
I made the service:
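A sketch of that service, pointing externalIPs at the node's internal IP from above (the name and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  externalIPs:
    - 10.123.0.1   # the node's internal IP, not its external one
```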
and then opened the firewall for all(?) IPs in the project:
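Something along these lines (the rule name is an assumption):

```sh
gcloud compute firewall-rules create my-app-rule \
  --allow tcp:80 \
  --source-ranges 0.0.0.0/0
```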
and then my-app was accessible via the GCE instance's public IP 34.56.7.001 (not the cluster IP).

I prefer not to use cloud load balancers until necessary, because of cost and vendor lock-in.
Instead I use this: https://kubernetes.github.io/ingress-nginx/deploy/
It's a pod that runs a load balancer for you. That page has GKE-specific installation notes.
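Once the controller is installed, traffic is routed with an ordinary Ingress object; a minimal sketch (the backend service name and port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx   # route through the ingress-nginx controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app   # placeholder backend service
                port:
                  number: 80
```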