I intend to deploy a k8s + Rancher cluster on my local network, but my environment has several VLANs, with pfSense acting as a firewall and router between them.
My cluster runs on XCP-ng as the hypervisor, and I will specify which VLANs it should pass through to the cluster nodes.
I intend to have some services in different VLANs, because I have VLANs for development, DMZ, production, management, etc. Given that, I would like to know: do I have to take a different approach when deploying K8s + Rancher because of my environment?
To deploy a cluster whose pods span multiple VLANs, must the cluster nodes have multiple NICs, one per VLAN I intend to use?
For example, if my cluster has 6 nodes, 3 masters and 3 workers, must they all be in the same VLAN, or can they sit in different VLANs as long as they can communicate with each other?
If I want to deploy a pod on the development VLAN while my cluster resides on the management VLAN, would that be possible?
Thanks in advance for your help.
This is not possible: Kubernetes clusters have their own internal network, which is completely segregated from your local network.
While deploying your Kubernetes cluster (whether with Rancher or any other on-premises distribution), you can define which CIDRs your cluster will use.
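As a sketch, assuming an RKE-provisioned cluster (common with Rancher), these CIDRs live in `cluster.yml`; the values shown are RKE's defaults, and you should adjust them so they don't overlap your VLAN subnets:

```yaml
# cluster.yml (RKE) - pod and service CIDRs for the cluster's internal network
services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16  # service (ClusterIP) network
  kube-controller:
    cluster_cidr: 10.42.0.0/16              # pod network
    service_cluster_ip_range: 10.43.0.0/16
```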
You may be thinking: if Kubernetes has its own network, how can I talk to the applications I deployed in my cluster?
You can expose your resources by using a Service or an Ingress. For example, when you create a Service with `type: LoadBalancer`, the Service is allocated an external or public IP address (endpoint) that can be reached from your internal network, as in the sketch below.
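A minimal sketch of such a Service; the name, labels, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dev-web            # hypothetical application
spec:
  type: LoadBalancer       # request an externally reachable IP
  selector:
    app: dev-web
  ports:
    - port: 80             # port exposed on the external IP
      targetPort: 8080     # port the pods listen on
```

Once a load-balancer implementation (such as MetalLB, discussed below) assigns addresses, `kubectl get svc` shows them in the EXTERNAL-IP column; illustrative output:

```
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
dev-web    LoadBalancer   10.43.15.102   192.168.1.240   80:31588/TCP   2m
prod-web   LoadBalancer   10.43.22.87    192.168.1.241   80:30712/TCP   2m
```

As can be seen above, two Services have an external IP defined.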
In your scenario you need these External IPs to be IPs from your local network. This can be achieved using MetalLB.
In MetalLB you can define which IPs from your local network will be used. For example, the following configuration gives MetalLB control over the range 192.168.1.240 to 192.168.1.250:
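A sketch assuming the CRD-based configuration of MetalLB 0.13+ (older releases used a ConfigMap instead); the resource names are illustrative:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool        # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement       # announce pool addresses on the local L2 segment
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```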
This ties MetalLB to a single range, and that's not what you need. So please take a look at this article, where it's explained how you can create IP pools and use them.
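As a sketch of that approach (the pool names, VLAN subnets, and Service are assumptions), you could define one pool per VLAN and pin a Service to a pool with the `metallb.universe.tf/address-pool` annotation:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: dev-pool            # assumed development VLAN subnet
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.240-192.168.10.250
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: prod-pool           # assumed production VLAN subnet
  namespace: metallb-system
spec:
  addresses:
    - 192.168.20.240-192.168.20.250
---
apiVersion: v1
kind: Service
metadata:
  name: dev-web
  annotations:
    metallb.universe.tf/address-pool: dev-pool   # take an IP from the dev pool
spec:
  type: LoadBalancer
  selector:
    app: dev-web
  ports:
    - port: 80
```

Note that for MetalLB's layer 2 mode to announce an address, the nodes still need an interface on that subnet (e.g., a VLAN-tagged interface passed through from XCP-ng), which ties back to your NIC-per-VLAN question.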