I have a home Kubernetes cluster that runs in 4 VMs on top of Proxmox. Proxmox is tagged to VLAN 20, the Kubernetes VMs are tagged to VLAN 40.
The Kubernetes VMs are BGP neighbors of my router, so I can tag pods to run on one of two other VLANs designated as DMZ space, 50 and 60. In short, the network looks like this:
- VLAN 1: Networking Hardware
- VLAN 20: Physical Machines
- VLAN 40: Kubernetes VMs
- VLAN 50: Internal Kubernetes Deployments
- VLAN 60: External Kubernetes Deployments
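For concreteness, the peering described above looks roughly like this on an EdgeRouter (a hedged sketch, not my actual config: the AS number and neighbor addresses are made up, and each Kubernetes VM would appear as one neighbor):

```
# Hypothetical EdgeOS config: iBGP peering with two Kubernetes VMs on VLAN 40
set protocols bgp 64512 parameters router-id 10.0.1.1
set protocols bgp 64512 neighbor 10.0.40.11 remote-as 64512
set protocols bgp 64512 neighbor 10.0.40.12 remote-as 64512
```

The nodes then advertise their pod routes (the VLAN 50/60 ranges) to the router over these sessions.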
This works great: everything can communicate with one another and with the internet just fine. With one exception: performance.
My Proxmox server also acts as my storage server, advertising a ZFS pool over NFS. This works great and is capable of some pretty fast reads and writes for a home storage server: upwards of 6 Gb/s reads, for example.
When I used to run Docker containers directly on my Proxmox server, virtual switching allowed the containers to interact with the NFS server hosted by Proxmox by hostname at nearly that speed.
Furthermore, before I set up VLANs, the Kubernetes VMs used to run on the same VLAN (1) as Proxmox itself. And any pods that were deployed on Kubernetes were also able to interact with the NFS server hosted by Proxmox by hostname at nearly that speed.
However, now that I have configured VLANs and use BGP to provision my Kubernetes pods on VLANs separate from the hosts, networking is capped at 1 Gb/s, if not worse.
My Ubiquiti EdgeRouter Lite and UniFi Switch 8 are both 1 Gb devices, so that cap makes sense. However, it is starting to feel very painful in my lab.

For example, cover art in Plex Media Server takes upwards of 10 seconds to load when I scroll through my library, because Kubernetes volume-mounts the database from the NFS server. Similarly, Deluge is behaving incredibly poorly: the web interface crashes frequently, and any action such as opening the Preferences panel or viewing the Details section of a new torrent can take several minutes! Deluge's cache is set to use 4 GB of memory, but I'm unsure whether these issues are caused by my network or whether Deluge just doesn't scale well to 1,100 torrents.

Lastly, my Kubernetes deployments that interact heavily with a database (Plex, Jira, etc.) sometimes end up with a corrupted database after a few weeks of running. This is presumably because of network latency, but I'm not sure.
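To put the regression in perspective, a quick back-of-the-envelope conversion (decimal units, ignoring protocol overhead) shows how much headroom the 1 Gb links leave on the table:

```python
def gbps_to_mb_per_s(gbps: float) -> float:
    """Convert link speed in gigabits/s to megabytes/s (decimal units)."""
    return gbps * 1000 / 8

# A 1 Gb/s link tops out around 125 MB/s, while the ZFS pool can feed
# roughly 750 MB/s at the 6 Gb/s read speeds mentioned above.
print(gbps_to_mb_per_s(1))  # 125.0
print(gbps_to_mb_per_s(6))  # 750.0
```

So every NFS-backed workload is getting at most about a sixth of what the storage itself can deliver, before NFS and TCP overhead are even counted.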
I'm hoping to have a few questions answered with this post:
1. I know my network is complex, especially for a homelab. But my homelab exists almost entirely so I can learn for my job, and the hobby is fun for me, especially when I indulge in obscene levels of complexity. Given that I'm okay with the complexity, does everything seem configured correctly to you?
2. Would purchasing a 10 Gb switch resolve this issue, or would I also need a 10 Gb router, since the EdgeRouter is a BGP neighbor of the Kubernetes nodes?
3. If both a switch and a router were necessary, could I instead purchase a 10 Gb switch with BGP capabilities?
4. What hardware would you recommend to resolve this issue? Ideally I would like to keep the total cost under $500-1,000, but that doesn't look possible given the incredibly high cost of 10 Gb routers.
5. Would it be possible to use a different Kubernetes StorageClass to store the data directly on the nodes? What would that look like?
6. Would you recommend a different solution to my problem?
In hindsight:
- 2-3. Yes, it would resolve the issue, provided the switch had BGP capabilities.
- 5. Yes, there are many of these. hostPath comes to mind, but also things like the dynamic ZFS provisioner offered by OpenEBS.
- 6. Yes: handling network security with Istio.
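As a sketch of what node-local storage could look like (all names here are hypothetical; hostPath works similarly as a pod-level volume, but a local PersistentVolume lets the scheduler pin pods to the node that holds the data):

```yaml
# Hypothetical: a local PersistentVolume on one Kubernetes node,
# so the database lives on the node's disk instead of NFS.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: plex-db-local          # illustrative name
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /var/lib/plex-db     # directory on that node's filesystem
  nodeAffinity:                # required for local volumes: pin to the node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s-node-1   # hypothetical node name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plex-db
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 50Gi
```

The trade-off is losing node mobility: a pod bound to this claim can only run on k8s-node-1, which is what a dynamic provisioner like OpenEBS's ZFS LocalPV helps manage.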