I'm pretty sure we aren't the only site using jumbo frames (~9k), right? Well, for those of you that are doing it as well, what are you doing about virtualization? Namely:
- Xen doesn't support packets over 1500 bytes on bridged interfaces (see the host-side example just below this list). Assigning a real interface to each VM might work, but is a non-starter for me.
- KVM will do it if I futz around with the source. Otherwise I top out at 4k packets. Messing with the source isn't something I really want to do (good-bye to taking upstream patches without rebuilding!).
- VMware doesn't mention it either way. Their vSphere pricing turns me off, but maybe I can get away with just ESX(i)?
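To be concrete, what I'm after is being able to run the host's uplink and the bridge the guests attach to at MTU 9000, something along these lines (the interface names are just examples):

    # raise the MTU on the physical uplink and on the guest-facing bridge
    ip link set dev eth0 mtu 9000
    ip link set dev br0 mtu 9000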
I'm not using jumbo packets for iSCSI or NFS. I'm really moving a ton of data between nodes, and upping my MTUs has helped with speeds there. My platform is CentOS 5.x, and I'd prefer to stay with that, but I suppose other options are possible? You tell me!
Anyone doing something clever I'm not thinking of?
[Edit]
Why do I want this? Well, my existing machines all use MTUs of 9000, and the place where that matters is our clustering layer. If I add a new machine that doesn't speak jumbo packets, it simply can't join the cluster. So while I would love to revisit the question of "do we actually need jumbo packets?", that's a much bigger project than just bringing a new machine online. New machines have to be able to talk to the cluster. Right now that means deploying on bare hardware, and that sucks.
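For reference, there's nothing exotic about how the existing boxes do it; it's just the stock MTU setting in the CentOS 5 ifcfg file for the cluster-facing NIC (eth1 here is an example name):

    # /etc/sysconfig/network-scripts/ifcfg-eth1
    DEVICE=eth1
    BOOTPROTO=none
    ONBOOT=yes
    # jumbo frames for the cluster network
    MTU=9000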
For ESXi 4 standard Virtual Switches you have to do this from a CLI. If you use the (unsupported) pseudo-console mode or the (supported) vMA, the relevant command is:
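    # on the host's (unsupported) local console:
    esxcfg-vswitch -m 9000 vSwitch0

    # or from the vMA / remote CLI (plus the usual --server/--username connection options):
    vicfg-vswitch -m 9000 vSwitch0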
Replace vSwitch0 with the relevant Virtual Switch IDs and repeat as necessary for all vSwitches that you need to enable for 9K jumbo frames.
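To confirm the change took, listing the vSwitches shows the configured MTU for each one:

    esxcfg-vswitch -l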
In larger (much larger) environments where you are using Distributed Virtual Switches, you can change the MTU from the vSphere Client GUI.
In my experience, jumbo frames are still a long way from being reliably usable. The offload technology is a mess, especially what Broadcom ships, and the switches often can't support it well enough.
For VMs especially, I'd stick with normal MTU sizes and improve throughput by using mode-4 (802.3ad) bonding (a minimal config is sketched below), or by switching to 10GbE or even InfiniBand.
Having said that, as far as I know KVM's virtio_net drivers aren't actually rate-limited: even though they show up as 1G links, they can easily push past that if the underlying bandwidth is there.
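For what it's worth, a minimal mode-4 bonding sketch on CentOS 5 looks something like this (interface names and addresses are examples, and the switch ports need to be configured for LACP):

    # /etc/modprobe.conf -- load the bonding driver in 802.3ad (mode 4)
    alias bond0 bonding
    options bond0 mode=4 miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    # example address
    IPADDR=192.168.100.11
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- repeat for each slave NIC
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

Restart the network service afterwards and check /proc/net/bonding/bond0 to verify that the LACP aggregator actually came up.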
Not a direct answer as such, but if you're moving lots of data between multiple nodes, have you considered InfiniBand? It's great for that kind of thing.