I'm setting up a file server that uses zfs send (via zrep) to sync data to a slave, the idea being that I can manually fail over to the slave if and when needed. The servers have multiple network interfaces, three of which are configured as a private connection between the two machines on a dedicated VLAN. They are possibly routed through different switches, but I'm not sure: the servers are in separate buildings and networking falls outside my responsibilities.

It seems sensible to aggregate these three interfaces and send the zfs send stream over the aggregated link. However, I have no control over the switches, so I can't configure LACP (or whatever else is needed) on them. Is switch-side configuration required for Solaris link aggregation to balance the data correctly across the three interfaces?

Presumably IPMP won't help much here, because only the outgoing data would be spread over three interfaces while the incoming data on the slave would still arrive on a single interface. Is that right?
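For reference, what I was planning on the Solaris 11 side looks roughly like this (the link names net1-net3 and the address are placeholders for my actual setup):

    # Trunk aggregation over the three private links; -L active requests
    # LACP, which (if I understand correctly) needs matching switch-side
    # configuration. -P L3,L4 hashes flows on IP addresses and ports, so
    # a single TCP connection would still ride just one physical link.
    dladm create-aggr -L active -P L3,L4 -l net1 -l net2 -l net3 aggr0
    ipadm create-ip aggr0
    ipadm create-addr -T static -a 192.168.100.1/24 aggr0/v4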
Another alternative might be an SCTP connection, taking advantage of SCTP's multi-homing to spread the data across the three links. Has anyone tried this? Is there an existing program that can create such a connection? socat and ncat both appear able to make SCTP connections, but it isn't clear from their manpages whether they can set up multi-homed connections.
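The closest I can see with socat is an untested sketch like the following (addresses, port, and dataset names are placeholders). Binding the listener to the wildcard address should, in principle, leave all local addresses available for multi-homing, but I haven't found a way to bind specific additional addresses explicitly:

    # On the slave: receive the stream over SCTP and feed it to zfs receive.
    socat -u SCTP4-LISTEN:5000,reuseaddr STDOUT | zfs receive tank/data

    # On the master: pipe the zfs send stream into an SCTP connection.
    zfs send tank/data@snap | socat -u STDIN SCTP4-CONNECT:192.168.100.2:5000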
Finally, has anyone tried configuring jumbo frames on the network interfaces used for zfs send/receive? I'm inclined to assume it would improve throughput, but would it in practice? Or is it a bad idea, for example because of increased latency?
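In case it matters, what I had in mind is simply raising the MTU on each private link on both servers (Solaris 11 syntax; the link name is a placeholder), assuming the switch path passes the larger frames:

    # Set a 9000-byte MTU on each of the three private links; the value
    # has to match on both servers and everything in between.
    dladm set-linkprop -p mtu=9000 net1
    dladm show-linkprop -p mtu net1    # verify the change took effect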