I am looking for options to increase the speed between buildings, ideally without having to replace existing fiber.
Current situation:
- Multi-building campus, with Gbps fiber links between buildings.
- Netgear switches with the fiber plugged into Netgear GBICs on those switches.
- HP 2510G Layer 3 switch is the "core".
- An unused fiber pair exists between the server room and one other building
To mitigate the risk of a building fire or sprinkler discharge causing data loss, we want to move the backup server to another building on our campus. However, the backup server has two GbE NICs teamed so that the backups can complete during the nightly window.
I don't think the core HP switch can support any ports higher than 1GbE. My initial thought is to put in a matching HP at the other end, team two Gb fiber ports, and then team two Gb ports for the backup server at the other end.
But I am wondering about other options that may be faster, more resilient, or less expensive.
If your switches support 10G (sounds like this is doubtful), then you could just upgrade to 10G optics and you'd be all set. If that's not an option, you could look into getting a pair of CWDM mux/demux boxes, one on each end of the fiber run. That way, you could split out two or more 1G streams, each on its own wavelength.
You've got a tough problem here. I suspect the only real solution is going to be a core upgrade to support 10Gbps ports, because the only other option that comes to mind (and which you alluded to in your question), link aggregation, doesn't work very well when you've only got one endpoint. Every switch implementation I've seen has annoyingly limited options for balancing traffic across the member links, usually just a hash of the source/destination MAC addresses. This is annoying to me, because the Linux channel bonding driver can do straight round-robin, which would work well enough in this sort of situation.
If you do find that your switches will all do that mode (I haven't played with Netgear switches, only Cisco Catalyst and HP ProCurve), then it'd definitely be worth giving it a go. Since the device on the far end is a backup server, the traffic should be (by my estimation) fairly asymmetric, so balancing on source MAC address might work too, since the inbound traffic presumably comes from a wide range of sources. I wouldn't leap straight to that as a solution, though, because all of your return traffic will definitely end up on one link (one source MAC address for all of it), and if you're running close to 2Gbps total, you'll end up with one clogged link.
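To see why source-MAC balancing helps inbound but not outbound, here's a toy model. The hash (last MAC byte mod link count) and the MAC values are assumptions for illustration only; real switches use vendor-specific hash algorithms.

```python
# Toy model of source-MAC load balancing over a 2-link aggregate.
# The hash used here (last byte of the MAC, mod link count) is a
# stand-in for whatever the switch actually does.

def link_for(src_mac: int, n_links: int = 2) -> int:
    """Choose the member link from the frame's source MAC address."""
    return (src_mac & 0xFF) % n_links

server_mac = 0x0A                 # hypothetical backup server MAC (last byte)
client_macs = range(0x10, 0x20)   # sixteen hypothetical client MACs

# Inbound backup traffic: many different source MACs, so both links get used.
inbound_links = {link_for(m) for m in client_macs}
assert inbound_links == {0, 1}

# Return traffic: a single source MAC, so every frame lands on one link.
outbound_links = {link_for(server_mac)}
assert outbound_links == {0}
```

For a backup workload that's heavily inbound, that single outbound link may never matter; the asserts above just make the asymmetry explicit.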
A possible other option, if you're open to oddball solutions, would be to split the backups across more than one machine. That will tend to balance the traffic by virtue of the different MAC addresses, although the hashing algorithm can still "accidentally" hash all the MACs onto the same link, which is never fun.
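The "accidental" collision is easy to illustrate with the same kind of toy hash as above (again, the MAC values and the mod-2 hash are assumptions, not any particular vendor's algorithm):

```python
# Sketch of the collision case: even with two backup servers, a
# source-MAC hash can map both machines onto the same member link.

def link_for(src_mac: int, n_links: int = 2) -> int:
    """Hypothetical src-MAC hash: last byte of the MAC, mod link count."""
    return (src_mac & 0xFF) % n_links

backup_a = 0x12  # even last byte -> hashes to link 0
backup_b = 0x14  # also even     -> hashes to link 0 as well

# Both servers end up on one link; the second link sits idle.
assert link_for(backup_a) == link_for(backup_b) == 0
```

With only two or three heavy talkers, whether you get a balanced split comes down to luck of the MAC addresses, which is exactly the "never fun" part.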
If none of that works, I think you're in for a core network upgrade. Switches capable of handling 10G optics aren't that expensive these days (a lot better than 48-port 10G copper switches, anyway...)