The small college where I work is having some very strange network issues. I'm looking for any advice or ideas here. We were fine over the summer, but the trouble began a few days after students returned to campus in force for the fall term.
Symptoms
The main symptom is that internet access will work, but it's very slow... often to the point of timeouts. As an example, a typical result from Speedtest.net will show a 0.4 Mbps download but a 3 to 8 Mbps upload. Lesser symptoms may include severely limited performance transferring data to and from our file server, or in some cases the inability to log in to the computer at all (cannot reach the domain controller). The issue crosses multiple vlans and has affected devices on nearly every vlan we operate.
The issue does not impact all machines on the network. An unaffected machine will typically see at least 11 Mbps download from speedtest.net, and perhaps much more depending on larger campus traffic patterns at the time.
There is one variation on the larger issue. We have one vlan where users were unable to log in to nearly any of the machines at all. IT staff would log in using a local administrator account (or in some cases cached credentials), and from there a release/renew or pinging the gateway would allow the machine to work... for a while. Complicating this issue is that this vlan covers our computer labs, which use software called Deep Freeze to completely reset the hard drives after a reboot. It could just be the same issue manifesting differently because of stale data on machines that haven't permanently retained low-level changes in weeks. We were able to solve this, however, by creating a new vlan and moving the labs over to it wholesale.
Investigations
Eventually we noticed that the affected machines all had recent dhcp leases. We can predict when a machine will become "slow" by watching when a dhcp lease comes up for renewal. We played with setting the lease time very short for a test vlan, but all that did was remove our ability to predict when a machine would become slow. Machines with static IPs have pretty much always worked normally. Manually releasing/renewing an address will never cause a machine to become slow; in fact, in some cases this process has fixed a machine in that state, though most of the time it doesn't help. We also noticed that mobile machines like laptops are likely to become slow when they cross to new vlans. Wireless on campus is divided up into "zones", where each zone maps to a small set of buildings. Moving to a new building can place you in a new zone, thereby causing you to get a new address. A machine resuming from sleep mode is also very likely to be slow.
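To make the lease/slowness correlation easier to track, something like the following could be left running on a Windows client; it just logs each adapter's "Lease Expires" line every few minutes so slow spells can be lined up against renewals. This is a rough sketch, not what we actually run:

    # Rough sketch: log each adapter's DHCP "Lease Expires" time every
    # few minutes so slow spells can be lined up against renewals.
    # Assumes a Windows client, where "ipconfig /all" prints that line.
    import subprocess
    import time

    def lease_expiry_lines():
        out = subprocess.run(["ipconfig", "/all"],
                             capture_output=True, text=True).stdout
        return [ln.strip() for ln in out.splitlines() if "Lease Expires" in ln]

    while True:
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        for line in lease_expiry_lines():
            print(f"{stamp}  {line}")
        time.sleep(300)  # check again in 5 minutes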
Mitigations
Sometimes, but not always, clearing the arp cache on an affected machine will allow it to work normally again. As already mentioned, releasing/renewing a local machine's IP address can fix that machine, but it's not guaranteed. Pinging the default gateway can also sometimes help with a slow machine.
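For reference, those client-side mitigations boil down to something like this on a Windows machine (run from an elevated prompt; the gateway address below is a placeholder, not one of our real ones):

    # Rough sketch of the client-side mitigations, in order, on a
    # Windows machine stuck in the slow state. Run from an elevated
    # prompt; the gateway address is a placeholder for the vlan's own.
    import subprocess

    GATEWAY = "10.0.0.1"  # placeholder - substitute the vlan's default gateway

    def run(cmd):
        print(">", " ".join(cmd))
        subprocess.run(cmd, check=False)

    run(["arp", "-d", "*"])            # clear the local ARP cache
    run(["ping", "-n", "3", GATEWAY])  # re-learn the gateway entry
    run(["ipconfig", "/release"])      # fall back to a release/renew
    run(["ipconfig", "/renew"])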
What seems to help most to mitigate the issue is clearing the arp cache on our core layer-3 switch. This switch serves our dhcp system, acts as the default gateway on all vlans, and handles inter-vlan routing. The model is a 3Com 4900SX. To try to mitigate the issue, we have the ARP cache timeout on the switch set all the way down to the lowest possible value, but it hasn't helped. I also put together a script that connects to the switch every few minutes and resets the cache automatically. Unfortunately, this does not always work, and can even cause some machines to end up in the slow state for a short time (though these seem to correct themselves after a few minutes). We currently have a scheduled job that runs every 10 minutes to force the core switch to clear its ARP cache, but this is far from perfect or desirable.
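The scheduled job is essentially the skeleton below, not our literal script. The management IP, credentials, prompts, and clear command are all placeholders; the real clear-ARP syntax depends on the 4900SX firmware:

    # Skeleton of the scheduled job: telnet to the core switch and issue
    # a clear-ARP command. The IP, credentials, prompts, and CLEAR_CMD
    # are placeholders - the real syntax depends on the 4900SX firmware.
    # telnetlib ships with Python up to 3.12; use an SSH/telnet library
    # such as paramiko on newer versions.
    import telnetlib

    SWITCH = "192.0.2.1"      # placeholder management IP
    USERNAME = b"admin"       # placeholder credentials
    PASSWORD = b"secret"
    CLEAR_CMD = b"clear arp"  # placeholder - use the switch's actual command

    with telnetlib.Telnet(SWITCH, 23, timeout=10) as tn:
        tn.read_until(b"Login:", timeout=5)
        tn.write(USERNAME + b"\n")
        tn.read_until(b"Password:", timeout=5)
        tn.write(PASSWORD + b"\n")
        tn.write(CLEAR_CMD + b"\n")
        tn.write(b"logout\n")
        print(tn.read_all().decode(errors="replace"))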
Reproduction
We now have a test machine that we can force into the slow state at will. It is connected to a switch with ports set up for each of our vlans. We make the machine slow by connecting it to different vlans; after a new connection or two it will be slow.
It's also worth noting in this section that this has happened before at the start of prior terms, but in the past the problem has gone away on its own after a few days. It solved itself before we had a chance to do much diagnostic work... which is why we've allowed it to drag so far into the term this time around; the expectation was that this would be a short-lived situation.
Other Factors
It's worth mentioning that we have had about half a dozen switches just outright fail over the last year. These are mainly 2003/2004-era 3Coms (mostly 4200s) that were all put in at about the same time. The failures have mostly been power supplies, though in a couple of cases we have used a power supply from a switch with a failed mainboard to bring a switch with a failed power supply back to life. They should still be covered under warranty, but HP has made getting service somewhat difficult. We do have UPS devices on all but three or four switches now, but that was not the case when I started two and a half years ago. Severe budget constraints (we were on the Dept. of Ed's financially challenged institutions list a couple of years back) have forced me to look to the likes of Netgear and TrendNet for replacements, but so far these low-end models seem to be holding their own.
It's also worth mentioning that the big change on our network this summer was migrating from a single cross-campus wireless SSID to the zoned approach mentioned earlier. I don't think this is the source of the issue since, as I've said, we've seen this before. However, it's possible it is exacerbating the issue, and that may be much of the reason it's been so hard to isolate.
Diagnosis
At first it seemed clear to us, given the timing and persistent nature of the problem, that the source of the issue was an infected (or malicious) student machine doing ARP cache poisoning. However, repeated attempts to isolate the source have failed. Those attempts include numerous Wireshark packet traces and even taking entire buildings offline for brief periods. We have not even been able to find a smoking-gun bad ARP entry. My current best guess is an overloaded or failing core switch, but I'm not sure how to test for this, and the cost of replacing it blindly is steep.
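For what it's worth, the kind of check we've been doing by hand in Wireshark could be automated with a small sniffer that alerts whenever anything other than the core switch answers ARP for a gateway address. A rough sketch, assuming scapy and root privileges; both addresses are placeholders:

    # Rough sketch: alert if any MAC other than the core switch answers
    # ARP for the gateway - the classic ARP-poisoning signature.
    # Assumes scapy (pip install scapy) and root; addresses are placeholders.
    from scapy.all import ARP, sniff

    GATEWAY_IP = "10.0.0.1"             # placeholder gateway address
    EXPECTED_MAC = "00:11:22:33:44:55"  # placeholder: the core switch's MAC

    def check(pkt):
        if ARP not in pkt:
            return
        arp = pkt[ARP]
        if arp.op == 2 and arp.psrc == GATEWAY_IP and arp.hwsrc.lower() != EXPECTED_MAC:
            print(f"ALERT: {arp.hwsrc} is answering ARP for gateway {GATEWAY_IP}")

    sniff(filter="arp", prn=check, store=False)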
Again, any ideas appreciated.
Update:
Core switch is replaced. After 4 days everything is running well... but I'll wait for the two-week mark before calling the issue resolved.
Joel,
Since you have trunks set up and can duplicate the issue at will, install Wireshark on a laptop and mirror/span an uplink port. If you see a packet rate over 10,000 packets per second, or port utilization near the link's maximum speed, you have a problem.
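Something like this will give you a rough packets-per-second number from tshark (it ships with Wireshark) on the laptop doing the capture; the interface name is just an example, list yours with "tshark -D":

    # Rough packets-per-second estimate using tshark (ships with
    # Wireshark). The interface name is an example - list yours with
    # "tshark -D".
    import subprocess

    IFACE = "eth0"   # placeholder capture interface
    DURATION = 10    # seconds to sample

    result = subprocess.run(
        ["tshark", "-i", IFACE, "-a", f"duration:{DURATION}", "-n"],
        capture_output=True, text=True,
    )
    pps = len(result.stdout.splitlines()) / DURATION
    print(f"~{pps:.0f} packets/second on {IFACE}")
    if pps > 10000:
        print("Over 10,000 pps - likely a loop or storm on this segment.")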
You might have a bad-hardware or spanning-tree issue. Normally I've found it's users plugging in both NICs on their machine "to get more throughput".
For spanning-tree issues you can normally turn on loop detection or per-port broadcast limiting, depending on your vendor; this will shut down any port where a loop is found. You can also turn on "BPDU protection", which disables the port a BPDU was received on and throws an error to your syslog/SNMP trap receivers.
Joe
I have seen issues similar to this before, and it has been a loop in the LAN, which causes chaos and saturation of the entire subnet (presumably from broadcast traffic due to the switch seeing its own MAC on an additional port).
EDIT: Also, this is common at educational establishments (it came up at two of my previous sysadmin jobs), as the little darlings like to mess around with patch cables/sockets...
Sounds to me like you've got some bad hardware that's causing broadcast storms. Use Wireshark to watch for broadcasts and find the host that's giving you trouble...
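Something along these lines will show the chattiest broadcast sources; it assumes tshark is installed, and the interface name is a placeholder:

    # Capture only broadcast frames for 30 seconds and print tshark's
    # Ethernet endpoint statistics so the noisiest source MACs stand out.
    # Assumes tshark is installed; the interface name is a placeholder.
    import subprocess

    IFACE = "eth0"   # placeholder capture interface

    subprocess.run([
        "tshark", "-i", IFACE,
        "-f", "ether broadcast",      # capture filter: broadcast frames only
        "-a", "duration:30",          # sample for 30 seconds
        "-q", "-z", "endpoints,eth",  # summarize traffic per Ethernet endpoint
    ])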
Joe's idea is a good one, but given that it's not likely to be a broadcast storm creating your issue (I think you're on the right track with ARP cache poisoning or a similar issue; it might even be an IP address conflict), it probably won't solve the problem.
A related technique is to use Dynamic ARP Inspection together with DHCP snooping, if your switches support them. If you turn this on, the switches will watch DHCP transactions and only allow ARP entries that match known entries in the DHCP binding database, or ones you have manually specified.
If your switches don't have this feature, another option for tracking it down is the Linux utility arpwatch - it keeps track of ARP traffic and tells you when it notices an IP-to-MAC mapping change.
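If standing up arpwatch isn't practical, a rough equivalent can be knocked together with scapy. This is just a sketch of the idea, not a substitute for the real daemon:

    # A tiny arpwatch-style monitor: remember the MAC last seen for each
    # IP in ARP traffic and report when a mapping changes ("flip flop").
    # Assumes scapy and root privileges; not a replacement for arpwatch.
    from scapy.all import ARP, sniff

    seen = {}  # ip -> mac

    def watch(pkt):
        if ARP not in pkt:
            return
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc.lower()
        if ip in seen and seen[ip] != mac:
            print(f"flip flop: {ip} moved from {seen[ip]} to {mac}")
        seen[ip] = mac

    sniff(filter="arp", prn=watch, store=False)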