We run special hardware configurations that make heavy use of LLDP. We have a few new racks of servers that all use the Intel X710 10Gb network card, and LLDP suddenly stopped working on them. Our implementation of LLDP is simple: enable LLDP on the TOR (top of rack) switch using default TLVs, enable LLDP on the Linux image using lldpad (CentOS 6.5), and use lldptool to extract neighbor information. This has worked for thousands of machines in the past. Only, for these machines with these NICs, the whole thing just stopped working.
Packet dumps from the switches and the servers showed that frames were properly sent from the servers, that the switches were receiving them, and that the switches were sending TLV frames back. The servers, however, were not receiving the switch's TLV frames, leaving us scratching our heads. We placed other machines with different NICs on the same TOR and they received LLDP data as expected.
I asked the Googles...
According to this link, it seems that these X710s are probably running an internal LLDP agent that is intercepting LLDP frames from the switch. The firmware on the affected machines where we're seeing this occur is:
# ethtool -i eth2
driver: i40e
version: 1.3.47
firmware-version: 4.53 0x80001e5d 17.0.10
bus-info: 0000:01:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
The method described there for disabling the internal LLDP agent on the NIC does not work. I'm still digging around, but I figure I have a few options:
- Find the correct way to disable the internal LLDP agent on the NIC and keep the existing method of extracting LLDP data on these machines -- preferred.
- Use the NIC's LLDP agent and find a way to extract the neighbor TLVs from the NIC.
Has anyone else experienced the same or similar issues with these cards and if so, how did you get around the problem?
I figure that if I wanted to use the internal agent's data, it would be exposed via ethtool or SNMP, but so far I have been unsuccessful at finding a way to surface the information.
TIA
EDIT: For the record, when I attempt the steps outlined in the Intel forums, I get the following output:
root@host (~)# find /sys/kernel/debug/
/sys/kernel/debug/
root@host (~)# mkdir /sys/kernel/debug/i40e
mkdir: cannot create directory `/sys/kernel/debug/i40e': No such file or directory
OK. So the Googles came through for me. Here's how to fix the issue.
Turns out that in order to use the debug filesystem, it needs to be mounted first. We're using a memfs OS to run commands on the machines we're tuning and by default we don't mount debugfs. So this script gave me the answer I needed.
...and the following steps for my use case worked:
yielding:
Other helpful links:
http://comments.gmane.org/gmane.linux.network/408868
https://communities.intel.com/thread/87759
https://sourceforge.net/p/e1000/mailman/message/34129092/
And my Google search
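For reference, the fix those links describe boils down to two commands. Here is a sketch, assuming the standard i40e debugfs interface; the PCI address 0000:01:00.2 is taken from the ethtool output above, so adjust it for your NIC (the DRY_RUN guard is mine, added so the script can be inspected safely before running it for real):

```shell
#!/bin/sh
# Sketch: stop the i40e embedded LLDP agent via debugfs.
# DRY_RUN=1 (the default here) only prints the commands; clear it on a real host.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = "1" ] && echo "$@" || eval "$@"; }

# debugfs must be mounted first -- our memfs image does not do this by default
run "mount -t debugfs none /sys/kernel/debug"

# Tell the firmware LLDP agent to stop; the PCI address comes from the
# bus-info field of `ethtool -i` (0000:01:00.2 in the output above)
run "echo lldp stop > /sys/kernel/debug/i40e/0000:01:00.2/command"
```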
Created an init script to do this on machine startup. Any pull requests appreciated.
If anyone knows how to tell the status of the embedded lldp agent it would be appreciated. This could be adapted for systemd with some better exit codes.
https://github.com/timhughes/i40e-lldp-agent/
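A minimal sketch of that systemd adaptation (the unit name, PCI address, and paths are illustrative, not taken from the linked repo):

```
# /etc/systemd/system/i40e-lldp-stop.service -- illustrative name
[Unit]
Description=Stop the i40e embedded LLDP agent
Before=network-pre.target

[Service]
Type=oneshot
RemainAfterExit=yes
# debugfs may already be mounted; the leading "-" ignores a failure here
ExecStartPre=-/bin/mount -t debugfs none /sys/kernel/debug
ExecStart=/bin/sh -c 'echo lldp stop > /sys/kernel/debug/i40e/0000:01:00.2/command'

[Install]
WantedBy=multi-user.target
```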
It's a firmware feature that can be toggled off.
Since October 13, 2017, Intel has released version 2.3.6 of their driver, which supports toggling off the LLDP handling using an ethtool private flag. The toggle is applied per interface, substituting `<interface name>` with your interface name (for example, eth0).
Download Intel's i40e driver for X710/XL710, version 2.3.6.
Installation Instructions (source)
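Assuming the private flag is named disable-fw-lldp, the toggle would look something like this (eth0 as the example interface):

```
# ethtool --show-priv-flags eth0
# ethtool --set-priv-flags eth0 disable-fw-lldp on
```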
This is from Intel's commit:
As the ethtool toggle does not seem to be persistent across reboots, we've set up the following udev rule.
/etc/udev/rules.d/10-disable-fw-lldp.rules:
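A sketch of what such a rule could contain (the path to ethtool and the flag name are assumptions; adjust to your distribution):

```
ACTION=="add", SUBSYSTEM=="net", DRIVERS=="i40e", RUN+="/usr/sbin/ethtool --set-priv-flags $name disable-fw-lldp on"
```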