I fixed an issue earlier today, but I'm interested in learning WHY it worked. We set up a new Hyper-V virtual machine only to discover that HTTP traffic wasn't working. HTTPS, pings, everything else worked fine.
After some prodding around I took a shot in the dark. On the Hyper-V host server, the physical NIC had an advanced setting, "Max Ethernet Frame Size", set to 1500. After changing this setting to 1514 the issue was fixed. Setting it to 1512 did not solve the issue; 1514 is the magic number.
My best guess is that when this setting was set to 1500, incoming pings still got through because their data payload is a lot smaller than that of, say, HTTP traffic. As for HTTPS traffic, I read about something called "Path MTU Discovery", which I'm going to assume is why HTTPS traffic was getting through fine, albeit slower.
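To check whether that guess is even plausible, here is a small sketch of the arithmetic. The sizes are standard IPv4/ICMP/Ethernet values (not measured from my setup), and it assumes the NIC's limit counts the Ethernet header but not the FCS:

```python
# Illustrative check of which packets survive a 1500-byte max frame size.
# Sizes are standard protocol values, assumed for the sake of the sketch.
MAX_FRAME = 1500            # the misconfigured NIC limit
ETH_HEADER = 14             # Ethernet header; FCS assumed appended by the NIC

ping = 20 + 8 + 32          # IPv4 header + ICMP header + default Windows ping payload
http_bulk = 1500            # full-size IP packet during a bulk HTTP transfer

for name, ip_len in [("ping", ping), ("bulk HTTP", http_bulk)]:
    frame = ETH_HEADER + ip_len
    verdict = "passes" if frame <= MAX_FRAME else "dropped"
    print(f"{name}: {frame}-byte frame {verdict}")
# ping: 74-byte frame passes
# bulk HTTP: 1514-byte frame dropped
```

So a default ping fits comfortably under the limit, while a full-size HTTP data packet produces a 1514-byte frame that exceeds it.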
Looking at this post, people agree that 1518 is the max total frame size. Why didn't I need to change this to 1518 instead of 1514 bytes? And why is the default frame size 1500 if that's the max size of the Ethernet payload, not the max frame size?
1518 is the maximum frame size for "old school" 802.3 Ethernet. If the frame check sequence (FCS) is offloaded to the NIC, then 1514 is the maximum frame size, since the 4-byte FCS will be appended by the NIC.
1500 is the IP maximum transmission unit (MTU) for Ethernet, since 1500 bytes of payload are available in a 1518-byte Ethernet frame. Setting the NIC's maximum frame size to 1500 bytes therefore leaves room for only a 1482-byte IP MTU (or 1486 bytes if the FCS doesn't count against the limit), so full-size 1500-byte IP packets get dropped.
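The numbers above can be laid out as a short sketch. The constants are the standard 802.3 field sizes; the helper function is just for illustration, not any real API:

```python
# Ethernet frame-size arithmetic for the values discussed above.
ETH_HEADER = 14   # dst MAC (6) + src MAC (6) + EtherType (2)
FCS = 4           # frame check sequence, appended by the NIC when offloaded
IP_MTU = 1500     # standard Ethernet payload size / IP MTU

max_frame_classic = ETH_HEADER + IP_MTU + FCS   # 1518: "old school" 802.3 max
max_frame_offload = ETH_HEADER + IP_MTU         # 1514: FCS added later by the NIC

def mtu_for_max_frame(max_frame, fcs_offloaded=True):
    """IP payload available when the NIC caps frames at max_frame bytes."""
    overhead = ETH_HEADER if fcs_offloaded else ETH_HEADER + FCS
    return max_frame - overhead

print(max_frame_classic)               # 1518
print(max_frame_offload)               # 1514
print(mtu_for_max_frame(1514))         # 1500: the working setting
print(mtu_for_max_frame(1500))         # 1486: FCS offloaded
print(mtu_for_max_frame(1500, False))  # 1482: FCS counted in the limit
```

Either way, a 1500-byte max frame size leaves less than 1500 bytes for the IP packet, which is exactly the mismatch that breaks full-size traffic.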
It's unclear to me why anybody would have changed the NIC's maximum frame size to 1500 bytes. I suspect somebody confused maximum frame size with MTU; nobody would set it that way on purpose.