I have a VMware VM on an ESX host that uses LVM partitions. I've configured kdump with a very basic configuration: an ext filesystem target pointing at /dev/mapper/logical-volume-name and a path of /data/crash.
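For reference, a minimal /etc/kdump.conf along those lines would look roughly like this (the logical volume name is a placeholder, and ext4 is assumed as the actual filesystem type):

```
# /etc/kdump.conf -- minimal sketch; device name and filesystem type are placeholders
ext4 /dev/mapper/logical-volume-name
path /data/crash
```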
When I force a system crash, it boots into the kdump kernel, detects and activates the logical volumes, and reports that it is saving the memory dump, taking about 30 seconds to write roughly 2.5 GB, which seems normal. The problem is that when the machine reboots, it comes back up with no network connectivity (it can't get out, and I can't get in over the network), even though all network services are running; a simple reboot (without a crash) fixes this. There is a second issue as well: the vmcore memory dump that kdump reported saving was not actually saved. I've tested the same setup on a VirtualBox VM running the CentOS equivalent of the RHEL release on the production server that is having these kdump issues, and there everything works.
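For what it's worth, the usual way to force a crash like this for kdump testing is the magic SysRq trigger; I'm assuming something equivalent is being used here:

```
# Enable the magic SysRq key and trigger an immediate kernel panic to test kdump.
# WARNING: this crashes the machine on the spot.
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
```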
Any ideas or things I should look into?
I spoke to one of the kdump developers and confirmed that this is a known bug. At this time I cannot make any changes to production, but I strongly suspect that either the kernel needs to be upgraded and/or something between the VM and the VM host needs to be looked into.
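When a change window does open up, the obvious first step is to compare the running kernel and kexec-tools versions against the bug report (package names assume a RHEL-style system):

```
# Check the running kernel and kexec-tools versions to compare against the known-bug errata
uname -r
rpm -q kexec-tools kernel
```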