For roughly the last six months, and for about a year before that (with a six-month hiatus in between), one of my servers has had a consistently high load average:
13:37:34 up 192 days, 5:44, 2 users, load average: 2.00, 2.01, 2.00
Per another answer, I checked the output of ps:
$ ps -eo stat,pid,user,command | egrep "^STAT|^D|^R"
STAT PID USER COMMAND
D< 3043 root /sbin/modprobe -Q pci:v00008086d0000293Esv000015D9sd0000D780bc04sc03i00
D< 3150 root /sbin/modprobe -Qba pnp:dPNP0401
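Both processes are in uninterruptible sleep (D state), which counts toward the load average even though they use no CPU. If it helps, something along these lines should also show the kernel wait channel each one is blocked in (just a sketch; the wchan column width is arbitrary):
$ ps -eo stat,pid,wchan:30,command | egrep "^STAT|^D"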
Checking the config & loaded modules:
$ modprobe -c | grep "pnp:dPNP0401"
alias pnp:dPNP0401* parport_pc
$ sudo modprobe -l | grep parport_pc
/lib/modules/2.6.24-29-server/kernel/drivers/parport/parport_pc.ko
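To check whether the module ever actually finished loading, and whether the kernel logged anything about it, something like this should do (a sketch; the grep patterns are my guesses):
$ lsmod | grep parport
$ dmesg | grep -i parport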
So it appears to be a parallel port rule, but I can't think of what might be connected, or why. Physical access to the server is about a two-hour drive away.
Operating system is Ubuntu 8.04.4.
I can't see anything obvious anywhere in /etc/, but I may not know what I'm looking for.
Any clues as to what might be causing this, and where this modprobe rule may have come from?
Check your udev rules to see if that PCI string shows up. In addition, have a look at your PCI bus to see if the device appears there; you may need something like "lspci -vvv" piped through grep to find it and start backtracking.
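For example (a sketch; the 8086:293e vendor/device ID is taken from the modalias in your ps output, and the udev path is the usual Ubuntu location, which may differ on your system):
$ grep -ri "pnp0401\|parport" /etc/udev/rules.d/
$ lspci -nn | grep -i 293e
$ lspci -vvv -d 8086:293e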
If you're feeling very adventurous, try running the modprobe command in question under strace and see where it's hanging; that may or may not give you some additional clues.
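Something along these lines (a sketch; the flags and output path are only suggestions) will capture the trace to a file you can inspect afterwards:
$ sudo strace -f -tt -o /tmp/modprobe.trace /sbin/modprobe -Qba pnp:dPNP0401
$ less /tmp/modprobe.trace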
Lastly, when was the last time you patched the system?