I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like
ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23
Yes, my system can't consistently run an 'ls' command. :(
I note several errors in my dmesg output:
# dmesg | tail
[2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null)
[2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000]
[4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null)
[4982666.631444] VFS: file-max limit 1231582 reached
[4982666.764240] VFS: file-max limit 1231582 reached
[4982767.360574] VFS: file-max limit 1231582 reached
[4982901.904628] VFS: file-max limit 1231582 reached
[4982964.930556] VFS: file-max limit 1231582 reached
[4982966.352170] VFS: file-max limit 1231582 reached
[4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000]
Obviously, the file-max errors look suspicious, being clustered together and recent.
# cat /proc/sys/fs/file-max
1231582
# cat /proc/sys/fs/file-nr
1231712 0 1231582
That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network.
# lsof | wc
16046 148253 1882901
# ps -ef | wc
574 6104 44260
I saw some documentation saying:
file-max & file-nr:
The kernel allocates file handles dynamically, but as yet it doesn't free them again.
The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit.
Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles.
Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached".
My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open.
Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was back.)
Thanks in advance for any help.
And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything: all interactive shells (except for the one ssh session I left open to finish shutting things down), httpd, even the NFS service. Basically everything in the process table that wasn't a kernel process, and there was still an appalling number of files apparently left open.
After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause.
I'm still looking for ideas, since I definitely expect it to happen again.
And another update, the next day:
I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further.
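(If anyone wants to watch the same counter, a tiny C sketch like the one below is enough. It's just an illustration, not the exact tooling I used: it samples the first field of /proc/sys/fs/file-nr -- the allocated-handle count -- once a minute and prints how much it changed.)

/* file-nr-watch.c -- sample the allocated-handle count from
 * /proc/sys/fs/file-nr and print how fast it is growing.
 * Build: gcc -o file-nr-watch file-nr-watch.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static long read_allocated(void)
{
    long allocated, unused, max;
    FILE *f = fopen("/proc/sys/fs/file-nr", "r");

    if (f == NULL || fscanf(f, "%ld %ld %ld", &allocated, &unused, &max) != 3) {
        perror("/proc/sys/fs/file-nr");
        exit(1);
    }
    fclose(f);
    return allocated;
}

int main(void)
{
    long prev = read_allocated();

    for (;;) {
        sleep(60);                               /* sample once a minute */
        long cur = read_allocated();
        printf("allocated=%ld (%+ld in the last minute)\n", cur, cur - prev);
        fflush(stdout);
        prev = cur;
    }
}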
If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
After a little more testing, I believe this to be an NFS server bug. When a process on an NFS client places a write lock on a file, the server reserves an open file handle (this may be the wrong terminology -- my apologies to any actual kernel gurus reading this). This would probably be OK if the server released the handle when the lock is released, but it apparently doesn't.
My original problem occurred with rrdtool. rrdtool opens a file for read/write, locks the file for writing, makes its changes, and exits. Each time I run rrdtool, the number of open files on the server increases by one. (Nitpicky detail -- the server actually allocates in chunks of 32, so it's more like "32 runs make 32 open file descriptors", but that's an insignificant detail in the long run)
I wrote a minimal test program to verify this behavior. Indeed, opening the file, locking it, then exiting is sufficient to trigger this. Explicitly releasing the lock before exiting does not help in any way. Opening the file without locking it does not trigger the problem.
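For reference, the test boils down to something like the sketch below. This is a reconstruction of the idea rather than the exact program; the path is a placeholder for a file on the NFS mount, and the lock is an ordinary fcntl() write lock.

/* nfs-lock-test.c -- open a file on the NFS mount, take a write lock,
 * then exit.  On the server, the allocated-handle count in
 * /proc/sys/fs/file-nr goes up and never comes back down.
 * Build: gcc -o nfs-lock-test nfs-lock-test.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct flock fl = {
        .l_type   = F_WRLCK,    /* exclusive write lock          */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,          /* 0 = lock the whole file       */
    };

    /* placeholder path -- point this at a file on the NFS mount */
    int fd = open("/mnt/nfs/testfile", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    if (fcntl(fd, F_SETLKW, &fl) < 0) {   /* take the write lock */
        perror("fcntl(F_SETLKW)");
        return 1;
    }

    /* Explicitly unlocking (F_UNLCK) before exiting made no difference
     * in my tests; simply returning here is enough to leak the handle
     * on the server. */
    return 0;
}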
So far, I still have not found a way to release the resources on the server, other than rebooting. Restarting the NFS service is insufficient, as noted above.
I still haven't tested NFS version 3. Perhaps it works better.
Anyway, thanks for trying. Hopefully my experiences can be of some help to someone else in the future.
One last update: J. Bruce Fields, one of the NFSv4 developers, has confirmed that this is a bug, and says it's limited to NFSv4. Apparently I was the first to report it. He's hoping to have a patch soon.
Remember, kids: When you find a bug, find the proper place to report it, and there's a good chance it'll get fixed. Hurray for open source. :-)
See Use of NFS considered harmful, specifically point III.B. When your NFS client goes unavailable, its locks are not released, so the open file count does not decrease. If you kick the NFS server (or more precisely, the lock daemon), you'll see the open file count go down.
I think you can safely attribute the issue to whatever the NFS client is doing, which you haven't stated in the question above as far as I can see.
The "error while loading shared libraries" failures happen because you've hit the maximum number of files that can be open: when you run ls, the dynamic loader tries to open a library that ls is linked against, and that open fails because the system is at its file-max limit, hence the error.
Something your client is doing is opening 900 files an hour. It's not a Mac running Spotlight on the NFS export, is it?
I am having the same issue. We installed an HA server cluster which we use as central network storage. On this DRBD cluster, an NFS4 server is running.
Every hour we generate thousands of small data files and store them on this NFS4 server.
From the moment we start the NFS4 server, it takes about 30 days until fs.file-nr reaches the limit of 1.2 million files, and then, within 24 hours, the whole machine crashes.
Just now, two hours after the HA backup machine took over after the crash, it shows
fs.file-nr = 19552 0 488906
increasing at a rate of about +3000 every 20 minutes.
The HA backup machine was on standby for 30 days and showed 580 0 488906 the whole time. The only thing that changed was that the NFS4 server was started.
I would be very happy if there were a solution for this.
I am running MDV 2010 with a custom-compiled x64 2.6.37-rc3 kernel.