I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files.
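For context, the relevant knobs look roughly like this (file locations and the explicit open_files_limit line are illustrative; the essential points are innodb_file_per_table and the 8K descriptor limit):

    # my.cnf excerpt (sketch of the setup described above)
    [mysqld]
    innodb_file_per_table = 1
    open_files_limit      = 8192

    # shell limit in effect for the mysqld user
    ulimit -n 8192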
The following command:
pfiles `fuser -c / 2>/dev/null`
shows a single mysqld process holding 485 file/device descriptors (almost all of them regular files), so I don't know how reliable the tuning script's count is, but either way it is far below 8K, and the listing finds no other process anywhere near its limit. The global total of descriptors in use is around 1K.
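In case it helps, this is roughly how I tallied descriptors per process (a quick sketch that relies on pfiles printing one S_IF* mode line per open descriptor):

    # count open descriptors for each process with a file open under /
    for pid in `fuser -c / 2>/dev/null`; do
      echo "$pid: `pfiles $pid 2>/dev/null | grep -c S_IF` descriptors"
    done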
So what could be causing mysqld to constantly stream the following warnings?
[date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
[date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files
Everything actually appears to be operating fine, but the warnings constantly flood the admin console and start immediately on a fresh boot. The problem is not just reproducible: it always comes from mysqld and always concerns the hosts files, whose permissions are the default -rw-r--r-- 1 root root. I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it while still letting genuine mysqld warnings/errors reach the console.
EDIT: not only is the actual file descriptor count well within sane limits, the issue also persists (appearing immediately) even with the file limit raised to 65535, and still only for hosts.allow/deny.
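For what it's worth, this is how I double-checked the limit the running mysqld actually sees (plimit is the Solaris proc tool; the pgrep pattern may need adjusting on other systems):

    # per-process limit as seen by the kernel
    plimit `pgrep -x mysqld` | grep nofiles

    # limit as seen by MySQL itself
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit';"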
Given that Nexenta was OpenSolaris-based at the time, you may be running into this issue. It does seem that file descriptor limits aren't applied consistently throughout the system.
Are you actually using hosts.allow/deny in your setup?
What does the output of lsof look like?
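Something along these lines (assuming lsof is available in the Ubuntu userland and there is a single mysqld process) would show both the total count and a breakdown by descriptor type:

    # total descriptors held by mysqld, then a per-type breakdown (column 5 is TYPE)
    lsof -p `pgrep -x mysqld` | wc -l
    lsof -p `pgrep -x mysqld` | awk '{print $5}' | sort | uniq -c | sort -rn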