For testing, I've changed my global open file limit to 3000:
# sysctl -w fs.file-max=3000
fs.file-max = 3000
# cat /proc/sys/fs/file-nr
2016 0 3000
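As a side note, if I read proc(5) right, those three fields are the number of allocated file handles, the number of allocated-but-unused handles, and the fs.file-max ceiling, so a quick sketch to watch usage from any POSIX shell would be:
read alloc free max < /proc/sys/fs/file-nr
echo "handles in use: $((alloc - free)) of $max"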
I've created 1000 files:
i=1; while [ "$i" -le 1000 ]; do : >> "$i"; i=$(($i + 1)); done
I've held them open:
i=1; while [ "$i" -le 1000 ]; do less "$i" & ; i=$(($i + 1)); done
I've seen the chaos I just created:
ksh: /bin/less: cannot execute [Too many open files in system]
I know I hit the limit:
# cat /proc/sys/fs/file-nr
3008 0 3000
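(To back out of the experiment afterwards, assuming the backgrounded less jobs are the only background jobs in the launching shell, something like this should do it:)
kill $(jobs -p)    # signal every background job of this shell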
If I now raise the open file limit (so I can ssh in from another console) and check one of the less processes I spawned after setting fs.file-max to 3000, I see:
# grep -e 'Limit' -e 'Max open files' /proc/28282/limits
Limit                     Soft Limit           Hard Limit           Units
Max open files            1024                 16384                files
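Those per-process numbers are the shell's rlimits; a minimal way to double-check them from the shell itself (ulimit is a ksh/bash built-in) would be:
ulimit -Sn   # soft limit on open files (1024 here)
ulimit -Hn   # hard limit on open files (16384 here)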
The "Hard limit" is still set high, though no mention of 3000. So we hit the system limit, not the per process limit.
Why would'nt a newly created process be instructed to inherit 3000 vs 16384?
I logg'ed in in a new terminal with a new shell, so why wouldn't my shell be told 3000 and pass that down to less?
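My current understanding, as a sketch assuming standard Unix semantics: rlimits are copied from the parent at fork()/exec(), so a child only ever sees what its parent shell had, and fs.file-max never enters that chain:
ulimit -Sn 2000       # change this shell's soft limit (2000 is an arbitrary example)
sh -c 'ulimit -n'     # the child prints 2000: inherited from the shell, not from fs.file-max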
This is a 2.6.32 kernel.
Per this link, Ulimit file descriptor limits not being applied for particular process, I quote:
"/etc/security/limits.conf is part of pam_limits and so the limits that are set in this file is read by pam_limits module during login sessions. The login session can be by ssh or through terminal"
/etc/sysctl.conf is a system wide global configuration, we cannot set user specific configuration here. It sets the maximum amount of resource that can be used by all users/processes put together"
/etc/sysctl.conf is not the right place for setting this.
I would need to fully investigate /etc/security/limits.conf and "pam_limits".
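For when I get there: a hypothetical /etc/security/limits.conf entry (the username and values are examples, not from my system) that pam_limits would apply at login:
# <domain>   <type>   <item>    <value>
myuser       soft     nofile    3000
myuser       hard     nofile    3000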