I have a Java application that is failing with:
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[na:1.8.0-internal]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:477) ~[na:1.8.0-internal]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:287) ~[na:1.8.0-internal]
at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:455) ~[tomcat-embed-core-8.5.27.jar!/:8.5.27]
at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
My limit for open files is 30K:
$ ulimit -a
...
open files (-n) 30480
...
I'm now wondering what the proper check is.
lsof | grep 123 | wc -l (123 is the PID of the Java application) returns 45633.
How can it be over 30K?
On the other hand, lsof -p 123 | wc -l returns only 771, which is not over the limit.
Can someone help me understand what's going on here? What am I missing? Is the limit a sum over all of the user's processes (that's what I'd expect)?
This is Red Hat 7, lsof revision 4.87.
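A note on the counting method, since it matters here: RLIMIT_NOFILE is enforced per process, not summed across all of a user's processes, and grepping the full lsof output inflates the number, because the pattern 123 also matches thread IDs, sizes, and inode numbers anywhere on a line. A more reliable per-process count is the number of entries in /proc/<pid>/fd; the sketch below uses $$ (the current shell's own PID) as a stand-in for the Java PID:

```shell
# Count open file descriptors for one process by listing /proc/<pid>/fd.
# $$ (this shell's own PID) stands in here for the Java process's PID 123.
fd_count=$(ls /proc/$$/fd | wc -l)
echo "PID $$ has $fd_count open file descriptors"
```

Unlike filtering lsof output, this counts exactly one directory entry per open descriptor, which is what the kernel charges against the limit.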
Edit: OK, I believe I know where the discrepancy comes from: lsof -p ... shows open files for the parent process only, while there are subprocesses.
Thanks to a comment from @schily I found that the limit per subprocess is only 4K (and not 30K).
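One guess about where a lower 4K limit can come from on Red Hat 7: ulimit -a only reports the interactive shell's limits, while a process started as a systemd service takes its limits from the unit file, not from the shell or /etc/security/limits.conf. If that is the case here, a unit-file override along these lines (the service name is hypothetical) would raise it:

```ini
# /etc/systemd/system/myapp.service (hypothetical unit name)
[Service]
LimitNOFILE=30480
```

followed by systemctl daemon-reload and a restart of the service.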
Comments:

… procfs support, you could use the command pfiles with the process ID of the failing process as argument to get the related rlimit, but it seems that you are on Linux. – schily Apr 28 '20 at 10:42

pfiles/rlimit not available for me... – Betlista Apr 28 '20 at 10:57

rlimit is not a command but a UNIX feature since 1979. pfiles needs procfs, and since you are on Linux, you only have something that is remotely similar to procfs from its inventor Roger Faulkner. But I just discovered that Linux has /proc/<pid>/limits. – schily Apr 28 '20 at 11:01

/proc/<pid>/limits helped, I can see there a 4K max; I was not able so far to find (google it) how to increase it for a subprocess... It seems the limits are not applied somehow... – Betlista Apr 28 '20 at 12:33
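Following up on the /proc/<pid>/limits hint: the limit actually in effect for a running process can be inspected there, and on Red Hat 7 it can even be raised for an already-running process with prlimit from util-linux, without a restart. This is a sketch; $$ stands in for the real PID, and 30480 is the ulimit value from the question:

```shell
# Show the open-files limit currently in effect for a process
# ($$, this shell's own PID, stands in for the Java process's PID):
grep 'Max open files' /proc/$$/limits

# Raise the soft and hard limits for PID 123 in place (placeholder PID;
# raising the hard limit requires root; prlimit ships with util-linux):
# prlimit --pid 123 --nofile=30480:30480
```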