An application running a lightly loaded Apache HTTPD 2.0 has occasionally had problems where one (or more?) of the Apache processes took 100% CPU. We currently run HTTPD 2.2, and I think we may have seen this with 2.2 as well, though I'm not certain. In some cases, the CPU usage was so high that it blocked all but console access to the Windows server hosting HTTPD. I have never been able to track down what causes Apache to do this.
The environment is Apache HTTPD directly serving static content, using mod_rewrite but not much other custom configuration. HTTPD talks to Apache Tomcat (5.x) via mod_jk (1.2.25).
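For context, the mod_jk wiring is along these lines; the module path, mount point, worker name, and port are illustrative rather than our exact configuration:

# httpd.conf (illustrative)
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkMount /app/* worker1

# conf/workers.properties (illustrative)
worker.list=worker1
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009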
Has anyone else encountered this and solved it? The workaround we put in place is to limit each Apache HTTPD child process to a maximum number of requests with the following configuration:
MaxRequestsPerChild 1000
where, because the application uses HTTP/1.1 keep-alive connections, each counted "request" is really a connection that can carry many requests, so in practice this works out to something closer to 100,000 requests per child process rather than 1,000.
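For reference, the relevant httpd.conf section ends up looking something like this; the MaxKeepAliveRequests value is illustrative and was not part of our original workaround:

# Recycle each child after this many connections; with keep-alive,
# each connection can carry many individual requests.
MaxRequestsPerChild 1000

# Optionally also cap requests per keep-alive connection, so the
# effective requests-per-child figure stays bounded (illustrative value).
KeepAlive On
MaxKeepAliveRequests 100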
It's most likely that the lock-up is happening in a module rather than in Apache itself. Your setup sounds pretty minimal, so I'd suspect mod_jk as the culprit. If limiting MaxRequestsPerChild fixes the problem, then I'd say that's an acceptable workaround. It's possible that a bug in the module is only triggered after a long time or many requests, and unless you're really keen on tracking this down, making it go away is probably good enough.

If you do want to track it down, the first thing to do is configure CoreDumpDirectory to point to a location that the server user can write to. If you can get the offending process to leave a core file behind, it should help you find the cause of the problem. You can find some hints on doing this in the Apache Debugging Guide.
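A minimal sketch of that setup; the path is illustrative and must be writable by the user httpd runs as:

# Directory must exist and be writable by the httpd user.
CoreDumpDirectory /var/tmp/apache-cores

On Unix-like systems you may also need to raise the core file size limit (ulimit -c unlimited) in the environment that starts httpd before any cores will actually be written.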
When I've seen this, it has been because:
- a hosted app or script is causing the problem, e.g. it has an infinite loop
- the OS has become unstable, due to locking or some other issue, where rebooting temporarily solved the problem

My suggestion:
- Reboot the machine.
- Wait and see if this happens again.
- Restart the server with no extra modules loaded.
- Turn each module back on one by one, observing the CPU usage each time.
Limiting MaxRequestsPerChild will help with memory usage, but it shouldn't affect the CPU in the way you're describing. What's likely happening is that mod_jk is crashing, and since it's an Apache module, it shows up under the httpd process.
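One way to test that theory is to turn up mod_jk's own logging and watch what it reports around the time a child pegs the CPU; the path is illustrative, and "debug" is very verbose:

# mod_jk diagnostics (illustrative path)
JkLogFile logs/mod_jk.log
JkLogLevel debug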
I've actually seen this happen when a configured log directory doesn't exist. I'm not sure why Apache doesn't handle that better, but you may want to make sure that all the log directories exist and that the process can write to them.
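Concretely, check the directories behind directives like these; the paths are illustrative:

# The directories for both of these must exist and be writable
# by the user httpd runs as.
ErrorLog logs/error.log
CustomLog logs/access.log combined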
Install mod_proctitle for Apache, so you can see what each child is actually doing.

RLimitCPU (syntax sketched at the end of this answer) doesn't always help, because not all portions of the Apache code have checks for it.
MaxRequestsPerChild may not help either, as I've seen this with relatively 'fresh' children.
In my case, I suspect it's something to do with the module we're using (mod_perl), perhaps combined with a broken socket connection. We only seem to see this problem with browsers connecting, not with wget or curl (which we use heavily for 'data delivery').
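For reference, the RLimitCPU syntax looks like this; the limits are illustrative, and note that per the Apache docs it constrains processes launched by httpd children (such as CGI scripts) rather than the children themselves:

# Soft limit of 60 CPU-seconds, hard limit of 120, applied to
# processes forked by httpd children (illustrative values).
RLimitCPU 60 120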
I was also facing the same issue until I found the root cause of the problem...

Problem: my website runs WordPress on XAMPP on a Windows cloud server, and CPU usage was hitting 100%.

Solution: I checked my Apache access log (access.log) and found that someone was continuously trying to access xmlrpc.php, roughly 10 requests every second, which kept Apache busy processing the incoming requests. I'd suggest blocking access to xmlrpc.php from your .htaccess file; I also had my hosting provider block the offending IPs. As a result, CPU usage is now 3-5% at most.
Note: this solution is for websites running WordPress.
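A minimal .htaccess sketch of the block, using Apache 2.2-style access control:

# Block all access to xmlrpc.php (Apache 2.2-style syntax; on 2.4,
# replace the Order/Deny lines with "Require all denied").
<Files xmlrpc.php>
    Order Deny,Allow
    Deny from all
</Files>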
http://devslounge.com/htaccess/can-cause-apache-httpd-use-100-cpu-indefinitely/
https://wordpress.org/support/topic/xmlrpcphp-attack-on-wordpress-38