I rent a Linux VM from LayeredTech that runs as a guest on the Xen hypervisor. The OS is CentOS 5.3, running Apache 2. Nearly every day my server behaves in a way that leads me to believe I am being DDoS'd, but I cannot find any evidence of it. I am running Apache with mod_security, MySQL 5.x, and PHP 5.x, all up to date in terms of version.
The VM is relatively low-powered, but when this symptom is not happening it handles my web traffic load just fine.
My web server will become unresponsive, and upon logging in I find hundreds of httpd processes. All of my virtual hosts are chrooted and use suEXEC, yet all of the spawned processes are running as the "apache" user.
There is no malicious website running on my box, and the server shows no evidence of being compromised.
When the problem occurs my load averages are over 250. All I need to do is forcefully restart httpd, and everything is fine for anywhere between 24 and 72 hours.
I have looked in every log file I can think of, and I cannot find any evidence of a DDoS, any "digg effect" traffic, nothing. As soon as I restart httpd, whatever was causing it to spawn so many processes stops. If it were due to a high-traffic website, a front-page link on a huge site, or a DDoS, I would imagine the requests would never stop and would hang my server up again right after restarting httpd.
I have also used apachetop and other real-time monitoring tools, but I cannot usually predict when this will happen, and by the time it has, the server is far too overloaded to do anything except kill httpd.
I am at a loss as to how to prevent this from happening, and I do not know where else to look for the cause. Any ideas would be appreciated!
Additional Information:
It has been about two years since I built the server. I configured these parameters based on some things I read and never had a problem, but I am not sure whether these settings could be a contributor:
# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
</IfModule>
# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>
You sure there's nothing in the logs?
You could configure Apache's MPM worker to restrict the number of processes it starts/manages.
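A sketch of what that might look like, assuming the box is actually using the prefork MPM (which the hundreds of separate processes suggest): the numbers below are purely illustrative, not a recommendation, and the right ceiling depends on how much RAM each httpd child uses on your VM.
<IfModule prefork.c>
# illustrative lower caps; size ServerLimit/MaxClients so that
# (MaxClients x per-child memory) comfortably fits in the VM's RAM
StartServers 5
MinSpareServers 5
MaxSpareServers 10
ServerLimit 64
MaxClients 64
MaxRequestsPerChild 4000
</IfModule>
With a hard cap like this the box stays responsive enough to log in and investigate, instead of drowning under 250+ load, even if requests do start queueing.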
There is also a little-known style of DDoS attack that holds connections open (abusing HTTP's Keep-Alive mechanism) instead of closing them when finished. This can cause hundreds of additional processes to start, because Apache believes it is still serving requests on those held-open connections and keeps spawning new children to handle new ones.
When you restart Apache, it kills off those rogue processes and hence the connections, so how long the relief lasts depends on how long it takes the attacker to realise it has been disconnected and try again.
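If held-open connections turn out to be the cause, one mitigation you could try is shortening how long Apache will wait on an idle or slow connection. This is only a sketch and the values are guesses to be tuned for your traffic:
# drop slow or idle clients quickly so their slots are freed
Timeout 30
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3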
You could also enable the server-status and server-info handlers and watch them when it starts getting busy to identify what the server's doing.
http://httpd.apache.org/docs/2.2/mod/mod_status.html
http://httpd.apache.org/docs/2.2/mod/mod_info.html
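A minimal configuration sketch for enabling both handlers, assuming mod_status and mod_info are already loaded (they are in the stock CentOS httpd package); the Allow address is a placeholder for your own management IP:
ExtendedStatus On
<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
# placeholder: replace with the IP you administer from
Allow from 203.0.113.10
</Location>
<Location /server-info>
SetHandler server-info
Order deny,allow
Deny from all
Allow from 203.0.113.10
</Location>
Then, while the load is climbing, http://yourserver/server-status should show what each of those hundreds of children is busy with (client IP, vhost, URL, and how long the request has been running), which should tell you whether it is real traffic, a stuck script, or idle held-open connections.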