I'm facing an issue, and after four days of looking around I've decided to ask for help here; after all, many heads think better than one.
I have an Ubuntu 14.04 server set up with NGINX, HHVM, PHP5-FPM (as a backup), Percona MySQL, and Memcached (which will be replaced by Redis). I have fastcgi_cache configured for WordPress and object caching done through Memcached. All cool and dandy in theory, but not in practice.
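For anyone unfamiliar with the general idea, a fastcgi_cache setup for WordPress looks roughly like the sketch below. The zone name, cache path, sizes, and socket path here are illustrative placeholders, not my exact values; my real config files are linked at the end of this post.

```nginx
# Illustrative sketch only -- zone name, path, sizes, and socket are placeholders.
fastcgi_cache_path /var/run/nginx-cache levels=1:2
                   keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/hhvm/hhvm.sock;  # HHVM, with PHP5-FPM as backup
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 60m;  # cache successful responses for an hour
    }
}
```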
It's a RamNode OpenVZ SSD VPS with 2GB of RAM and two Intel Xeon E5 cores allocated to it.
Running Blitz.io against it, the server gets absolutely murdered by the two NGINX worker processes, each using 100% CPU according to top and htop. I usually run the following pattern:
--pattern 999-1000:60 https://www.geeksune.com/blog/hello-world/
That makes the CPU go through the roof, and according to Blitz.io this is the result:
135 HITS WITH 57,734 ERRORS & 234 TIMEOUTS
Obviously that isn't good. RAM usage stays under 250MB the whole time, and it seems that all those requests from Blitz.io are hitting the cache, as seen here:
54.232.204.19 - HIT [23/Nov/2014:19:06:32 -0200] "GET / HTTP/1.1" 200 7632 "-" "blitz.io; [email protected]"
Notice the HIT near the start of the line; I defined a custom log format that includes $upstream_cache_status.
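For reference, a log format like the one producing that line can be defined roughly as below. The format name is a placeholder; $upstream_cache_status is the standard NGINX variable that expands to MISS, HIT, EXPIRED, BYPASS, and so on.

```nginx
# Sketch of a log format with the cache status in the second field;
# "cache_log" is an arbitrary name, the other variables are standard.
log_format cache_log '$remote_addr - $upstream_cache_status [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     '"$http_referer" "$http_user_agent"';

access_log /var/log/nginx/access.log cache_log;
```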
A similar setup on the same machine handles Blitz.io just fine, so something is definitely wrong with my NGINX setup, and it seems related to fastcgi_cache. I get the same results every time, even when using just PHP5-FPM with Zend.
Does anyone have a clue about what is happening? My configuration files look like this:
- /etc/nginx/nginx.conf - http://paste.ubuntu.com/9236266/
- /etc/nginx/sites-available/geeksune.com - http://paste.ubuntu.com/9236282/
- /etc/nginx/conf.d/includes/ssl.inc - http://paste.ubuntu.com/9236298/
- /etc/nginx/conf.d/includes/security.inc - http://paste.ubuntu.com/9236321/
- /etc/nginx/conf.d/includes/caching.inc - http://paste.ubuntu.com/9236353/
- /etc/nginx/conf.d/includes/locations.inc - http://paste.ubuntu.com/9236366/
Thanks in advance.
:)