We have a setup that looks like this:
nginx->haproxy->app servers
We are terminating SSL with nginx, and it sits in front of everything. During our peak load times, we see roughly a 2x performance hit: requests that would normally take 400 ms are taking 800 ms, and it's slower for clients across the entire Internet, not just from one location.
The problem is, I have absolutely no sign of any slowdown in my logs and graphs. New Relic shows all the app servers responding normally with no change in speed. Nginx and haproxy show nothing in their logs about requests slowing down, yet we are slowing down. Despite nginx showing that a particular request I tracked took 17 ms through the entire stack, it took 1.5 seconds to curl during peak load last week.
So, that leaves me with two possibilities:

1) Network issues - According to graphs from the router, I have more than enough pipe left: I'm only using 400 Mbps of the 1 Gbps port, and there are no errors in ifconfig or on the switch or routers. However, SoftLayer manages this gear, so I can't verify this personally. It could also be something on our side at the kernel level, I suppose, so I'm posting my sysctl values below.
2) nginx is holding up the request and either isn't logging it, or I'm not logging the right thing. Is it possible that requests are being queued up because the workers are busier, and aren't getting acted on as quickly? If that is in fact happening, what can I log in nginx other than $request_time, since that shows no slowdown at all? And if requests really are taking longer than $request_time indicates, how do I go about tweaking the config to speed things up?
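For reference, $request_time isn't the only timing nginx can log. A sketch of an extended log_format using standard nginx timing variables (the format name and log path are placeholders; $upstream_header_time needs nginx 1.7.10+ and $upstream_connect_time needs 1.11.4+) that would split upstream time out from total time:

# goes in the http block
# rt  = $request_time             total time nginx spent on the request
# uct = $upstream_connect_time    time to establish the connection to haproxy
# uht = $upstream_header_time     time until the first byte of the upstream response header
# urt = $upstream_response_time   full upstream (haproxy + app) response time
log_format timing '$remote_addr [$time_local] "$request" $status '
                  'rt=$request_time uct=$upstream_connect_time '
                  'uht=$upstream_header_time urt=$upstream_response_time';

access_log /var/log/nginx/timing.log timing;

If rt climbs during peak load while urt stays flat, the extra time is being spent in nginx (or in front of it) rather than in haproxy or the app servers.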
Sysctl
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 3
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 16777216 16777216 16777216
net.ipv4.tcp_wmem = 16777216 16777216 16777216
net.ipv4.tcp_max_tw_buckets = 16777216
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_syn_backlog = 262144
net.core.somaxconn = 262144
net.core.netdev_max_backlog = 15000
net.core.netdev_budget = 8196
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_nonlocal_bind = 1
Applicable nginx configuration
user www-data;
worker_processes 20;
worker_rlimit_nofile 500000;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
    use epoll;
    multi_accept off;
    accept_mutex off;
    worker_connections 65536;
}
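One caveat worth keeping in mind: $request_time only starts once nginx has read the first bytes of the request, so time a connection spends waiting in the kernel's accept queue happens before that and isn't counted (and, as far as I can tell, neither is the TLS handshake on a new connection). Also, nginx's listen backlog defaults to 511 on Linux no matter how high net.core.somaxconn is set. A sketch of raising it (port and value here are illustrative):

server {
    # backlog= is passed to listen(); the nginx default on Linux is 511,
    # so a high net.core.somaxconn alone doesn't widen the accept queue
    listen 443 ssl backlog=16384;

    # ... existing ssl_certificate / proxy_pass configuration ...
}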
You can add queue time to your New Relic graphs: in the nginx configuration at your SSL terminator, add this to the server block:
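Something along these lines (a sketch; the exact value format the agent accepts may vary by agent version):

# forward the time nginx started handling the request; the "000" suffix is explained below
proxy_set_header X-Request-Start "t=${msec}000";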
The X-Request-Start header will then contain the time in microseconds, and when the request reaches the New Relic agent, it will update the graphs. Make sure the clocks are well synced on both the balancer and the backend servers.
P.S. The 000 trick is needed because $msec in nginx has millisecond resolution and the New Relic agent expects the value in microseconds.
If you take the highest number of concurrent connections during peak times and multiply it by 1.5, can you be sure that the connection pools of your load balancer and app servers are not exhausted? Do you monitor app-server and haproxy response times? Can you be sure that your app servers are not the issue?
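One place this kind of exhaustion tends to show up in a stack like this is the nginx->haproxy leg, where opening a new upstream connection per request burns through ports and pool slots. A sketch of enabling upstream keepalive on the nginx side (the upstream name, address, and count are illustrative; keepalive to upstreams needs nginx 1.1.4+):

upstream haproxy_backend {      # illustrative name; point it at the existing haproxy address
    server 127.0.0.1:8080;
    keepalive 64;               # idle upstream connections cached per worker for reuse
}

# in the existing server/location that proxies to haproxy:
proxy_http_version 1.1;            # upstream keepalive requires HTTP/1.1
proxy_set_header Connection "";    # clear Connection so the upstream connection can be reused
proxy_pass http://haproxy_backend;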