I'm running an nginx instance that's acting as an SSL termination point and basic load balancer on an EC2 virtual server, and seeing very poor performance for the SSL pages served from the upstream source.
The EC2 instance is a c1.medium, which ought to sustain reasonable throughput, but I can't get it above 60 transactions per second.
Serving the nginx status page directly off the server, I get more than ten times that throughput, so it isn't purely SSL overhead; and if I reconfigure it to serve the same upstream content without SSL, it also does much better, so it isn't upstream overhead either. The CPU is maxed out the whole time it's serving its 60 transactions per second.
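One way to sanity-check whether raw handshake cost explains the ceiling is to benchmark RSA signing speed on the instance itself (this assumes the OpenSSL command-line tool is installed, and that the certificate uses a 2048-bit RSA key; adjust to match):

```shell
# Raw RSA benchmark: a full TLS handshake costs roughly one private-key
# operation, so the private-key "sign/s" figure is an upper bound on new
# handshakes per second per core. (-seconds 1 keeps the run short.)
openssl speed -seconds 1 rsa2048
```

If the sign/s number is in the same ballpark as the observed transaction rate, the box is simply handshake-bound and session reuse (or cheaper termination) is where the win is.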
I'm testing with ab, using the parameters "-n 1000 -c 50 -k": 1000 requests, concurrency of 50, keepalives enabled so that SSL session caching ought to kick in.
Here's an abbreviated config:
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
}

http {
    sendfile off;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/json;

    log_format standard '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
    access_log /var/log/nginx/access.log standard;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_types text/plain application/json;
    gzip_comp_level 1;

    upstream test {
        server 10.226.31.66;
    }

    server {
        listen 443;
        ssl on;
        ssl_certificate /etc/nginx/certs/both.crt;
        ssl_certificate_key /etc/nginx/certs/https.key;
        ssl_session_timeout 10m;
        ssl_session_cache shared:SSL:10m;
        ssl_protocols SSLv2 SSLv3 TLSv1;
        ssl_ciphers ALL:!kEDH:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        ssl_prefer_server_ciphers on;

        location /v1/system/nginx {
            stub_status on;
            allow all;
        }

        location /nginx_status {
            stub_status on;
            allow all;
        }

        location / {
            proxy_pass http://test;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_buffering on;
            proxy_connect_timeout 15;
            proxy_intercept_errors on;
        }
    }
}
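Since the expensive part is the full handshake, it's also worth confirming that the shared session cache is actually being hit. One quick check against a live server (the hostname below is a placeholder) is OpenSSL's reconnect test, which does one full handshake and then five reconnects that should each report "Reused" if caching works:

```shell
# s_client -reconnect: connect, drop the connection, then reconnect 5 times
# presenting the same session ID. Lines starting "Reused" mean the session
# cache answered; all-"New" lines mean every connection paid for a full handshake.
echo | openssl s_client -connect my-nginx-host:443 -reconnect 2>/dev/null | grep -E "^(New|Reused)"
```

If ab shows all-new sessions despite -k, that alone would explain a CPU-bound 60 tx/s.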
I've found the same: HTTP runs at 500 hits/sec (bandwidth limited) while HTTPS barely manages 10 (CPU limited). I don't have a direct fix for that, but as a workaround with lots of benefits and few drawbacks, have you considered terminating SSL at Amazon's Elastic Load Balancers? They seem much faster, for only a few pennies of extra expense; cheaper, at least, than buying more instances to make up the CPU shortfall.
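For reference, a classic ELB that terminates HTTPS and forwards plain HTTP to the instances can be created roughly like this (the load balancer name, availability zone, and certificate ARN are placeholders; this assumes the AWS CLI is configured and a server certificate has already been uploaded to IAM):

```shell
# Terminate SSL at the ELB; the backend instances then serve plain HTTP on port 80,
# taking the handshake cost off the nginx box entirely.
aws elb create-load-balancer \
  --load-balancer-name ssl-frontend \
  --availability-zones us-east-1a \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-cert"
```

You'd then register the EC2 instance(s) with the load balancer and point DNS at its hostname; nginx keeps doing the routing and gzip work, just without the crypto.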