We have Nginx in front of our system and proxy to Apache at the back. We use SSL/TLS for the connection.
Questions:
- Is Nginx the best option for terminating SSL/TLS connections in terms of performance and SSL handshake time?
- Am I doing all the necessary performance tweaks? Can I still improve my configuration?
Here's my config:
ssl_certificate /path/ssl.crt;
ssl_certificate_key /path/ssl.key;
ssl_dhparam /path/dh.pem;
ssl_buffer_size 4k;
ssl_session_timeout 4h;
ssl_session_cache shared:SSL:20m;
ssl_session_tickets on;
ssl_trusted_certificate /path/trust.crt;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
I used the Mozilla SSL Configuration Generator to generate the cipher list below.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;
The cipher list is meant to accommodate most browsers. I also have Strict-Transport-Security set.
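For reference, the HSTS header is just a single add_header directive in Nginx; a minimal sketch, assuming a one-year max-age (the value and includeSubDomains flag are examples, not my exact settings):

# Send HSTS on every response, including error responses (the "always" parameter).
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;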
Our system runs on Amazon AWS with CloudFront. Currently SSL Labs takes around 130 seconds to run a test, and Pingdom shows the SSL connection for a single request takes at least 220 ms.
Thanks!
I had some performance problems with Nginx and Incapsula, and I found that the problem was related to the cipher used. Incapsula was connecting with DHE-RSA-AES128-SHA, which gave low performance and a high load on the server. I use the "Intermediate" list from https://wiki.mozilla.org/Security/Server_Side_TLS and ran some stress tests with the ciphers, getting these results for the ones that worked:
As you can see, the DHE-* ciphers performed badly, while AES128-SHA worked fine. So if you think you have performance problems, build a stress test with a few hundred or a few thousand connections and configure Nginx to use just one cipher at a time, as in the sketch below. You should then be able to see whether any cipher is performing badly and disable it (don't forget to test the final setup against your clients, or use the SSL Labs test, to make sure you are not blocking any of your users).
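A minimal sketch of that kind of test configuration, assuming you are benchmarking DHE-RSA-AES128-SHA in isolation (swap in whichever cipher you want to measure):

# Pin the server to a single cipher so the stress test only measures that one.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'DHE-RSA-AES128-SHA';
ssl_prefer_server_ciphers on;

Repeat the same test with each cipher from your list and compare handshakes per second and CPU load, then build your production cipher string from the ones that hold up.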
A couple of other options:
- Terminate SSL on CloudFront and call your servers over plain HTTP. Requests travel over the AWS network, which is likely safer than the public internet, but you may not want to do this if you handle personal or financial data.
- Terminate SSL on the ELB and call your servers over HTTP. This stays inside your VPC and a single region, so data crosses fewer links. Again, it may not be ideal depending on your data (a sketch of the backend configuration follows below).
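If you go either route, the origin Nginx only needs a plain HTTP server block behind the load balancer. A minimal sketch, assuming a generic backend address and trusted proxy range (both placeholders, not taken from your setup):

server {
    listen 80;
    server_name example.com;

    # Recover the original client IP from the load balancer's header (ngx_http_realip_module).
    set_real_ip_from 10.0.0.0/8;        # placeholder: your ELB/VPC address range
    real_ip_header X-Forwarded-For;

    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder: your Apache backend
        proxy_set_header Host $host;
        # Pass through the original scheme reported by the load balancer.
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    }
}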
Right now you may be terminating SSL on CloudFront, with CloudFront then setting up another SSL session between its edge location and your origin server. That would mean double SSL, increasing your latency, and could be the root cause of your problem. It would also explain why your tuning efforts haven't been as successful as you expected.