Nowadays most sites redirect HTTP traffic to HTTPS for requests to their pages. However, the same can't be said for assets (images, JS, CSS): most assets are available over both HTTP and HTTPS. Is there any particular reason that access to assets over HTTP is not redirected the way page requests are? Why not force HTTPS everywhere?
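For concreteness, forcing HTTPS for everything, assets included, is just a catch-all redirect along these lines in nginx (example.com here is only a placeholder), so the question is really why asset traffic is often left out:

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;   # redirect every HTTP request, pages and assets alike
    }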
I have a very simple proxy config:
http {
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=s3-images-cache:50m inactive=1M max_size=1000m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        listen 80;
        server_name images.example.com;

        location / {
            proxy_cache s3-images-cache;
            proxy_cache_key $scheme$proxy_host$uri$is_args$args;
            proxy_cache_bypass $http_purge_cache;   # bypass the cache when a "PURGE-CACHE" request header is present
            proxy_cache_valid any 1y;               # cache every response status for a year
            proxy_pass http://images-example.s3.amazonaws.com;
            add_header X-Cache $upstream_cache_status;
            proxy_intercept_errors on;              # let error_page handle upstream errors
            error_page 404 = @no_image;
        }

        location @no_image {
            return 403;
        }
    }
}
Now follow me here:
- Let's request /image.jpg.
- The request is sent to the proxy for /image.jpg (which does not exist yet).
- Backend responds with 404.
- "proxy_intercept_errors on" kicks in and "error_page 404 = @no_image" is called.
- Nginx returns 403.
- Do another request for the same image and see that "X-Cache: HIT" is set. We are clearly hitting the proxy cache.
But if we check the /var/www/cache/ folder at this point, we will see that no cache item has been created for this request. Does that mean Nginx keeps the cache for it in memory and forgets to write it to a file?
- Let's upload /image.jpg to the backend.
- Now make a request for that image with a "PURGE-CACHE: 1" header. We now get the image instead of the 403, with the "X-Cache: BYPASS" header present. Good.
If we check /var/www/cache/ we will see that the cache file has now finally been created for this request. Looking inside the cached file we see that it is our image.
- Now here is the problem: let's request /image.jpg again with a normal GET request. We should get the newly uploaded image, right?
- Nginx returns 403 with "X-Cache: HIT". Why? It seems to be hitting the cache but returning something other than what is in the /var/www/cache folder. How?
My only explanation is that Nginx seems to cache the response in memory and never writes it to a file when our custom error_page intercepts an error in the proxied response. Furthermore, when using proxy_cache_bypass it does not overwrite that in-memory cache, so subsequent requests for the same item keep using the old cache stored in memory and not the new one created in the cache folder.
Could someone please let me know if I am doing something wrong or if this really is a bug? I have spent the last 3 days fighting this.
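One workaround I am considering, in case the blanket "proxy_cache_valid any 1y" is part of what keeps the stale 404 alive, is to scope the validity per status code (the 1-minute value for errors is just a guess on my part):

    proxy_cache_valid 200 301 302 1y;   # keep good responses for a long time
    proxy_cache_valid 404 1m;           # let cached 404s expire quickly

I have not confirmed whether this changes the in-memory behaviour described above, though.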
UPDATE: The backend returns the normal set of headers you would expect from S3 in both the 200 and 404 responses:
404 response:
Connection: close
Content-Type: application/xml
Date: Fri, 20 Nov 2015 07:41:39 GMT
Server: AmazonS3
Transfer-Encoding: chunked
x-amz-id-2: bH8L/1dOVGShsGJdZZ/zS/X6UkHS+KMAxDxnPvOkIalpPphFJXr9zZ1RiV6L2a13NXoZ3QdCOeE=
x-amz-request-id: D66FDBFAA9643252

200 response:
Accept-Ranges: bytes
Connection: keep-alive
Content-Length: 10533
Content-Type: image/jpeg
Date: Fri, 20 Nov 2015 07:47:12 GMT
ETag: "061b4dae0b2bbdf4a4fa212951f4ba79"
Last-Modified: Wed, 11 Nov 2015 14:29:09 GMT
Server: AmazonS3
x-amz-id-2: qsSmH/gkvql2jnj67p0vguZBXQJHfS+Yk70llBaDvbgH0xSCbvj9G9JlKn5WhWTdty0+JzApN7k=
x-amz-request-id: 8CF04EA869190E63
I am changing the servers of my website. The IP of the old server cannot be moved to the new one. To have no downtime I am planning to do the following; could someone please confirm it will work:
- Set up the new server and have it listen on the new IP
- Have the old server redirect all traffic to the new IP
- Change DNS records to point to the new IP
My logic tells me that when I redirect to the new IP from my old box, the user will not see the domain name in the browser but will see the new IP. Is there a way to redirect to the new IP and send along the HOSTNAME with it so that the user will see the domain name in the browser?
I'm doing this because the site is in constant use, and simply changing the DNS settings won't do, as the database won't be synced between the new and old servers during propagation.
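Would a reverse proxy on the old box (instead of an HTTP redirect) achieve this? Something like the sketch below, where 203.0.113.10 is just a placeholder for the new server's IP. My understanding is that proxying keeps the original URL in the browser, unlike a redirect:

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://203.0.113.10;         # forward every request to the new box
            proxy_set_header Host $host;            # preserve the original hostname
            proxy_set_header X-Real-IP $remote_addr;
        }
    }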
I am making a dataminer which needs to write to around 50 different files every 30 seconds. Each file is around 50 KB. This process will run 24/7, 365 days a year. The dataminer is built on Node.js, and a website (LAMP) runs on the same VPS (Debian).
From my understanding, constantly writing to disk like this is not ideal.
Do I risk dramatically shortening the life of the disk? Will the whole system become quite slow to respond? Or are 50 files (50 KB each) every 30 seconds nothing to worry about at all?
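For scale, my back-of-the-envelope on the write volume (using the numbers above):

    50 files x 50 KB = 2.5 MB per 30 seconds
                     ≈ 83 KB/s sustained
                     ≈ 7.2 GB per day
                     ≈ 2.6 TB per year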