The default limit on filename length on ext3 is 255 characters. I have an unusual requirement for much longer filenames (apparently because Apache's mod_rewrite uses filenames for storage). Is there any setting I can tweak to increase this limit beyond 255 characters? (Or can I change a mod_rewrite setting so that it doesn't use files?)
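For reference, the limit can be confirmed per filesystem from Python via statvfs; the mount point below is only an example:

import os

# f_namemax is the maximum filename length the filesystem allows;
# on ext3 this is 255 bytes, fixed by the on-disk directory format.
limit = os.statvfs("/var/www").f_namemax  # example mount point, adjust as needed
print("Maximum filename length:", limit)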
I have an unusual problem with one of my servers: disk I/O has been increasing steadily for the last couple of weeks. See this graph from Munin:
From Linode's dashboard, I get a more fine-grained picture of disk I/O. Here is the cyclical / rhythmic graph (one day's interval). Note that even though it appears cyclical, the average disk I/O has been increasing consistently over a period of weeks (see the graph above):
Now, I ran iotop and saw that kjournald is the only process writing to disk (apart from the occasional rsyslogd, but kjournald's write frequency is much, much higher). In the graphs above, the read component of I/O is practically zero.
Why is kjournald writing even when no other process is writing? And why is the size of the writes getting larger by the day?
Another clue: free memory is also monotonically decreasing while "buffers" is increasing. See this graph:
PS: The server runs Apache only. Access logs are disabled, but error logs are enabled. It serves about 80 requests/second. We use Redis as a queue. The disk uses ext3.
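A minimal sketch of how the MemFree/Buffers trend could be sampled independently of Munin (the field names are those used in /proc/meminfo; the 60-second interval is arbitrary):

import time

def meminfo():
    # Parse /proc/meminfo into a dict of kB values, e.g. {"MemFree": 123456, ...}
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.split()[0])
    return values

while True:
    m = meminfo()
    print(time.strftime("%H:%M:%S"), "MemFree:", m["MemFree"], "kB", "Buffers:", m["Buffers"], "kB")
    time.sleep(60)  # arbitrary sampling interval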
I am using Nginx as a proxy in front of four Apache instances. My problem is that SSL negotiation takes a long time (600 ms). See this example: http://www.webpagetest.org/result/101020_8JXS/1/details/
Here is my Nginx config:
user www-data;
worker_processes 4;

events {
    worker_connections 2048;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    sendfile on;
    keepalive_timeout 0;
    tcp_nodelay on;
    gzip on;
    gzip_proxied any;
    server_names_hash_bucket_size 128;

    upstream abc {
        server 1.1.1.1 weight=1;
        server 1.1.1.2 weight=1;
        server 1.1.1.3 weight=1;
    }

    server {
        listen 443;
        server_name blah;
        keepalive_timeout 5;

        ssl on;
        ssl_certificate /blah.crt;
        ssl_certificate_key /blah.key;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 5m;
        ssl_protocols SSLv2 SSLv3 TLSv1;
        ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        ssl_prefer_server_ciphers on;

        location / {
            proxy_pass http://abc;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
The machine is a VPS on Linode with 1 GB of RAM. Can anyone tell me why the SSL handshake is taking so long?
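As a side note, the handshake time can be measured in isolation (outside webpagetest) with a small script like this; the hostname is a placeholder:

import socket
import ssl
import time

HOST = "example.com"  # placeholder for the real server name
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # only the handshake latency matters here

with socket.create_connection((HOST, 443)) as raw:
    start = time.time()
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        print("TLS handshake took %.0f ms" % ((time.time() - start) * 1000))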
I've hired a remote consultant to tune up my servers. However, I am not 100% comfortable giving him the root password and letting him do whatever he wants on the servers. Ideally, I want to see everything he does to my servers (in real time) and also find a way to avoid sharing the root password with him.
Are there any best practices in allowing a remote-consultant to access your server?
EDIT: To clarify, I want to do some kind of screen sharing with the consultant. Is there any method by which his commands are tunneled through my account without him ever getting any password?
PS: My servers are on Ubuntu 9.10
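One way to picture the "tunneled through my account" idea is a wrapper that spawns the consultant's shell and copies everything to a log the owner controls; this is only a sketch of the concept (roughly what script(1) or a logged sudo session does), and the log path is an example:

import os
import pty

LOG_PATH = "/root/consultant-session.log"  # example location

def read_and_log(fd):
    # Called whenever the spawned shell produces output: append it to the log, then pass it through.
    data = os.read(fd, 1024)
    with open(LOG_PATH, "ab") as log:
        log.write(data)
    return data

# Everything the consultant sees (including echoed commands) ends up in the log.
pty.spawn(["/bin/bash"], master_read=read_and_log)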
I own and operate visualwebsiteoptimizer.com/. The app provides a code snippet which my customers insert into their websites to track certain metrics. Since the code snippet is external JavaScript (at the top of the site code), a visitor's browser contacts our app server before rendering the customer's website. If our app server goes down, the browser keeps trying to establish the connection until it times out (typically after 60 seconds). As you can imagine, we cannot afford to have our app server down in any scenario, because it negatively affects the experience of not just our website's visitors but our customers' website visitors too!
We are currently using a DNS failover mechanism with one backup server located in a different data center (actually a different continent). That is, we monitor our app server from 3 separate locations, and as soon as it is detected to be down, we change the A record to point to the backup server's IP. This works fine for most browsers (our TTL is 2 minutes), but IE caches DNS entries for 30 minutes, which might be a deal breaker. See this recent post of ours: visualwebsiteoptimizer.com/split-testing-blog/maximum-theoretical-downtime-for-a-website-30-minutes/
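Reduced to a sketch, the monitor-and-switch loop described above looks something like this; the IPs, the health-check URL, and update_a_record are all placeholders for whatever the monitoring locations and the DNS provider's API actually provide:

import time
import urllib.request

PRIMARY_IP = "198.51.100.10"  # placeholder addresses
BACKUP_IP = "203.0.113.10"

def app_is_up(url, timeout=5):
    try:
        return urllib.request.urlopen(url, timeout=timeout).status == 200
    except Exception:
        return False

def update_a_record(ip):
    # Placeholder: call the DNS provider's API here to repoint the A record.
    print("would point the A record at", ip)

current = None
while True:
    target = PRIMARY_IP if app_is_up("http://app.example.com/health") else BACKUP_IP
    if target != current:
        update_a_record(target)
        current = target
    time.sleep(120)  # matches the 2-minute TTL mentioned above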
So, what kind of setup can we use to ensure almost instant failover in case the app data center suffers a major outage? I read at www.tenereillo.com/GSLBPageOfShame.htm that having multiple A records is a solution, but we can't afford session synchronization (yet). Another strategy we are exploring is having two A records: one pointing to the app server, and a second pointing to a reverse proxy (located in a different data center) that forwards to the main app server if it is up and to the backup server if it is down. Do you think this strategy is reasonable?
Just to be clear about our priorities: we can afford to have our own website or app down, but we can't let customers' websites slow down because of our downtime. So, if our app servers are down, we don't intend to serve the normal application response; even a blank response will suffice. We just need the browser to complete that HTTP connection (and nothing else).
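To illustrate what "even a blank response will suffice" means, the fallback box could run something as small as the sketch below; the port and content type are assumptions:

from http.server import BaseHTTPRequestHandler, HTTPServer

class EmptyJS(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return an empty but valid JavaScript body immediately, so the
        # customer's page never blocks waiting for our snippet to time out.
        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the fallback server quiet

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EmptyJS).serve_forever()  # example port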
Reference: I found this thread useful: serverfault.com/questions/69870/multiple-data-centers-and-http-traffic-dns-round-robin-is-the-only-way-to-assure