I understand that SQL Server doesn't release memory unless the OS needs it, so monitoring Available Bytes (free memory) is not the best way to monitor the service. What other counters can give me a real measure of how SQL Server is behaving? Maybe Pages/sec or Page Faults/sec? I'm using Nagios to monitor the service, and sometimes alerts are triggered just because a big query is being executed.
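For context, the kind of counters I'm thinking of polling (beyond plain free memory) would be something like this; the counter paths assume a default SQL Server instance (a named instance uses MSSQL$<name> instead of SQLServer) and the sample interval and count are arbitrary:

REM sample the SQL Server buffer counters and Pages/sec a few times
typeperf "\SQLServer:Buffer Manager\Page life expectancy" ^
         "\SQLServer:Buffer Manager\Buffer cache hit ratio" ^
         "\Memory\Pages/sec" -si 15 -sc 4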
Yesterday my web sites were down for a short time. I logged on to my server and my first reaction was to restart the Apache web server. After that, everything was working fine. So I started checking the Ganglia metrics to see what had happened. It was clear that one minute before I restarted Apache, the number of requests to the web server was very high, exceeding Apache's limits and blocking other requests.
I manually checked the Apache logs, filtering the traffic for the minutes before and after the restart. There were no signs of anything wrong. I also analyzed the logs with some tools (awstats, a bots script, etc.) with similar results. I did the same with the error logs, checking carefully for any strange behaviour. No success.
So I'm pretty sure the problem was a sudden spike in requests to the Apache web server. But I don't know how this happened: whether it was an attack, some nasty bug, a problem in the application, or something else I'm not aware of. What would you do if something similar happened on your web server? What other tools would you use? What other logs would you check? Was it wrong to restart the web server as the first measure to solve the problem?
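For what it's worth, the manual filtering I did on the access log was roughly along these lines (the log path is just an example):

# count requests per minute, to see exactly when the spike started
awk '{print $4}' /var/log/apache2/access.log | cut -d: -f1-3 | sort | uniq -c

# top client IPs over the same log
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head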
I have a 3-layer web solution like this:
- Frontend with load balancing + proxies + static content
- Backend with 2 Apache web servers, each one serving different sites
- A publishing system that pushes content to the Apache web servers
So I am working on a high-availability solution for the web servers in the backend. My idea is to replicate the content between the backend servers so that if one fails, the other will serve all the sites (the failover could be manual or handled by Heartbeat).
The problem is that the sites are big in terms of total size and number of files. I tried replicating the content between servers with rsync, but it takes a long time. I also thought of using NFS to share the content, but that is not an option for high availability. Another way is for the publishing system to push content to both web servers, but what will happen when I add another web server to the backend?
Is there a better way to do this? I don't need both servers serving the same content at the same time, but having the same content synchronized is a must.
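The rsync job I've been testing looks roughly like this (the paths and the target hostname are placeholders):

# one-way sync of the document root to the other backend server
rsync -az --delete /var/www/sites/ web2:/var/www/sites/

Even though only changed files are transferred, walking that many files on every run seems to be what takes most of the time.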
In Amazon EC2 I have a setup of proxies and one monitoring node (MON). I've installed gmond on the proxies and gmetad on MON. The data_source line in gmetad.conf on MON looks like this:
data_source "proxies" proxy1:8654 proxy2:8654 proxy3:8654
In the proxies' gmond.conf I have:
tcp_accept_channel {
  port = 8654
}
Everything seems to be working: when I telnet from MON to the proxies I get the XML with the right data. The problem is that the web frontend only shows one source from the cluster "proxies"; in fact, it shows the first source I put on the list, in this case proxy1. If I change the order to:
data_source "proxies" proxy2:8654 proxy3:8654 proxy1:8654
It only shows data from proxy2.
I've set up other Ganglia installations using TCP or UDP, even through SSH tunnels, but this is the first time I've seen this behaviour. I'm not using multicast because (as far as I know) Amazon doesn't support it on their network. Why is Ganglia-Web only showing one data source?
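For comparison, the unicast layout described in the Ganglia documentation is to have every gmond forward its metrics to a single collector node via udp_send_channel, and let gmetad poll just that node. A rough sketch of that gmond.conf (the collector hostname and UDP port are only examples, not my current config):

# on every proxy: forward metrics to one collector node (proxy1 is just an example)
udp_send_channel {
  host = proxy1
  port = 8649
}

# on the collector: receive the forwarded metrics from the other proxies
udp_recv_channel {
  port = 8649
}

# keep the TCP channel so gmetad can poll the aggregated XML
tcp_accept_channel {
  port = 8654
}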
I'm testing nginx with different configurations to replace an architecture based on Squid + Apache. I know that I can use nginx to handle static requests and do load balancing, but I'm interested in one particular setup that I don't clearly understand:
I'm using 2 nginx servers (load balanced) with the proxy_pass directive to pass all requests to an Apache server. When a client makes a request to the site, one of the nginx servers processes it and sends it to the Apache server. Now, how could this behaviour be an improvement to my system? It seems that all requests still pass through Apache, and I don't see any benefit at all. What happens when 100 simultaneous connections pass through nginx? Will all 100 connections go to the Apache server, or is there some kind of internal behaviour that reduces the impact on Apache?
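To make the setup concrete, the proxy_pass configuration I'm experimenting with looks roughly like this (the upstream name and backend hostname are made up):

upstream apache_backend {
    server apache1.example.com:80;
}

server {
    listen 80;

    location / {
        # with nginx's default proxy_buffering, the response from Apache is
        # read quickly and the slow client is then served by nginx itself,
        # so the Apache worker is freed sooner
        proxy_pass http://apache_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}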
I'm starting to test EC2 for a couple of new projects. I need to choose an AMI (Amazon Machine Image), and Amazon's first suggestion was Fedora Core 8, which is a very old version of one of my favourite distributions. There are a lot of choices, but it's not clear to me which one is the better option. I have my own criteria for choosing a distro and a version when I install a new server, but I don't know if I can apply the same criteria to EC2. I know there is a beta for RHEL; how stable is that beta? And how can I choose between all the CentOS AMIs in the list?
So this is my question: which AMI do you recommend to start with on EC2?
Thanks
I was searching for a tool to capture HTTP packets sent from a Linux server to an external server. Normally I use iftop or iptraf with filters to see real-time information, and tcpdump to get verbose information. But what I need right now is some way to log all the URL requests to a file. I know that I can configure a proxy to log all this information, but that is impossible because of our current architecture. Do you know of a tool that can capture this information, or do I need to write a script to process the output of tcpdump?
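The closest I've gotten with the tools I already use is dumping the ASCII payload and grepping the request lines into a log; the interface name, port, and output path below are assumptions:

# capture outgoing HTTP traffic and keep only request lines and Host headers
# (eth0, port 80 and the log path are assumptions)
tcpdump -i eth0 -A -s 0 'tcp dst port 80' \
  | egrep --line-buffered '^(GET|POST|HEAD) |^Host: ' >> /var/log/outgoing-urls.log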