I have a server running a few discrete Python and Java applications, which mostly import data into a PostgreSQL database. I would like to hear from people who have experience tuning enterprise-grade servers: how do I go about working out, in a holistic way, how much tuning my server needs, for example vm.swappiness, vm.overcommit_ratio and the other numeric tunables?
I tried enabling sar on my server to capture daily numbers, but those are system-wide totals, and I can't work out from them how much memory to allocate to my applications. Help would be appreciated.
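For reference, this is roughly what I did to enable the collection (Debian-style paths, so adjust for your distribution):

    # turn on sysstat's periodic data collection (Debian/Ubuntu)
    sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
    sudo service sysstat restart
    # then pull, for example, today's memory figures
    sar -r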
Thanks.
First, determine where the bottleneck in the system is (slow PostgreSQL queries, slow I/O, slow scripts). Use the standard monitoring tools (top, htop, iostat, vmstat, pg_top, iotop) to narrow down the cause. Then pick the appropriate fix (pgtune, kernel parameters, mount options, JVM options, or rewriting the source code).
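As a rough sketch of what the kernel-parameter side can look like (the values here are only examples, not recommendations for your workload):

    # sample memory, swap and run-queue activity every 5 seconds
    vmstat 5
    # extended per-device I/O stats (utilisation, await) every 5 seconds
    iostat -x 5
    # if the box swaps while RAM is still free, try lowering swappiness
    sudo sysctl vm.swappiness=10
    # make the change persistent across reboots
    echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/90-tuning.conf

The PostgreSQL documentation also suggests vm.overcommit_memory = 2 on a dedicated database server to keep the OOM killer away from the postmaster, which is where vm.overcommit_ratio starts to matter.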
You should install some kind of monitoring tool to collect statistics. In my opinion, the best way to analyze them is with visual graphs of the monitored parameters; Cacti or Zabbix will give you those. Most likely your system is I/O bound or memory bound, so start by monitoring memory usage, swap usage, I/O statistics and load average on the host. Cacti has a good set of templates for monitoring I/O stats: http://www.markround.com/archives/48-Linux-iostat-monitoring-with-Cacti.html

It would also be useful to get a list of slow SQL queries; you can do that by setting something like
log_min_duration_statement = 200
in postgresql.conf, where 200 is the threshold in milliseconds; you can use any other number as well.
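If you go that route, a minimal postgresql.conf sketch could look like this (the threshold and prefix are only examples):

    # log every statement that takes longer than 200 ms
    log_min_duration_statement = 200
    # collect server log output into files (needs a server restart)
    logging_collector = on
    # prefix each log line with timestamp, pid, user and database
    log_line_prefix = '%t [%p] %u@%d '

The first and last settings take effect after a reload (pg_ctl reload or SELECT pg_reload_conf()).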