Cron scripts are executed in a "limited" environment; PATH and friends are very restricted.
How can I emulate this environment from an interactive shell, so that I can debug scripts that work great from a user shell but fail under cron?
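What I have in mind so far is a rough sketch along these lines (a throwaway crontab entry plus env -i; the script path is just a placeholder, and this breaks if any variable value contains spaces):
# temporary crontab entry that dumps cron's environment once
* * * * * env > /tmp/cron-env
# then, from an interactive shell, re-run the failing script under that environment
env -i $(cat /tmp/cron-env) /bin/sh /path/to/myscript.sh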
I need to monitor my Ubuntu Linux server's performance. Before diving into Nagios / Zabbix-style "enterprise server monitoring" solutions, I would prefer something more lightweight.
My requirements are simple:
The list of nice-to-haves goes deep:
I've looked into Ganglia and Munin, and they require Apache to run their web front ends.
-- EDIT:
Effectively, I would be happy with something that can collect sysstat or dstat data in RRD format, graph it, and make it accessible as a web page.
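To illustrate the level of "lightweight" I'm after, something close to this rrdtool sketch (load average only, file paths are just placeholders) would already do:
# one RRD holding the 1-minute load average, sampled every 60 seconds, kept for a day
rrdtool create /var/lib/load.rrd --step 60 DS:load1:GAUGE:120:0:U RRA:AVERAGE:0.5:1:1440
# fed from cron or a small loop
rrdtool update /var/lib/load.rrd N:$(cut -d' ' -f1 /proc/loadavg)
# and rendered to a PNG that any web server can serve
rrdtool graph /var/www/load.png DEF:l=/var/lib/load.rrd:load1:AVERAGE LINE1:l#0000ff:load1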
I am looking for a method / hack / kernel module to capture the network traffic of a PID and all of its forks / child processes.
I have a Firefox instance that opens some web pages and starts streaming content via Flash, WMV, or other streaming protocols, as well as doing "simple" downloads of images, JS and other "static" content.
I'm interested in capturing this traffic and ultimately isolating these streams.
Wireshark does not support capturing by process ID, but I assume this can be worked around (and this is the core of my question). Obviously, setting up a full virtual machine and running just Firefox with Wireshark inside it would work, but I would be much more satisfied with a lighter-weight solution, perhaps based on chroot combined with the iptables owner module.
So ideas or complete solutions would be greatly appreciated.
-- EDIT:
People are rightfully asking about the OS I'm working on: the question is mainly aimed at Linux, but should a workable solution exist on Windows / OpenSolaris / Mac OS X or any other reasonably hacker-accessible OS, that answer would be accepted too.
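To make the owner-module idea concrete, this is roughly the sketch I had in mind (the dedicated user "ffcapture" is just a placeholder, and the whole thing is untested):
# run the browser as a dedicated user so its traffic is attributable
sudo -u ffcapture firefox &
# send only packets generated by that user to an NFLOG group
# (the owner match only works in OUTPUT, so this covers outgoing packets only)
iptables -A OUTPUT -m owner --uid-owner ffcapture -j NFLOG --nflog-group 1
# capture those packets with tshark from the nflog pseudo-interface
tshark -i nflog:1 -w /tmp/ffcapture.pcap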
This is the problem:
root@ip-10-126-247-82:~# mkfs.ext4 /dev/xvda3
mke2fs 1.41.14 (22-Dec-2010)
/dev/xvda3 is mounted; will not make a filesystem here!
And this is the debugging:
root@ip-10-126-247-82:~# mount
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
Furthermore, mke2fs is happy to format /dev/xvda2, and xvda1, xvda2 and xvda3 are different devices:
root@ip-10-126-247-82:~# ls -la /dev/xvda*
brw-rw---- 1 root disk 202, 1 2011-12-21 18:54 /dev/xvda1
brw-rw---- 1 root disk 202, 2 2011-12-22 10:33 /dev/xvda2
brw-rw---- 1 root disk 202, 3 2011-12-21 18:54 /dev/xvda3
root@ip-10-126-247-82:~# cat /proc/partitions
major minor #blocks name
202 1 10485760 xvda1
202 2 356485632 xvda2
202 3 917504 xvda3
It won't format xvda1 (correct)
root@ip-10-126-247-82:~# mkfs.ext4 /dev/xvda1
mke2fs 1.41.14 (22-Dec-2010)
/dev/xvda1 is mounted; will not make a filesystem here!
It will format xvda2 (correct)
root@ip-10-126-247-82:~# mkfs.ext4 /dev/xvda2
mke2fs 1.41.14 (22-Dec-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
22282240 inodes, 89121408 blocks
4456070 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
2720 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
It won't format xvda3 (incorrect)
root@ip-10-126-247-82:~# mkfs.ext4 /dev/xvda3
mke2fs 1.41.14 (22-Dec-2010)
/dev/xvda3 is mounted; will not make a filesystem here!
-- EDIT:
Adding lsof debug as @Janne Pikkarainen suggests:
root@ip-10-126-247-82:~# lsof -n | grep '202,3'
root@ip-10-126-247-82:~# lsof -n | grep 'xvda3'
root@ip-10-126-247-82:~#
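Other places I plan to check, in case mke2fs consults something other than the mount table (my understanding is that it may also refuse if the device appears in /etc/mtab or is in use as swap):
cat /proc/swaps
grep xvda3 /etc/mtab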
My client machine runs Ubuntu 11.04, my server machine Ubuntu 10.10. I'm trying to achieve the simplest, quick-and-dirty solution possible that redirects all of my client machine's traffic to the server machine and from there to the internet.
For that I'm trying to follow this guide: http://openvpn.net/index.php/open-source/documentation/miscellaneous/78-static-key-mini-howto.html
Being new to OpenVPN, I've looked at the logs, but I think the client does not even attempt to contact the server to open the connection. Am I missing some configuration option, or should I not be starting the client the same way I'm starting the server daemon?
On the server I have configured the following:
root@domU-12-31-39-16-42-4D:/etc/openvpn# cat /etc/openvpn/server.conf
dev tun
ifconfig 10.8.0.1 10.8.0.2
secret /etc/openvpn/static.key
push "redirect-gateway def1 bypass-dhcp"
proto udp
comp-lzo
status /var/log/openvpn-status.log
log-append /var/log/openvpn.log
keepalive 10 120
persist-key
persist-tun
ping-timer-rem
verb 7
On the client machine I have configured the following:
root@maxim-desktop:/etc/openvpn# cat /etc/openvpn/client.conf
dev tun
ifconfig 10.8.0.1 10.8.0.2
secret /etc/openvpn/static.key
proto udp
comp-lzo
persist-key
persist-tun
keepalive 10 120
persist-key
persist-tun
ping-timer-rem
status /var/log/openvpn-status.log
log-append /var/log/openvpn.log
remote ec2-50-17-124-16.compute-1.amazonaws.com 1194
resolv-retry infinite
verb 7
I'm basically following this guide http://openvpn.net/index.php/open-source/documentation/miscellaneous/78-static-key-mini-howto.html and still, when I open the VPN connection on the client side, my traffic does not get redirected through the VPN server.
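For what it's worth, these are the checks I would run on the client to see whether the tunnel is up at all (my own guesses, not taken from the guide):
# is the tun interface up, and does the server end of the tunnel answer?
# (10.8.0.1 is the server's tunnel address per server.conf)
ifconfig tun0
ping -c 3 10.8.0.1
# has the default route been pushed through the tunnel?
route -n
# anything useful in the client log?
tail -n 50 /var/log/openvpn.log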
I need a monitoring system, much like Ganglia / Nagios, that is built for the cloud.
I need it to support:
More nice-to-have features include: an external API, a web interface, and so on.
I've looked at Ganglia and Munin, and they both seem to be almost there (but not exactly). I would also consider a reasonably priced Software-as-a-Service solution.
I'm currently doing research, so suggestions are highly appreciated.
Thank you,
Maxim
I'm looking to tweak the default setup of the Ubuntu cloud image, which denies root login.
Attempting to connect to such a machine yields:
maxim@maxim-desktop:~/workspace/integration/deployengine$ ssh root@ec2-204-236-252-95.compute-1.amazonaws.com
The authenticity of host 'ec2-204-236-252-95.compute-1.amazonaws.com (204.236.252.95)' can't be established.
RSA key fingerprint is 3f:96:f4:b3:b9:4b:4f:21:5f:00:38:2a:bb:41:19:1a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ec2-204-236-252-95.compute-1.amazonaws.com' (RSA) to the list of known hosts.
Please login as the ubuntu user rather than root user.
Connection to ec2-204-236-252-95.compute-1.amazonaws.com closed.
I would like to know in which configuration file the blocking of root over SSH is configured, and how I can change the printed message.
I would like to put artificial load on the server. I'm looking for a burn-in or benchmark command-line utility that generates CPU load on the system.
I would like to burn in the CPU only (no hard disk load, network and so on) and to be able to set the period for which the load runs; i.e., I want something that can produce CPU load for, say, 10 minutes on the system.
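To illustrate what I mean, here is the kind of crude sketch I would rather not maintain by hand (one busy loop per core, stopped after 10 minutes):
# start one busy loop per CPU core
for i in $(seq $(grep -c ^processor /proc/cpuinfo)); do
    yes > /dev/null &
done
sleep 600          # let it burn for 10 minutes
kill $(jobs -p)    # then stop all the loops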
Any ideas?
Try executing the following under a bash shell: echo "Reboot your instance!"
On my installation:
root@domU-12-31-39-04-11-83:/usr/local/bin# bash --version
GNU bash, version 4.1.5(1)-release (i686-pc-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
root@domU-12-31-39-04-11-83:/usr/local/bin# uname -a
Linux domU-12-31-39-04-11-83 2.6.35-22-virtual #35-Ubuntu SMP Sat Oct 16 23:57:40 UTC 2010 i686 GNU/Linux
root@domU-12-31-39-04-11-83:/usr/local/bin# echo "Reboot your instance!"
-bash: !": event not found
Can anyone please explain what "bash events" are? I've never heard of this concept before. Also, how should I output "!" at the end of the sentence?
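For reference, the variants I can think of trying (I would like to understand why they behave differently, not just which one happens to work):
echo 'Reboot your instance!'    # single quotes: apparently no history expansion happens here
set +H                          # or turn the history expansion mechanism off entirely
echo "Reboot your instance!"    # after set +H, the ! should be taken literally (I think)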
I have an application that I would like to start in a "virtualized environment": I don't want this application to be able to read or write any files on my local file system. A bonus would be being able to monitor everything this application does.
The application is graphical, if that matters.
Can I do this with existing Linux tools? Can I emulate this behaviour with chroot?
I don't want to run a full VirtualBox VM just for one application; that seems like overkill.
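The closest I can picture with standard tools is a chroot sketch like this (paths, the application name and the unprivileged user are all placeholders, and I suspect a graphical app needs more plumbing than this):
# build a throwaway root with just the application copied in
mkdir -p /tmp/jail/bin /tmp/jail/lib /tmp/jail/tmp
cp /usr/bin/theapp /tmp/jail/bin/
# (its shared libraries, as listed by ldd, would have to be copied in as well)
# run it as an unprivileged user inside the jail
chroot --userspec=nobody:nogroup /tmp/jail /bin/theapp &
# and watch what it tries to touch
strace -f -e trace=file -p $(pgrep theapp)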
Thank you, Maxim.
I've purchased a domain as part of a Google Apps "standard" edition signup.
This is how the zone record was configured on godaddy:
; SOA Record
VEKSLERS.ORG. 3600 IN SOA ns33.domaincontrol.com. dns.jomax.net. (
2010091700
28800
7200
604800
86400
)
; A Records
@ 3600 IN A 216.239.32.21
@ 3600 IN A 216.239.34.21
@ 3600 IN A 216.239.36.21
@ 3600 IN A 216.239.38.21
; CNAME Records
www 3600 IN CNAME ghs.google.com
calendar 3600 IN CNAME ghs.google.com
mail 3600 IN CNAME ghs.google.com
start 3600 IN CNAME ghs.google.com
docs 3600 IN CNAME ghs.google.com
; MX Records
@ 3600 IN MX 10 aspmx.l.google.com
@ 3600 IN MX 20 alt1.aspmx.l.google.com
@ 3600 IN MX 20 alt2.aspmx.l.google.com
@ 3600 IN MX 30 aspmx2.googlemail.com
@ 3600 IN MX 30 aspmx3.googlemail.com
@ 3600 IN MX 30 aspmx4.googlemail.com
@ 3600 IN MX 30 aspmx5.googlemail.com
; SRV Records
_xmpp-server._tcp.@ 3600 IN SRV 5 0 5269 xmpp-server.l.google.com
_xmpp-server._tcp.@ 3600 IN SRV 20 0 5269 xmpp-server1.l.google.com
_xmpp-server._tcp.@ 3600 IN SRV 20 0 5269 xmpp-server2.l.google.com
_xmpp-server._tcp.@ 3600 IN SRV 20 0 5269 xmpp-server3.l.google.com
_xmpp-server._tcp.@ 3600 IN SRV 20 0 5269 xmpp-server4.l.google.com
_jabber._tcp.@ 3600 IN SRV 5 0 5269 xmpp-server.l.google.com
_jabber._tcp.@ 3600 IN SRV 20 0 5269 xmpp-server1.l.google.com
_jabber._tcp.@ 3600 IN SRV 20 0 5269 xmpp-server2.l.google.com
_jabber._tcp.@ 3600 IN SRV 20 0 5269 xmpp-server3.l.google.com
_jabber._tcp.@ 3600 IN SRV 20 0 5269 xmpp-server4.l.google.com
; NS Records
@ 3600 IN NS ns33.domaincontrol.com
@ 3600 IN NS ns34.domaincontrol.com
Now, I know that Google App Engine does not support "naked domains", and indeed forwarding is configured from vekslers.org to www.vekslers.org.
What I don't understand is how this is set up. Assuming that "@" in the A records means the zone root (?), these configured IPs lead to Google servers; does the App Engine team run a default redirect from foo.com to www.foo.com whenever foo.com is a registered Google Apps domain?
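For what it's worth, I assume the behaviour can be observed from the outside with something like:
# what the naked domain resolves to
dig +short vekslers.org A
# and what those addresses actually answer for the naked domain
curl -sI http://vekslers.org/ | head -n 5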
Clarifications would be highly appreciated, thanks.
I'm wondering what software the web-scale guys are using to monitor the arrays of servers in their server farm(s).
What do Facebook, Twitter and Digg use? How does Google do it?
I'm looking for a solution to our own monitoring requirements. Our servers sit in the cloud, on App Engine & EC2. We are looking to monitor the "application" (which is built from many small services), meaning that the end result should be a system that can monitor both response time (plus aliveness and the like) and application validity: if I do X then Y should happen, and two hours later verify that Z was processed and T was appended to the correct log...
The ideal solution would be a system that I can deploy unit tests to: the same unit tests I use to test the software while developing.
Recommendations, pointers, comments are highly welcome - I'm looking for directions to attack this issue.
Thanks, Maxim.
One of the (few) good features Windows has is its RDP protocol implementation. This wonder allows me to work with my two-screen setup at the office, then drive home, open a VPN connection followed by an RDP connection to the office PC, and get my environment exactly as I left it (except for the screen resolution, which adapts to my home PC's screen hardware).
The above works, and it works great - on Windows. I want the same feature on Linux. I want to be able to open a GNOME / KDE / (any other window manager that supports this) session at the office computer, then connect from home and have the display exported to my current screen.
I've tried several possible workarounds, like having a VNC session constantly open and connecting to it both from work and from home - this works, but it is no fun (you lose the responsiveness of "native" applications, access to local storage and so on).
Could you suggest a solution? Perhaps some Xorg plugin?
Thank you for reading, Maxim.
Server applications running on Linux often require large numbers of open file handles, for example HBase's ulimit requirements or Hadoop's epoll limit.
This wiki entry should serve as documentation for configuring Linux file limits.
Please describe the Linux distribution under which your configuration is valid, as the various vendors configure things differently.
Update, based on lstvan's answer:
For people looking to automate this, at least on Ubuntu servers you can put this in your machine installation scripts:
echo 'fs.file-max = 65000' > /etc/sysctl.d/60-file-max.conf
echo '* soft nofile 65000' > /etc/security/limits.d/60-nofile-limit.conf
echo '* hard nofile 65000' >> /etc/security/limits.d/60-nofile-limit.conf
echo 'root soft nofile 65000' >> /etc/security/limits.d/60-nofile-limit.conf
echo 'root hard nofile 65000' >> /etc/security/limits.d/60-nofile-limit.conf
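A quick sanity check after running the above (my assumption being that the sysctl file has to be reloaded, and that the limits.d entries only apply to new logins, since they are read through pam_limits):
sysctl -p /etc/sysctl.d/60-file-max.conf   # reload the fs.file-max setting
cat /proc/sys/fs/file-max                  # should now report 65000
ulimit -n                                  # per-process limit, checked from a fresh login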
I'm trying to configure my Ubuntu server to sync with pool.ntp.org, following this guide: https://help.ubuntu.com/community/UbuntuTime.
I've configured my ntp.conf as follows:
cat /etc/ntp.conf
# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
driftfile /var/lib/ntp/ntp.drift
# Enable this if you want statistics to be logged.
statsdir /var/log/ntpstats/
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
# You do need to talk to an NTP server or two (or three).
server 0.north-america.pool.ntp.org
server 1.north-america.pool.ntp.org
server 2.north-america.pool.ntp.org
server 3.north-america.pool.ntp.org
# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
# details. The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
#
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.
# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery
# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1
# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust
# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
#broadcast 192.168.123.255
# If you want to listen to time broadcasts on your local subnet, de-comment the
# next lines. Please do this only if you trust everybody on the network!
#disable auth
#broadcastclient
Then I set the system date far, far in the past:
date -s "2 OCT 2006 18:00:00"
Then I tried to restart ntpd, but the time is still 2006.
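(By "restart" I mean the stock init script, i.e. something along the lines of: service ntp restart)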
ntpq --peers; date
remote refid st t when poll reach delay offset jitter
==============================================================================
dev1-c.sje007.i 209.81.9.7 2 b 48 64 177 80.623 1205991 1100914
ox.eicat.ca 139.78.135.14 2 b 18 64 377 24.743 1205991 1019249
ntp1.Housing.Be 169.229.128.214 3 b 62 64 177 94.714 -5.160 6962796
ns1.your-site.c 10.1.5.2 3 b 26 64 177 10.913 -9.521 6962796
Mon Oct 2 18:02:29 UTC 2006
Why doesn't ntp behave?
I want my Apache to return 404 for all HTTP GET requests, including HTTP GET /.
I tried to play a bit with mod_rewrite for this (404.gif obviously does not exist):
RewriteEngine on
RewriteRule .* 404.gif [L]
But it doesn't seem to behave very nicely; for some reason this returns 400 Bad Request.
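My guess is that the rule should emit the status itself rather than rewrite to a non-existent file; perhaps one of these (both untested on my side):
# guess 1: let mod_rewrite set the status directly
RewriteEngine on
RewriteRule .* - [R=404,L]
# guess 2: skip mod_rewrite and use mod_alias instead
Redirect 404 /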
Could someone please provide a configuration example for Apache 2 that will cause it to always return the wonderful 404?
P.S. I forgot to mention: I will be using this configuration for both HTTP & HTTPS.
Thank you, Maxim.
I'm wondering who does DNS geolocation for google.com. By request geolocation I mean detecting the location of the user, calculating the nearest server farm to that location, and then routing the request to the selected location.
We are evaluating several offers from different vendors for this service and I thought I should know what the big boys are using.
I would like to open a discussion that would accumulate your Linux command line (CLI) best practices and tips.
I've searched for such a discussion to share the below comment but haven't found one, hence this post.
I hope we all could learn from this.
You are welcome to share your Bash, grep, sed, AWK and /proc tips, and all other related Linux/Unix system administration and shell programming best practices, for the benefit of us all.