On Linux/bash I'd like to append the stdout of a command to a file, but not redirect it away from the terminal (i.e. I want it to go to both the console and the file). Any clues?
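For what it's worth, a minimal sketch of the usual approach, assuming the standard tee utility is available (tee copies its stdin to stdout and to the named file; -a appends instead of truncating):

some_command | tee -a output.log          # stdout goes to the console and is appended to output.log
some_command 2>&1 | tee -a output.log     # variant that captures stderr as well

Here some_command and output.log are just placeholders.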
I'm trying to find out where a port is being blocked by a firewall: either en route to a host, or by the host itself.
If I run nmap I can see that the port is filtered. However, it could be filtered by the host 192.168.1.74 itself or by any firewall in between. Is there a way to find out exactly where?
joel@bohr ~ $ nmap -A 192.168.1.74 --traceroute
Starting Nmap 5.21 ( http://nmap.org ) at 2011-12-18 20:27 GMT
Warning: Traceroute does not support idle or connect scan, disabling...
Nmap scan report for android-63731d6ebec9e01.lan (192.168.1.74)
Host is up (0.040s latency).
Not shown: 999 closed ports
PORT     STATE    SERVICE VERSION
2222/tcp filtered unknown
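One hedged way to narrow down where the filtering happens, assuming tcptraceroute (or a traceroute built with TCP support) is installed, is to trace with TCP probes aimed at the filtered port and see at which hop the responses stop:

sudo tcptraceroute 192.168.1.74 2222              # TCP SYN probes to port 2222
sudo traceroute -T -p 2222 192.168.1.74           # same idea using modern traceroute's TCP mode
sudo nmap -sS -p 2222 --traceroute 192.168.1.74   # SYN scan, so nmap's traceroute isn't disabled

If the probes die at an intermediate hop, a device on the path is dropping them; if they reach 192.168.1.74 and the port still shows as filtered, the host's own firewall is the more likely culprit.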
Any recommendations for a free tool to profile memory/swap/CPU usage on Linux (Ubuntu), preferably with graphical timeline charts?
I'm thinking of something like ntop but for memory, i.e. a tool that provides a web interface to the collected data (as this will be running on a remote server).
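Not a tool recommendation as such, but a hedged sketch of collecting the raw numbers with sysstat in the meantime (assuming the sysstat package is installed and, for historical data, its collector is enabled); the output could then be fed into whatever does the charting:

sar -u 5 60    # CPU utilisation, sampled every 5 seconds, 60 samples
sar -r 5 60    # memory usage
sar -S 5 60    # swap usage (older sysstat versions fold this into sar -r)
sadf -d /var/log/sysstat/sa$(date +%d) -- -r    # dump today's collected memory data in a parse-friendly format (Ubuntu path)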
I'm trying to fetch a page programmatically and it takes exactly 10 seconds to resolve the host, every time. On another machine it takes exactly 30 seconds. Both are Linux.
My code is in Java, but the problem is reproducible using wget:
time wget -d --header "User-Agent:Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Ubuntu/10.10 Chromium/11.0.696.65 Chrome/11.0.696.65 Safari/534.24" http://www.sportsdirect.com
This hangs for 10 secs on:
Resolving www.sportsdirect.com... 86.17.5.250
We're running on Linux.
To confuse things further, browsers on the same machine fetch the same page immediately.
Any clues?
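A hedged diagnostic sketch, on the theory that the fixed 10/30-second delays come from resolver timeouts rather than the web server (getent goes through the same NSS/resolv.conf path that wget and Java use, whereas dig talks to the nameserver directly):

time getent hosts www.sportsdirect.com    # slow here => the delay is in the system resolver path
time dig +short www.sportsdirect.com      # fast here while getent is slow => look at local config
cat /etc/resolv.conf /etc/nsswitch.conf   # check for dead nameservers, odd search domains, etc.

The exact 10s/30s figures would be consistent with the resolver's default 5-second timeout multiplied by the number of retries/nameservers, but that's an assumption to verify, not a diagnosis.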
Given a Linux server (in my case Ubuntu), what's the easiest way to find out what make/model of hard disk and memory are being used?
i.e. what is the equivalent of /proc/cpuinfo for disk and memory?
Thanks. J
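A hedged sketch of the commands that usually expose this, assuming dmidecode, lshw, hdparm and smartmontools are available (most need root):

sudo dmidecode --type memory          # DIMM manufacturer, size, speed and part numbers
sudo lshw -class memory -class disk   # summary of memory banks and disk make/model
sudo hdparm -I /dev/sda               # drive model, serial number and firmware
sudo smartctl -i /dev/sda             # similar identity info via smartmontools
cat /proc/scsi/scsi                   # closest /proc analogue for disks on older kernels

There isn't a single /proc file quite like /proc/cpuinfo for these; the DMI and device tools above are the usual substitutes.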
I've previously used a great script on Linux servers to report on all sorts of security issues.
It generates a comprehensive list of potential security flaws on the machine including:
- out of date software / bugs
- open ports
- incorrect privileges
And it summarises everything in a report along with suggested fixes.
The problem is I can't remember the name of the script or where to find it.
Any clues?! J
I have a Debian server running Etch. I can no longer seem to run apt-get update properly:
root@charm osqa ] apt-get update
Ign http://ftp.uk.debian.org etch Release.gpg
Ign http://ftp.uk.debian.org etch Release
Err http://ftp.uk.debian.org etch/main Packages
404 Not Found
Err http://ftp.uk.debian.org etch/contrib Packages
404 Not Found
Err http://ftp.uk.debian.org etch/non-free Packages
404 Not Found
Failed to fetch http://ftp.uk.debian.org/debian/dists/etch/main/binary-i386/Packages.gz 404 Not Found
Failed to fetch http://ftp.uk.debian.org/debian/dists/etch/contrib/binary-i386/Packages.gz 404 Not Found
Failed to fetch http://ftp.uk.debian.org/debian/dists/etch/non-free/binary-i386/Packages.gz 404 Not Found
Reading package lists... Done
W: Couldn't stat source package list http://ftp.uk.debian.org etch/main Packages (/var/lib/apt/lists/ftp.uk.debian.org_debian_dists_etch_main_binary-i386_Packages) - stat (2 No such file or directory)
W: Couldn't stat source package list http://ftp.uk.debian.org etch/contrib Packages (/var/lib/apt/lists/ftp.uk.debian.org_debian_dists_etch_contrib_binary-i386_Packages) - stat (2 No such file or directory)
W: Couldn't stat source package list http://ftp.uk.debian.org etch/non-free Packages (/var/lib/apt/lists/ftp.uk.debian.org_debian_dists_etch_non-free_binary-i386_Packages) - stat (2 No such file or directory)
W: You may want to run apt-get update to correct these problems
E: Some index files failed to download, they have been ignored, or old ones used instead.
My sources.list file is pretty minimal; it just contains:
###### Debian Main Repos
deb http://ftp.uk.debian.org/debian/ etch main contrib
Any clues? Have the repo locations changed? Thanks, Joel.
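If etch has been dropped from the regular mirrors (old Debian releases are eventually moved to archive.debian.org, which would explain the 404s), the fix would presumably be to point sources.list at the archive instead, something like:

###### Debian Main Repos (archived release)
deb http://archive.debian.org/debian/ etch main contrib

followed by another apt-get update. Warnings about old or expired Release files would not be surprising on a release this old.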
I have a 64-bit desktop running 32-bit Debian, with 2GB of memory.
user@box:~$ head -n 1 /proc/meminfo
MemTotal:        2030324 kB
But when I ask free to report on memory I see:
user@box:~$ free -g
             total       used       free     shared    buffers     cached
Mem:             1          1          0          0          0          1
-/+ buffers/cache:          0          1
Swap:            2          0          2
I am confused as to why free reports only 1GB total memory when the physical memory is actually 2GB. Could someone explain how to reconcile the output of free with my machine's spec?
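One possible explanation, assuming the stock procps free: -g truncates to whole gigabytes, so 2030324 kB (roughly 1.9GB) is reported as 1. Asking for megabytes should line up with /proc/meminfo much more closely:

user@box:~$ free -m    # the total column should read about 1982, i.e. 2030324 kB / 1024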
I've used HP DL machines at work. I found them to be blazingly fast, but very expensive (up to $15k).
I am curious, though: although the spec we typically used (e.g. dual AMD Opteron 2.6GHz, 8 or 16GB RAM) was good, it was not so far off the 'headline' (shop window) specs of many a desktop machine that I have used. For example, I am now using a commodity machine with 4GB RAM and a dual-core 2.8GHz Intel CPU, which costs ~$400.
However, the DL was clearly much, much faster. My reference point is compiling a similar code base, which on the DL might take a couple of seconds and on the commodity hardware about 10 seconds (assume the machines were doing nothing else, so minimal load and RAM usage).
So my question is: given similar headline specs (RAM and CPU), what is it about the DL's build and architecture that makes it so much faster than a commodity machine?
Or, phrased more simply: given a set CPU and RAM, what other server architecture and component features significantly influence performance?
Say I have the following mount points:
/dev/sda1 on /
/dev/sdb1 on /mnt/sdb1
sda is my primary hard disk drive. sdb is a second disk drive.
This might be a silly question, but does it make any difference to performance (copy, write, etc.) if I work on sdb1 through a symlinked path on sda?
So say I have the following symlink on sda:
/home/sim-to-sdb -> /mnt/sdb1
Does it make any difference to disk read/write performance if I work on
/home/sim-to-sdb
or
/mnt/sdb1
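For what it's worth, a quick hedged check that both paths land on the same filesystem (using the /home/sim-to-sdb link from above); the symlink only costs one extra path lookup when the file is opened, after which reads and writes go to sdb1 either way:

df /mnt/sdb1 /home/sim-to-sdb                      # both should report /dev/sdb1
stat -L -c '%d  %n' /mnt/sdb1 /home/sim-to-sdb     # same device number for both (-L follows the link)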
I have a very small and quite old hard disk drive, about 32GB.
Onto this disk I have copied a largish tar file, about 5GB.
When I run md5sum to generate a checksum on this file I repeatedly get different results (on the same machine and the same file). This obviously should not happen.
If I repeat the experiment with a much smaller file, the checksum is, as expected, the same each time. I can only assume that because the large file spans most of the disk, and it is an old drive, I am experiencing a lot of read errors on the hard drive, and it needs replacing? Could there be any other good reason for this? Is there something I can do to fix the problem other than buying a new disk?
Update: sha1sum also produces inconsistent results.
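A hedged set of checks that might narrow this down, assuming smartmontools is installed and the drive is /dev/sda (adjust to the actual device):

dmesg | grep -i -E 'ata|i/o error|end_request'   # kernel-logged read errors during the checksum runs
sudo smartctl -H /dev/sda                        # overall SMART health verdict
sudo smartctl -A /dev/sda                        # look at Reallocated_Sector_Ct and Current_Pending_Sector

Faulty RAM can also produce inconsistent checksums on files this size, so a pass of memtest86+ from the boot menu would help rule that out before condemning the disk.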