What is the rule of thumb for the performance characteristics and differences between 10k/15k revolutions per minute (RPM) Serial Attached SCSI (SAS) hard disc drives (HDDs) and Serial ATA (SATA) 6 Gbps solid state drives (SSDs) of the same generation?
For monitoring purposes, I'd like to find out all public IPv4 and IPv6 addresses of a mobile-warrior UNIX box.
Note that this is different from Finding the Public IP address in a shell script because of the following extra requirements:
- the mobile warrior itself probably does not have any public IPv4 addresses at all;
- it may or may not have IPv6 (but we're only interested in active ones that would be used in actual outgoing connections);
- the underlying internet connection might be load-balanced, details unknown, where a combination of protocol (UDP, TCP, ICMP) and source/destination IP address may determine which upstream is used; we need to do our best to find all such IP addresses, for a complete picture of the gateway's underlying internet connectivity, without having direct access to the gateway itself (a probing sketch follows below).
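To make this concrete, a minimal probing sketch, assuming an external IP-echo service such as icanhazip.com (any comparable service would do); the repetition is there to catch load-balanced upstreams:

# Ask an external echo service which address we appear from, over IPv4
# and IPv6 separately; repeat and collect the unique answers.
for i in 1 2 3 4 5 6 7 8 9 10; do
    curl -4 -s https://icanhazip.com/
    curl -6 -s https://icanhazip.com/
done | sort -u

Note that this only exercises TCP, so any upstream selected specifically for UDP or ICMP traffic would still go unnoticed.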
I've noticed that I'm not getting certain emails in my Gmail and Yandex.Mail when they're forwarded, from DMARC-enabled senders, via UNIX systems without SRS. (I'm not too sure whether Sender Rewriting Scheme is still the best practice, because with DMARC I think it would also have to apply to the actual From: header within the email itself.)
I can't quite figure out what's going on — emails from PayPal.com always go through, whereas emails from Microsoft.com and some others get rejected (only getting delivered locally, to systems that don't implement DMARC on the receiving side).
% echo _dmarc.{microsoft.com,paypal.com} | xargs -n1 dig -t txt | fgrep v=D
_dmarc.microsoft.com. 3302 IN TXT "v=DMARC1\; p=reject\; pct=100\; rua=mailto:[email protected]\; ruf=mailto:[email protected]\; fo=1"
_dmarc.paypal.com. 3311 IN TXT "v=DMARC1\; p=reject\; rua=mailto:[email protected]\; ruf=mailto:[email protected]"
%
When both domains have the same reject policy — and Google even specifically mentions that PayPal does have a definitive reject policy — I'm not exactly sure if there's something wrong in my own setup, or if it's the sending party that's to blame. Any ideas?
Is it just because of SPF's fail vs. softfail, or is there more to it?
% echo {microsoft.com,paypal.com} | xargs -n1 dig -t txt | fgrep v= | sed 's#[^[:space:]]*:[^[:space:]]*#:#g'
microsoft.com. 3332 IN TXT "v=spf1 : : : : : : : : : : -all"
paypal.com. 300 IN TXT "v=spf1 : : : : : : ~all"
%
Here's what Gmail reports for PayPal emails that do get delivered through forwarding:
ARC-Authentication-Results: i=1; mx.google.com;
dkim=pass [email protected] header.s=pp-epsilon1 header.b=K96c6GUZ;
spf=fail (google.com: domain of [email protected] does not designate 2001:470:7240:: as permitted sender) [email protected];
dmarc=pass (p=REJECT sp=REJECT dis=NONE) header.from=paypal.com
Return-Path: <[email protected]>
I've tried doing a couple of traceroute tests to the DNS servers of Rage4, which are anycasted from many parts of the world (supposedly, from 30 different locations).
What I've experienced is that very often the traceroute shows that the server being contacted is very far away from the querying location, even though a traceroute from another location would show a server that should have been quite close.
For example, from France, we're going to an Indian server:
traceroute to ns2.r4ns.com (176.124.113.200), 16 hops max, 60 byte packets
 1  88-190-61-1.poneytelecom.eu (88.190.61.1) [AS12322]  2.695 ms  2.876 ms  3.000 ms
 2  a9k2-1041.dc3.poneytelecom.eu (88.191.1.185) [AS12322]  1.178 ms  1.447 ms  1.516 ms
 3  ix-25-0.thar1.PVU-Paris.as6453.net (80.231.63.21) [AS6453]  0.523 ms  0.522 ms  0.516 ms
 4  if-13-5.tcore1.PVU-Paris.as6453.net (80.231.153.177) [AS6453]  11.326 ms  11.310 ms  11.313 ms
 5  if-12-2.tcore1.PYE-Paris.as6453.net (80.231.154.69) [AS6453]  11.306 ms  11.555 ms  11.290 ms
 6  if-8-1600.tcore1.WYN-Marseille.as6453.net (80.231.217.5) [AS6453]  11.405 ms  11.362 ms
    if-14-2.tcore1.WYN-Marseille.as6453.net (80.231.154.170) [AS6453]  11.350 ms
 7  if-2-2.tcore2.WYN-Marseille.as6453.net (80.231.217.2) [AS6453]  11.326 ms  11.428 ms  11.507 ms
 8  80.231.200.26 (80.231.200.26) [AS6453]  105.722 ms  105.972 ms  107.541 ms
 9  * * *
10  * * *
11  * * *
12  14.140.128.98.static-vsnl.net.in (14.140.128.98) [AS4755]  115.191 ms  115.046 ms  115.022 ms
13  ns2.r4ns.com (176.124.113.200) [as198412/AS23033/AS28513/AS61163/AS198412]  116.479 ms  114.806 ms  114.982 ms
Or from Germany, it goes to Los Angeles, California:
traceroute to ns1.r4ns.com (176.124.112.100), 16 hops max, 40 byte packets
 1  static.33.203.4.46.clients.your-server.de (46.4.203.33) [AS24940]  0.650 ms  3.633 ms  0.643 ms
 2  hos-tr4.juniper2.rz13.hetzner.de (213.239.224.97) [AS24940]  0.246 ms
    hos-tr3.juniper2.rz13.hetzner.de (213.239.224.65) [AS24940]  0.234 ms
    hos-tr2.juniper1.rz13.hetzner.de (213.239.224.33) [AS24940]  0.236 ms
 3  core21.hetzner.de (213.239.245.81) [AS24940]  1.203 ms
    core22.hetzner.de (213.239.245.121) [AS24940]  0.233 ms  0.236 ms
 4  core22.hetzner.de (213.239.245.162) [AS24940]  0.245 ms  0.236 ms
    core1.hetzner.de (213.239.245.177) [AS24940]  4.808 ms
 5  juniper1.ffm.hetzner.de (213.239.245.5) [AS24940]  4.837 ms
    core1.hetzner.de (213.239.245.177) [AS24940]  4.843 ms  4.825 ms
 6  dec-ix-c01.wvfiber.net (80.81.192.220)  6.174 ms  6.230 ms
    juniper1.ffm.hetzner.de (213.239.245.5) [AS24940]  4.923 ms
 7  dec-ix-c01.wvfiber.net (80.81.192.220)  6.223 ms
    ams-ten1-4-fra-ten2-1.bboi.net (66.216.50.57) [AS19151, AS19151]  29.457 ms  24.788 ms
 8  ams-ten1-4-fra-ten2-1.bboi.net (66.216.50.57) [AS19151, AS19151]  22.413 ms  19.756 ms
    ny60-vl2-ams-vl2.bboi.net (66.216.48.217) [AS19151]  109.138 ms
 9  66.216.1.150 (66.216.1.150) [AS19151]  127.527 ms
    ny60-vl2-ams-vl2.bboi.net (66.216.48.217) [AS19151]  109.68 ms  108.802 ms
10  lv-ten1-3-chi-ten1-6.bboi.net (64.127.128.130) [AS19151]  171.983 ms  171.777 ms  *
11  lv-ten1-3-chi-ten1-6.bboi.net (64.127.128.130) [AS19151]  171.794 ms
    la-ten1-3-lv-ten1-3.bboi.net (66.186.192.21) [AS19151]  178.527 ms  177.967 ms
12  la-ten1-3-lv-ten1-3.bboi.net (66.186.192.21) [AS19151]  178.478 ms  178.82 ms
    66.186.197.174 (66.186.197.174) [AS19151]  178.281 ms
13  colo-lax6 (96.44.180.102) [AS29761]  178.17 ms
    66.186.197.174 (66.186.197.174) [AS19151]  177.716 ms  177.935 ms
14  lax-qn-gw.as36236.net (72.11.150.122) [AS29761]  177.709 ms  177.764 ms
    colo-lax6 (96.44.180.102) [AS29761]  184.72 ms
15  lax-qn-gw.as36236.net (72.11.150.122) [AS29761]  178.34 ms
    208.111.40.5 (208.111.40.5) [AS36236]  175.688 ms  175.351 ms
16  208.111.40.5 (208.111.40.5) [AS36236]  175.641 ms  175.919 ms
    ns1.r4ns.com (176.124.112.100) [AS61163 198412]  176.590 ms
Whereas from San Jose, California, it goes to Romania:
traceroute to ns1.r4ns.com (176.124.112.100), 16 hops max, 60 byte packets
 1  23.92.24.3 (23.92.24.3) [*]  0.574 ms  0.704 ms  0.846 ms
 2  23.92.24.2 (23.92.24.2) [*]  0.464 ms  0.589 ms  0.749 ms
 3  10gigabitethernet7-6.core3.fmt2.he.net (65.49.10.217) [AS6939]  0.256 ms  0.263 ms  0.245 ms
 4  10gigabitethernet4-3.core1.dal1.he.net (72.52.92.154) [AS6939]  45.442 ms  45.436 ms  45.483 ms
 5  10gigabitethernet5-4.core1.atl1.he.net (184.105.213.114) [AS6939]  65.805 ms  65.797 ms  65.757 ms
 6  10gigabitethernet16-5.core1.ash1.he.net (184.105.213.109) [AS6939]  74.565 ms  74.552 ms  74.537 ms
 7  10gigabitethernet9-2.core1.par2.he.net (184.105.213.94) [AS6939]  163.530 ms  163.534 ms  163.515 ms
 8  10gigabitethernet15-1.core1.fra1.he.net (72.52.92.25) [AS6939]  161.999 ms  163.440 ms  161.952 ms
 9  * *  10Gbps.de-cix.adnettelecom.ro (80.81.194.100) [AS6695]  225.819 ms
10  * * *
11  cr.adnettelecom.ro (77.232.218.33) [AS5541]  193.016 ms  195.493 ms  250.588 ms
12  10gbps.cr5.adnettelecom.ro (77.232.218.102) [AS5541]  249.048 ms  191.119 ms  192.532 ms
13  ns1.r4ns.com (176.124.112.100) [as198412/AS28513/AS33597/AS61163/AS198412]  191.507 ms  246.698 ms  192.140 ms
Why is the routing so suboptimal? Is it because they don't run their own backbone? Or what is the explanation for such poor routing?
There's a rumour that public domain name resolvers, like Google Public DNS, are still supposed to work with GeoDNS, because there's some field in the requests that lets the resolver specify on behalf of which IP address it is doing a resolution, so that the authoritative servers can give the same resolver different answers for different final clients.
What's this whole thing called as far as RFCs go, and how does one mimic such resolutions for testing purposes, e.g. with dig(1)? Failing that, what other tool is available to accomplish the task?
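For reference, newer versions of dig can attach such a client subnet to a query via the +subnet option; a sketch, where the nameserver and the prefix are placeholders:

% dig @ns1.example.net example.net A +subnet=192.0.2.0/24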
On an HP DL120 G7 with an HP P410 controller, following a suggestion on a blog, I've installed the latest version of hpacucli from http://downloads.linux.hp.com/SDR/downloads/ProLiantSupportPack/Debian/pool/non-free/ — hpacucli_8.70-8.0.2-2_amd64.deb — but it doesn't seem to recognise my controller. Why?
wget http://downloads.linux.hp.com/SDR/downloads/ProLiantSupportPack/Debian/pool/non-free/hpacucli_8.70-8.0.2-2_amd64.deb
dpkg -i hpacucli_8.70-8.0.2-2_amd64.deb
apt-get install lib32gcc1 lib32stdc++6 libc6-i386
dpkg -i hpacucli_8.70-8.0.2-2_amd64.deb
…
root@sd-49XXX:~# hpacucli ctrl all show config
Error: No controllers detected.
root@sd-49XXX:~# lsscsi
[0:0:0:0] storage HP P410 5.14 -
[0:0:0:1] disk HP LOGICAL VOLUME 5.14 /dev/sda
root@sd-49XXX:~#
Is the latest version from the official HP web-site not actually the latest?
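As a hedged sanity check, assuming pciutils is installed, one can at least confirm the controller is visible on the PCI bus:

lspci | grep -i 'raid\|smart array'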
If one happens to have some server-grade hardware at one's disposal, is it ever advisable to run ZFS on top of hardware-based RAID1 or some such? Should one turn off the hardware-based RAID, and run ZFS on a mirror or a raidz zpool instead?
With the hardware RAID functionality turned off, are hardware-RAID-based SATA2 and SAS controllers more or less likely to hide read and write errors than non-hardware-RAID controllers would?
In terms of non-customisable servers, if one has a situation where a hardware RAID controller is effectively cost-neutral (or even lowers the cost of the pre-built server offering, since its presence improves the likelihood of the hosting company providing complementary IPMI access), should it be avoided at all? Or should it even be sought after?
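For concreteness, a minimal sketch of the alternative being asked about — giving ZFS the whole disks directly, assuming the controller can expose them as JBOD (device and pool names here are hypothetical):

zpool create tank mirror da0 da1    # ZFS-managed mirror on two raw disks
zpool status tank                   # ZFS, not the controller, now reports errors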
It is written everywhere that ZFS is helpful even if you only have one physical device, because it will tell you about data corruption due to bit decay and such.
However, can it actually address such corruption?
In other words, are there any notable benefits in running ZFS as a filesystem on a single physical device?
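The only single-device knob I'm aware of (an assumption worth checking) is the copies property, which stores multiple copies of each block and might let ZFS repair, not merely detect, some corruption; a sketch with a hypothetical disk name:

zpool create tank da0         # single-device pool
zfs set copies=2 tank         # keep two copies of every block written from now on

As I understand it, copies only applies to data written after it is set, and it halves the usable capacity.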
It would seem like with 3 devices, it's possible to configure a ZFS pool in either mirror or raidz2 mode.
What's the difference in performance and reliability?
(In regards to reliability, I'm specifically interested in the topic of partial data loss.)
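For concreteness, the two layouts in question, with hypothetical device names:

zpool create tank mirror da0 da1 da2    # 3-way mirror: any two devices may fail
zpool create tank raidz2 da0 da1 da2    # double parity: also survives any two failures

With three devices both layouts tolerate any two failures and yield roughly one device's worth of usable space, so the interesting differences presumably lie in read throughput and resilvering behaviour.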
I have a box with Gigabit Ethernet, and I'm unable to get past about 200Mbps or 250Mbps in my download tests.
I do the tests like this:
% wget -6 -O /dev/null --progress=dot:mega http://proof.ovh.ca/files/1Gb.dat
--2013-07-25 12:32:08-- http://proof.ovh.ca/files/1Gb.dat
Resolving proof.ovh.ca (proof.ovh.ca)... 2607:5300:60:273a::1
Connecting to proof.ovh.ca (proof.ovh.ca)|2607:5300:60:273a::1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 125000000 (119M) [application/x-ns-proxy-autoconfig]
Saving to: ‘/dev/null’
0K ........ ........ ........ ........ ........ ........ 2% 5.63M 21s
3072K ........ ........ ........ ........ ........ ........ 5% 13.4M 14s
6144K ........ ........ ........ ........ ........ ........ 7% 15.8M 12s
9216K ........ ........ ........ ........ ........ ........ 10% 19.7M 10s
12288K ........ ........ ........ ........ ........ ........ 12% 18.1M 9s
15360K ........ ........ ........ ........ ........ ........ 15% 19.4M 8s
18432K ........ ........ ........ ........ ........ ........ 17% 20.1M 7s
With the constraint that I only control one server which I want to test, and not the sites against which I want to perform the tests, how do I do a fair test?
Basically, is there some tool that would let me download a 100MB file in several simultaneous TCP streams over HTTP?
Or download several files at once in one go?
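A hedged sketch of one way to do it with plain curl, splitting the 119 MB test file above (125000000 bytes) into four byte ranges fetched over parallel TCP connections; it assumes the server honours Range requests:

# Fetch the file in four parallel TCP streams via HTTP Range requests.
url=http://proof.ovh.ca/files/1Gb.dat
curl -s -r 0-31249999         -o /dev/null "$url" &
curl -s -r 31250000-62499999  -o /dev/null "$url" &
curl -s -r 62500000-93749999  -o /dev/null "$url" &
curl -s -r 93750000-124999999 -o /dev/null "$url" &
wait    # returns once all four streams have finished

Download accelerators such as aria2c can reportedly do the same in a single command via their max-connections-per-server option.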
What is the rule of thumb for the performance characteristics and differences between 7200 rpm (SATA/SAS) and 15000 rpm (SAS) hard disc drives of the same generation?
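One number that follows directly from the spindle speed is average rotational latency — the time for half a revolution — which can be worked out with bc(1):

echo '60 / 7200 / 2 * 1000' | bc -l     # ≈ 4.17 ms at 7200 rpm
echo '60 / 15000 / 2 * 1000' | bc -l    # = 2.00 ms at 15000 rpm

Seek times and sustained transfer rates, by contrast, vary by model and generation and don't reduce to a formula the same way.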
I have an OpenBSD server, and I would like to configure a corporate/personal domain name to be Jabber / XMPP enabled, and to automatically proxy (forward/alias) messages between some set of local accounts and a given gmail.com Google Talk account.
Requirements, all of which must be satisfied:
- no new accounts at Google; it has to work with an existing gmail.com Google Talk account
- no new clients; it has to be able to use the corporate domain through existing Gmail interfaces
- no local accounts to log into; an alias-only solution is needed
In email terms, I'm looking for a virtusertable (or some such) with Sender Rewriting Scheme.
In other words, it sounds like I need to set up some kind of XMPP-to-XMPP transport / gateway on my server. Surely such a thing must exist, as there are many other kinds of Jabber transports / gateways available between XMPP and non-XMPP networks; I see my case as similar to those.
(Google Apps does not fit, because it requires new and separate accounts.)
How do I get some stats on unique IPv4 and IPv6 visitors by looking at an nginx access.log with the UNIX CLI?
I use the standard combined pre-defined format for access_log.
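A sketch of the kind of pipeline this calls for, assuming the client address is the first field of the combined log format:

awk '{print $1}' access.log | sort -u | wc -l        # unique client addresses
awk '{print $1}' access.log | sort -u | grep -c :    # unique IPv6 (addresses contain a colon)
awk '{print $1}' access.log | sort -u | grep -vc :   # unique IPv4 (dotted quads)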
If none of the servers for the whole zone can be contacted, how long will such a fact be cached for?
What happens with glue record inconsistency, where every single server that is listed as an NS server at the parent zone for the child zone always answers authoritatively for the child zone, but is not necessarily listed as an NS server within the child zone itself?
For example, suppose b.dns.ripn.net. of the parent zone su. says that my corporate.su. is controlled by the server d.ns.corporate.su. with the IP-address 2001:db8::d, but when connecting to 2001:db8::d, some of the following things happen:
- the corporate.su. zone itself is present as authoritative on the server 2001:db8::d, but there is no mention of any NS server that could resolve to the IP-address 2001:db8::d;
- a record for d.ns.corporate.su. is missing from the list of NS servers for corporate.su., but another NS record, d.ns.example.net., is present that nonetheless still resolves to 2001:db8::d;
- what if d.ns.corporate.su., from the parent zone, still resolves in the child zone, but to another IP address?
- what if d.ns.corporate.su. does not even resolve on my authoritative servers, contrary to the glue at the parent zone?
What if I have several such NS records in the parent zone for my domain, all of whose servers authoritatively answer queries regarding my zone, but all or some of which have some kind of mismatch between the names of their records within the actual child zone and the parent zone?
I've tried using dig +nssearch and dig +trace, but it seems like dig suffers from various pollution and silent-healing issues, and doesn't make it at all obvious what actually happens behind the scenes.
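For comparison, one can query the parent and the child directly with recursion disabled — a sketch using the example names above:

% dig @b.dns.ripn.net. corporate.su. NS +norecurse
% dig @2001:db8::d corporate.su. NS +norecurse
% dig @2001:db8::d d.ns.corporate.su. AAAA +norecurse

Comparing the NS sets and glue returned by each side directly sidesteps whatever a recursive resolver might silently repair.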
A couple of years ago, the Google Webmaster Tools site-ownership verification process started to require that verification files have certain content, instead of simply being present and returning 200 OK (while other, non-existent nearby files return 404 Not Found, etc.).
With the new requirement, how do I serve the Google Webmaster Tools site verification file with nginx.conf alone?
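A minimal sketch of what I mean, with a made-up token, assuming the required body is the google-site-verification line shown in Google's instructions:

# Serve the verification file from configuration alone (token is hypothetical).
location = /google1234567890abcdef.html {
    default_type text/html;
    return 200 'google-site-verification: google1234567890abcdef.html';
}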
I'm running http_load on OpenBSD 5.2 to test how good my nginx setup is, and I've noticed that cold runs are much faster than warm runs, with performance degrading dramatically on every subsequent run (e.g. from 3735 replies per second on a cold run, down to 2288, 1804, and 1553 on the runs that follow).
I've noticed with netstat -n | wc -l that there are several thousand connections left over after running http_load, most of which show TIME_WAIT in the (state) column.
It might seem like set timeout tcp.finwait 8 in pf.conf would reduce the relevant timeout from 45s to 8s, but it doesn't seem to affect these TIME_WAIT connections at all: they still stay in netstat -n for exactly 60s from the time they're created through http_load / nginx.
Is there a way to expire these TIME_WAIT connections much sooner than 60s?
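For measuring the drain between runs, a trivial loop that counts TIME_WAIT entries every few seconds:

while :; do netstat -n | grep -c TIME_WAIT; sleep 5; done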
On OpenBSD 5.2, the default installation of tomcat-7.0.29 seems to be logging all errors into both catalina.out and catalina.YYYY-MM-DD.log.
Cns# ll /var/tomcat/logs/catalina.*
-rw-r--r-- 1 _tomcat _tomcat 3067 Jan 16 20:47 /var/tomcat/logs/catalina.2013-01-16.log
-rw-r--r-- 1 _tomcat _tomcat 1313285 Jan 17 21:47 /var/tomcat/logs/catalina.2013-01-17.log
-rw-r--r-- 1 _tomcat _tomcat 19668 Jan 18 17:33 /var/tomcat/logs/catalina.2013-01-18.log
-rw-r--r-- 1 _tomcat _tomcat 2479 Jan 23 15:25 /var/tomcat/logs/catalina.2013-01-23.log
-rw-r--r-- 1 _tomcat _tomcat 1580 Jan 26 22:58 /var/tomcat/logs/catalina.2013-01-26.log
-rw-r--r-- 1 _tomcat _tomcat 48165 Jan 27 19:30 /var/tomcat/logs/catalina.2013-01-27.log
-rw-r--r-- 1 _tomcat _tomcat 34526 Jan 28 16:41 /var/tomcat/logs/catalina.2013-01-28.log
-rw-r--r-- 1 _tomcat _tomcat 141985 Jan 29 23:56 /var/tomcat/logs/catalina.2013-01-29.log
-rw-r--r-- 1 _tomcat _tomcat 123254 Jan 30 23:25 /var/tomcat/logs/catalina.2013-01-30.log
-rw-r--r-- 1 _tomcat _tomcat 145209 Jan 31 22:30 /var/tomcat/logs/catalina.2013-01-31.log
-rw-r--r-- 1 _tomcat _tomcat 2615 Feb 1 09:01 /var/tomcat/logs/catalina.2013-02-01.log
-rw-r--r-- 1 _tomcat _tomcat 10068 Feb 2 19:18 /var/tomcat/logs/catalina.2013-02-02.log
-rw-r--r-- 1 _tomcat _tomcat 50541 Feb 3 23:49 /var/tomcat/logs/catalina.2013-02-03.log
-rw-r--r-- 1 _tomcat _tomcat 17519 Feb 4 21:29 /var/tomcat/logs/catalina.2013-02-04.log
-rw-r--r-- 1 _tomcat _tomcat 1158 Feb 5 22:18 /var/tomcat/logs/catalina.2013-02-05.log
-rw-r--r-- 1 _tomcat _tomcat 179466 Feb 6 23:51 /var/tomcat/logs/catalina.2013-02-06.log
-rw-r--r-- 1 _tomcat _tomcat 14585534 Feb 7 14:15 /var/tomcat/logs/catalina.2013-02-07.log
-rw-r--r-- 1 _tomcat _tomcat 16680119 Feb 7 14:15 /var/tomcat/logs/catalina.out
(Note how the total file size of the catalina.YYYY-MM-DD.log files is about the same as that of catalina.out, and the logs do seem duplicated.)
Is there a way to make it log only into catalina.YYYY-MM-DD.log, and not into catalina.out?
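If I understand Tomcat's JULI setup correctly (an assumption on my part), the duplication comes from the root logger writing both to the dated FileHandler and to the ConsoleHandler, whose output ends up in catalina.out; something like the following change to conf/logging.properties would then be the candidate fix:

# conf/logging.properties (sketch): drop the ConsoleHandler from the root
# logger so output goes only to the dated FileHandler.
# Before:
#   .handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
# After:
.handlers = 1catalina.org.apache.juli.FileHandler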
I'm using NSD3, and I'm unsuccessful in trying to have capital letters in my domain names.
How is it possible to have uppercase letters in your DNS?
In various OSS documentation, it's very common to see Berkeley.EDU capitalised, and indeed their DNS is still capitalised to this day:
% traceroute www.berkeley.edu
…
15 t1-3.inr-201-sut.Berkeley.EDU (128.32.0.65) 168.794 ms 169.906 ms 168.714 ms
16 t5-5.inr-210-srb.Berkeley.EDU (128.32.255.37) 168.850 ms 168.912 ms t5-4.inr-210-srb.Berkeley.EDU (128.32.255.125) 168.886 ms
And in forward DNS, they, too, have various domains capitalised:
% dig @ns.cs.berkeley.edu. cs.berkeley.edu.
…
;; AUTHORITY SECTION:
cs.berkeley.edu. 86400 IN NS cgl.UCSF.edu.
cs.berkeley.edu. 86400 IN NS adns1.berkeley.edu.
cs.berkeley.edu. 86400 IN NS ns.cs.berkeley.edu.
cs.berkeley.edu. 86400 IN NS vangogh.cs.berkeley.edu.
cs.berkeley.edu. 86400 IN NS adns2.berkeley.edu.
cs.berkeley.edu. 86400 IN NS ns.EECS.berkeley.edu.
…
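For illustration, the sort of zone-file entries one might write, with hypothetical names, hoping the capitals would be preserved in answers:

; zone snippet (hypothetical names)
WWW.Example.SU.     3600 IN A    192.0.2.80
ns.EECS.Example.SU. 3600 IN AAAA 2001:db8::53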
There is documentation around on how to make /usr/local/bin/procmail work with delivering to a maildir.
However, it is my understanding that it is also possible to avoid procmail altogether, and have sendmail's local_procmail FEATURE call /usr/local/libexec/dovecot/deliver / dovecot-lda directly, instead of first calling procmail.
In such a case, how would dovecot-lda know whether it needs to deliver to mbox or maildir?
http://wiki2.dovecot.org/LDA/Sendmail
FEATURE(`local_procmail', `/usr/local/libexec/dovecot/dovecot-lda',`/usr/local/libexec/dovecot/dovecot-lda -d $u')
MODIFY_MAILER_FLAGS(`LOCAL', `-f')
MAILER(procmail)
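My working assumption — which may be exactly what needs confirming — is that dovecot-lda decides from Dovecot's own configuration rather than from anything sendmail passes, i.e. the mail_location setting:

# dovecot.conf sketch: dovecot-lda delivers to wherever mail_location points
mail_location = maildir:~/Maildir
# or, for mbox:
#mail_location = mbox:~/mail:INBOX=/var/mail/%u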