I'm trying to do some planning and have been experimenting with EBS snapshots for data backups. I want to see how long a snapshot of a 50 GB volume took, but I only see the start time, not the completion time, under the "Description" tab in the AWS console. Is there a way to find this information out after the fact?
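For reference, the most I can see from the command line (using the aws CLI; the snapshot ID below is hypothetical) is the same start time plus a state and progress percentage:

aws ec2 describe-snapshots --snapshot-ids snap-0123456789abcdef0 \
    --query 'Snapshots[].{Started:StartTime,State:State,Progress:Progress}'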
I have a client who is 'upgrading' their OS by tearing down the entire instance, creating a new one based on the updated AMI and recreating the setup, including the EBS volume. When they delete the EBS volume, all the snapshots are also removed, right? So they lose any backups from the previous AMI instance?
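For what it's worth, this is how I was planning to check afterwards whether the snapshots survive (the volume ID is hypothetical):

# list my snapshots that were taken from the deleted volume
aws ec2 describe-snapshots --owner-ids self \
    --filters Name=volume-id,Values=vol-0123456789abcdef0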
Currently every domain name resolves to my primary server, primary.example.com. So for example, if I ping randomdomain123.blah I get:

PING primary.example.com (1.2.3.4) 56(84) bytes of data.

but am expecting a 'host not found' error.

Initially I thought it was because I had search example.com in my /etc/resolv.conf. However, after removing that, pinging randomdomain123.blah still resolves to my primary domain. Restarting the server had no effect either.

I have nothing specified in /etc/hosts.

Running hostname from another server in the cluster gives secondary.example.com.
I use Route 53 as the DNS provider, and relevant DNS seems to be:
example.com. A 1.2.3.4
primary.example.com. A 1.2.3.4
*.primary.example.com. CNAME primary.example.com
*.example.com. CNAME www.example.com
www.example.com. CNAME primary.example.com
So is this a local networking misconfiguration or some DNS problem? (or both?)
Update: The reason I want/need a wildcard is that I run a webapp off this domain, so customer1.example.com etc. need to resolve to this machine, and it needs to be automatic - I wanted to avoid having to change the DNS every time a new customer signs up.
Update 2: My /etc/resolv.conf is currently as follows (since I commented out the search line):
### Hetzner Online AG installimage
# nameserver config
nameserver 213.133.99.99
nameserver 213.133.100.100
nameserver 213.133.98.98
nameserver 2a01:4f8:0:a102::add:9999
nameserver 2a01:4f8:0:a0a1::add:1010
nameserver 2a01:4f8:0:a111::add:9898
# search example.com
Update 3: Running dig randomdomain123.blah +trace gives:
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.30.rc1.el6 <<>> randomdomain123.blah +trace
;; global options: +cmd
;; Received 12 bytes from 213.133.99.99#53(213.133.99.99) in 0 ms
Update 4: I can confirm that ping randomdomain123.blah. with the final dot gives:

ping: unknown host randomdomain123.blah.

So does that mean that from a Java app on this machine, I need to append dots and use a URL like http://randomdomain123.blah./somepage.html to ever generate a HostNotFoundException?
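For context, the minimal test I'd run from Java (a sketch using plain InetAddress rather than my actual app code) looks like this:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class DotTest {
    public static void main(String[] args) {
        // compare resolution with and without the trailing dot
        for (String host : new String[] {"randomdomain123.blah", "randomdomain123.blah."}) {
            try {
                InetAddress addr = InetAddress.getByName(host); // uses the system resolver, like ping
                System.out.println(host + " -> " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(host + " -> unknown host");
            }
        }
    }
}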
I have several systems running CentOS 6 with rkhunter installed. I have a daily cron running rkhunter and reporting back via email.
I very often get reports like:
---------------------- Start Rootkit Hunter Scan ----------------------
Warning: The file properties have changed:
File: /sbin/fsck
Current inode: 6029384 Stored inode: 6029326
Warning: The file properties have changed:
File: /sbin/ip
Current inode: 6029506 Stored inode: 6029343
Warning: The file properties have changed:
File: /sbin/nologin
Current inode: 6029443 Stored inode: 6029531
Warning: The file properties have changed:
File: /bin/dmesg
Current inode: 13369362 Stored inode: 13369366
From what I understand, rkhunter will usually report a changed hash and/or modification date on the scanned files too, so this leads me to think that there is no real change.

My question: is there some other activity on the machine that could make the inode change (I'm running ext4), or is this really yum making regular (roughly once a week) changes to these files as part of normal security updates?
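For reference, this is roughly how I've been trying to correlate the warnings with package updates (the util-linux-ng package name is an assumption - rpm -qf tells you the real owner):

# which package owns the binary?
rpm -qf /sbin/fsck
# when was that package last installed/updated?
rpm -q --last util-linux-ng
# did yum touch it recently?
grep -i util-linux /var/log/yum.log
# verify the installed file against the RPM database (size, md5, mtime, ...)
rpm -Vf /sbin/fsck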
I run a number of Tomcat instances and occasionally some just stop responding to requests - every connection times out.
I'm using AJP with mod_proxy in Apache 2.2.
I get a timeout via Apache/AJP through Tomcat's AJP connector, but also via the direct HTTP connector on 8080.
I have /server-status configured within Apache and it shows 16 requests currently being processed in the W state, 4 idle workers and 200+ open slots with no connection. My AJP connector is configured as:
<Connector port="8009" address="localhost"
maxThreads="250" minSpareThreads="5" maxSpareThreads="15"
connectionTimeout="1000"
packetSize="16384"
maxHttpHeaderSize="16384"
enableLookups="false" redirectPort="8443"
emptySessionPath="true" URIEncoding="UTF-8" protocol="AJP/1.3"/>
so it should have plenty of threads to accept new connections.
Using top I see CPU and wait both under 1%, and the java process is using 80% of memory. There is 60M free memory and 200M free swap.
I set up a special threads.jsp page using

SystemThreadList stl = new SystemThreadList();
Thread[] allThreads = stl.getAllThreads();

which gives useful information, but in this state it doesn't load either.
In catalina.log I see:
Mar 07, 2014 11:53:09 AM org.apache.jk.common.ChannelSocket processConnection
WARNING: processCallbacks status 2
and occasional activity from other web requests, but not mine.
Is there a way from the command line, or using a profiler to get a list of threads and stack traces to find out what is getting stuck?
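What I had in mind from the command line (assuming a standard JDK and that pgrep finds the Tomcat JVM) is something like:

# find the Tomcat JVM
TOMCAT_PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
# dump all thread stack traces with the JDK tool...
jstack -l "$TOMCAT_PID" > /tmp/tomcat-threads.txt
# ...or ask the JVM itself; this dump lands in catalina.out
kill -3 "$TOMCAT_PID"

but I'm hoping there is something that works even when the JVM is in this wedged state.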
My server responds with Server: Apache/2.2.15 (CentOS) to all requests. I guess this gives away my server architecture, making hack attempts easier.

Is this header ever useful to a web browser? Should I keep it on?
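If it matters, the change I'm considering is just the standard Apache directives in httpd.conf (assuming I've understood them correctly):

# send only "Server: Apache", without version or OS details
ServerTokens Prod
# don't append version info to server-generated pages (error documents, listings)
ServerSignature Off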
I've just gone through my servers and installed yum-cron (and then enabled it with chkconfig yum-cron on, since that doesn't seem to happen automatically).
Now I realise that I'm running a MongoDB cluster and that automatically upgrading the mongo-server packages could break and/or corrupt data.
I have considered adding exclude=mongo* to my yum.conf file to skip all mongo upgrades, but I would love to still be able to run yum upgrade manually and get all packages updated.
Is there a neat way of achieving this?
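To make it concrete, this is the kind of setup I have in mind - the exclude line for the nightly run, plus (if I'm reading the man page right) yum's --disableexcludes switch for the manual case:

# /etc/yum.conf - skip mongo packages for the automated yum-cron run
exclude=mongo*

# manual run that ignores the [main] excludes and updates everything, mongo included
yum --disableexcludes=main upgrade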
I am setting up an arbiter on the same machine as a config server, but am having some trouble with the default 10gen /etc/init.d/mongod file.

I tried creating an additional /etc/init.d/mongod-arb script for the arbiter pointing to a new .conf file, but it seems to ignore the pidfilepath in the conf file and I can only get one mongod to run at any one time...
Are there any best practices for such a configuration?
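Roughly, the second config file I pointed the new init script at looks like this (the port and paths are my own choices, so treat it as a sketch):

# /etc/mongod-arb.conf - second mongod instance acting as the arbiter
port = 27018
dbpath = /var/lib/mongo-arb
logpath = /var/log/mongo/mongod-arb.log
pidfilepath = /var/run/mongodb/mongod-arb.pid
fork = true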
I have my postfix main.cf configured with a number of blacklists:
smtpd_recipient_restrictions =
permit_mynetworks,
permit_sasl_authenticated,
reject_non_fqdn_sender,
reject_non_fqdn_recipient,
reject_unknown_sender_domain,
reject_unknown_recipient_domain,
reject_unauth_pipelining,
reject_invalid_hostname,
reject_non_fqdn_hostname,
reject_rbl_client opm.blitzed.org,
reject_rbl_client zombie.dnsbl.sorbs.net,
reject_rbl_client cbl.abuseat.org,
reject_rbl_client multi.uribl.com,
reject_rbl_client dsn.rfc-ignorant.org,
reject_rbl_client dul.dnsbl.sorbs.net,
reject_rbl_client sbl-xbl.spamhaus.org,
reject_rbl_client bl.spamcop.net,
reject_rbl_client dnsbl.sorbs.net,
reject_rbl_client ix.dnsbl.manitu.net,
reject_rbl_client combined.rbl.msrbl.net
An incoming mail was just rejected because it appeared on one of these lists (sorbs.net) but not on the others. Is it possible to configure Postfix to only reject if 2 or more lists contain that IP address? I'm hoping this will reduce false positives...
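The closest thing I've found in the documentation is postscreen's weighted DNSBL scoring (Postfix 2.8+), roughly as sketched below, but I'm not sure whether that is the intended way to do this or whether it can be done within smtpd_recipient_restrictions itself:

# main.cf sketch: each list scores 1, reject only when the combined score reaches 2
postscreen_dnsbl_sites =
    sbl-xbl.spamhaus.org*1,
    bl.spamcop.net*1,
    dnsbl.sorbs.net*1
postscreen_dnsbl_threshold = 2
postscreen_dnsbl_action = enforce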
Are there any known differences between the different US East availability zones? I've noticed 1e is new on the list. Does that mean that the underlying hardware is newer and the EC2 instances potentially have better performance?
It seems that I lock myself into a particular AZ when creating EBS volumes so I wanted to know if there's any "inside knowledge" that might help my decision.
I am running Apache 2.2 with Tomcat 6 and have several layers of URL rewriting going on in both Apache with RewriteRule and in Tomcat. I want to pass through the original REQUEST_URI that Apache sees so that I can log it properly for "page not found" errors etc.
In httpd.conf I have a line:
SetEnv ORIG_URL %{REQUEST_URI}
and in the mod_jk.conf, I have:
JkEnvVar ORIG_URL
which I thought should make the value available via request.getAttribute("ORIG_URL") in Servlets.

However, all I see is "%{REQUEST_URI}", so I assume that SetEnv doesn't interpret the %{...} syntax. What is the right way to get the URL the user requested through to Tomcat?
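The alternative I'm considering (assuming mod_rewrite is allowed to set environment variables the way I think it is) would be something like:

# httpd.conf - copy the original request URI into an environment variable via mod_rewrite
RewriteEngine On
RewriteRule ^ - [E=ORIG_URL:%{REQUEST_URI}]

# mod_jk.conf - expose that variable to Tomcat as a request attribute
JkEnvVar ORIG_URL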
There is a note at https://www.varnish-cache.org/docs/3.0/reference/vcl.html that says
bereq.first_byte_timeout
The time in seconds to wait for the first byte from the backend. Not available in pipe mode.
Does this mean that the first_byte_timeout is ignored for all piped requests to the backend?
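For context, the backend in question is declared roughly like this (host, port and timeout value are mine, not production settings):

backend pdfserver {
    .host = "127.0.0.1";
    .port = "8080";
    # wait up to 5 minutes for the backend to send its first byte
    .first_byte_timeout = 300s;
}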
Our site has a number of large PDF and MP3 files which we would like Varnish to cache as static files. Currently we don't do much special - we simply remove the cookies in vcl_recv and set beresp.ttl = 100w; in vcl_fetch.
The problem seems to be when one of these files is requested (maybe by older browsers) and it's not already in the Varnish cache. There is a delay while Varnish downloads the file from the backend. My understanding is that it doesn't start delivering to the client until the data is fully loaded. This may take 20 seconds or so, and sometimes Adobe Acrobat or the MP3 plugin gets confused.

Is there a way to both pass the content directly to the client while it downloads and save it in the cache for the next matching client request?
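Is beresp.do_stream in 3.0 the right knob for this? What I was imagining is something along these lines (a sketch, not what we currently run):

sub vcl_fetch {
    if (req.url ~ "\.(pdf|mp3)$") {
        # start sending to the client while the object is still being fetched
        set beresp.do_stream = true;
        set beresp.ttl = 100w;
    }
}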
We have Varnish 3.0.2 running on Amazon's Linux and it works great. We have a ttl of 48 hours for most content pages and much longer for images, PDFs etc.
This weekend we've taken the backend down for some maintenance, so earlier in the week I upped the TTL to 5 days. I had assumed that anything in cache would continue to be served for up to 5 days, but much to our disappointment we checked varnishstat this morning and the cache was almost completely empty, and Varnish was serving "page not found" messages.

I know that this is not what Varnish is designed to do, but why would it reset its cache when the backend is down? And how can I prevent it next time?
Update 2012-06-11: After looking in /var/log/messages I see, every 3 hours or so:
Jun 9 03:56:31 idea-varnish varnishd[1128]: Manager got SIGINT
Jun 9 03:56:33 idea-varnish varnishd[6708]: Platform: Linux,3.2.18-1.26.6.amzn1.x86_64,x86_64,-smalloc,-smalloc,-hcritbit
Jun 9 03:56:33 idea-varnish varnishd[6708]: child (6709) Started
Jun 9 03:56:33 idea-varnish varnishd[6708]: Child (6709) said Child starts
I guess this is the server crashing and wiping all the objects in memory. I have only just now installed the -debuginfo rpm, but I'm not sure that will actually show anything more.

I suppose we could have switched back to disk-based storage for the scheduled downtime? Or would a crash like this wipe that anyway?
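Concretely, by disk-based storage I mean starting varnishd with file storage instead of malloc, something like this (path and size are just placeholders):

varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl \
         -s file,/var/lib/varnish/varnish_storage.bin,10G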
I'm running Varnish 3.x on a RHEL5 server. After starting Varnish, ps ax | grep varnish gives:
[root@ip-... ec2-user]# ps ax |grep varnish
2747 ? Ss 0:00 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -T localhost:6082 -f /etc/varnish/idea-int.vcl -u varnish -g varnish
2748 ? Sl 0:00 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -T localhost:6082 -f /etc/varnish/idea-int.vcl -u varnish -g varnish
And /var/run/varnish.pid shows 2747.

Is this normal?
We run Varnish with several different backends. I am currently trying to debug some behaviour specific to one of the backends, but can't see how to filter the logs on that. Is there a command-line switch?
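What I was hoping for is something like the sketch below with varnishlog (the backend name is hypothetical, and I'm not certain BackendOpen is the right tag to match on):

# show only backend-side records, limited to transactions involving "api_backend"
varnishlog -b -m "BackendOpen:api_backend"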
I have a small cluster of servers balancing a Java web app. Currently I have 3 memcached servers caching data, and all the web apps share all 3 memcached instances.

I often get strange slowdowns and timeouts to some of the memcacheds, and I'm wondering if there is a good way of analyzing the performance.

I am also wondering whether my iptables rules (or some other system limitation) are blocking/slowing connections. I am considering reconfiguring the web apps so that each only queries the memcached process on its own localhost.
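So far the only checks I know of are fairly basic ones like the following (hostnames are placeholders) - is there anything more systematic?

# raw server-side counters (hit rate, evictions, connection counts)
printf "stats\nquit\n" | nc memcached1.internal 11211
# the same, nicely formatted, if memcached-tool is installed
memcached-tool memcached1.internal:11211 stats
# per-rule packet/byte counters, to see whether iptables is dropping or limiting traffic
iptables -L -n -v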