Let's say I have a file X containing the string ABCD. I then edit the file: open X, seek to 0, truncate, write 1234. Will ABCD remain anywhere on the hard drive? Assume it is a large file spanning a number of sectors/inodes.
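My understanding is that on most filesystems truncation only deallocates blocks without wiping them, so presumably I could check by reproducing the sequence from the shell and scanning the raw device, something like this (the partition name is just a guess for my setup):

    printf 'ABCD' > X           # create the file
    printf '1234' > X           # the > redirect opens with O_TRUNC and writes at offset 0
    sync                        # flush everything to disk
    grep -abo 'ABCD' /dev/sda1  # scan the raw partition for leftover copies (needs root)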
Andrew Smith's questions
I currently have a Rackspace Cloud Server on a static range to send mail.
Amazon SES, as part of AWS, seems to be designed for "bulk and transactional" emailing.
Is there any solution for "regular email"?
The AWS IP range is dynamic, hence it's impractical to send mail from those addresses. Also, it seems that SES is not the correct way of doing it either, or maybe it is?
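From what I can tell, the static-address part would be covered by an Elastic IP, which stays with the account until released, and AWS can apparently set reverse DNS and lift the SMTP throttling for one on request. Something like this with the aws CLI (the instance and allocation IDs are made up):

    aws ec2 allocate-address    # returns a static public IP plus an AllocationId
    aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc1234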
When I go to the EC2 console and click one of:
- Shutdown
- Terminate
- Reboot
- Start
... I want to get a message on my "Director" machine that the instance is e.g. booting or shutting down, without needing to poll all the statuses from the API.
The reason for this is that I run my own Nagios monitoring with an API, so I want the Director machine to receive/catch the event and schedule the downtime in Nagios.
Also, if the event is "Terminate", the Director will update the Chef server so that it removes the node.
Another reason is that frequent polling of the statuses is limited by the API; I can't make continuous requests, only one every 1-5 minutes. This way, if I terminate an instance myself from the console, it takes too long to actually discover this and to update the other services accordingly.
Currently I am investigating a shutdown script on the Linux instance which would make a remote API call to the Director, with the Director polling the statuses as well. However, the best way would be a queue of messages coming from EC2 directly, with reliable notifications about events on the instances, without needing to write additional APIs.
I would really prefer to use AWS functionality to do it, except for Nagios.
I have Nagios, and I want it to stop monitoring instances when they are stopped from the console. The requirements are:
- The message passed from AWS is 100% reliable, e.g. when Nagios is down and the message cannot be delivered, it will be re-delivered promptly once Nagios is back up
- The message is delivered quickly
- There is no need to scan the status of all instances via the EC2 API all the time, only once in a while
Many thanks!
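In case there is no push mechanism, the fallback I have in mind is a poller on the Director, roughly like this (instance ID, host name, and the Nagios command-file path are guesses for my setup):

    #!/bin/bash
    # Poll one instance's state; on stopping/stopped, schedule Nagios downtime.
    INSTANCE=i-0123456789abcdef0
    HOST=web01
    CMDFILE=/var/spool/nagios/cmd/nagios.cmd

    state=$(aws ec2 describe-instances --instance-ids "$INSTANCE" \
            --query 'Reservations[0].Instances[0].State.Name' --output text)

    if [ "$state" = "stopping" ] || [ "$state" = "stopped" ]; then
        now=$(date +%s)
        # Format: SCHEDULE_HOST_DOWNTIME;host;start;end;fixed;trigger;duration;author;comment
        printf '[%s] SCHEDULE_HOST_DOWNTIME;%s;%s;%s;1;0;7200;ec2-poller;instance %s is %s\n' \
            "$now" "$HOST" "$now" "$((now + 7200))" "$INSTANCE" "$state" > "$CMDFILE"
    fi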
I have a few Unicorn servers running on Ubuntu 12.04 and I am looking to secure them against exploits that give a remote shell.
My main concern is whether it makes sense to deploy ModSecurity.
Another thing is that I have seen Unicorn typically run on port 8080, with an Apache/Nginx server on port 80 acting as a reverse proxy in front of it.
I was thinking that I could employ the following:
- ModSecurity on Apache
- Apache with the worker (threaded) MPM plus mod_qos, to prevent an excessive number of requests from any host
- Running the Unicorn server as a dedicated user and isolating it through AppArmor, or SELinux if it's RedHat/CentOS
I would also like to know if there is another hardening framework/patch for RoR, like Suhosin for PHP.
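For the Ubuntu boxes I imagine the setup steps would look roughly like this (package names vary by release, and the Unicorn path is a guess):

    apt-get install libapache2-modsecurity apparmor-utils  # ModSecurity for Apache + AppArmor tooling
    a2enmod security2 qos                                   # enable mod_security and mod_qos
    # Confine the Unicorn master: build a profile by exercising the app, then enforce it.
    aa-genprof /usr/local/bin/unicorn
    aa-enforce /etc/apparmor.d/usr.local.bin.unicorn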
I am looking into integrating performance data, like page response time, with Chef. Does anyone have a clue what the starting point would be?
I have already deployed Chef and enabled performance data collection, but it doesn't seem to actually collect this data.
Is there any other package useful for graphing page response time that is suitable for automated deployment? It doesn't need to be packaged for Chef already, just something useful; or maybe there is some way to get it with Nagios? I have already looked for this with no success. Many thanks!
I would also like to collect load, network usage, etc. I would prefer to use local agents instead of polling over the network.
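For the raw numbers I could probably start with a local cron job like this and feed the output into whatever ends up graphing it (URL and log path are placeholders):

    # Append a timestamped total response time for one page, e.g. every minute from cron.
    curl -o /dev/null -s -w "$(date +%s) %{time_total}\n" http://localhost/ >> /var/log/response-time.log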
I am looking for a way to ensure that all RedHat boxes have the same packages and configuration; for example, when I add a new server, it gets configured and the packages installed automatically.
Same for Ubuntu.
I can manually write a script which replicates the master server, but I need to know if there is any other solution that accomplishes this in a way a clueless operator can handle, or fully automatically without needing the command line.
On Windows I use a Domain Controller and it works OK: it configures and updates all systems. I have only two Linux boxes, so the issue is small compared to the hundreds of Windows ones, but eventually I will scale the setup globally, so I need to prepare a plan for Linux too.
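The manual-script route I have in mind would be something like this (the file path is arbitrary):

    # On the RedHat master: record the installed package set.
    rpm -qa --qf '%{NAME}\n' | sort > /tmp/packages.master
    # On a new RedHat box, after copying the file over:
    xargs -a /tmp/packages.master yum -y install

    # The Ubuntu equivalent:
    dpkg --get-selections > /tmp/packages.master   # on the master
    dpkg --set-selections < /tmp/packages.master   # on the clone
    apt-get -y dselect-upgrade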
Please do not reply with "this is not possible", as it's a waste of time. I am developing a cloud appliance and I have a valid reason to protect this layer against DDoS, and there are a few companies doing the same, so please don't tell me that I don't have a point; many companies are looking to buy this solution and I don't see a problem with implementing it using stock Linux.
My Linux kernel is crashing with an oops at 10,000 connections due to lack of resources like CPU and RAM. I was wondering how to limit it safely so that it doesn't create TCP/IP connections in the netfilter connection-tracking table, or elsewhere, when somebody is trying to open 100,000 connections from various hosts.
The network card is 1 Gbps and, with maxed-out buffers, it can take a lot of connections; however, I would like to cap it at 5,000 at the same time, with the rest being dropped as early as possible, at the kernel level, so it doesn't pollute netfilter or anything, except when there are free connection slots. There are these factors:
- The number of HAProxy connections is limited to only 5,000
- Linux is crashing with 10,000 open connections
- I want to withstand 100,000 new connections every minute, so maybe netfilter can handle it, but HAProxy cannot
- The existing connections continue to operate
This is to make the machine withstand a DDoS attack without oopsing, so that as soon as the attack stops, the service recovers automatically and continues to serve normally at the lower rate.
This is about the physical layer of the server instance, not the switch. Assume the switch passes me as much traffic as I can handle; the upstream provider does not always have the ability to adjust, or to offer any protection at all.
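The knobs I have found so far, which may or may not be the right approach (interface and port are examples only):

    # Keep port 80 out of the connection-tracking table entirely.
    # Caveat: stateful (-m state) rules will no longer work for this port.
    iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK

    # Drop SYNs above a per-source rate without involving conntrack.
    iptables -A INPUT -p tcp --syn --dport 80 -m hashlimit \
        --hashlimit-above 50/sec --hashlimit-mode srcip --hashlimit-name http -j DROP

    # Bound the conntrack table and survive SYN floods.
    sysctl -w net.netfilter.nf_conntrack_max=65536
    sysctl -w net.ipv4.tcp_syncookies=1
    sysctl -w net.ipv4.tcp_max_syn_backlog=4096

The 5,000 cap itself I would put in HAProxy's global section as maxconn 5000, so the kernel sheds the excess before HAProxy ever sees it.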
When I am experiencing a DDoS at 10 Gbps, if I have a BGP router with 10M table entries in it, can I perform a search for the offending network?
I would do it this way: first withdraw the routes toward me for the first /8 and see if the DDoS stops, and then search the complete 32-bit address space this way for the source of the DDoS.
I am not very familiar with BGP; I'm not sure how long it takes to propagate, how long such a search would take, or what the impact would be. I'm also not sure whether I can actually make some network stop routing to me based on the IP ranges I download from RIPE and ARIN.
This is particularly for dealing with spoofed attacks, as normal ones can be traced more effectively.
Or, how much bandwidth and how many locations do I need to sustain any kind of DDoS in Europe? I can re-route traffic with Route 53 latency-based DNS. The most recently disclosed strike I read about was around 13 Gbps; would 20 Gbps be enough?
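Since BGP withdrawals take minutes to propagate and blackhole legitimate users while they last, I wonder whether I could do the same bisection locally with counting-only netfilter rules instead (a rule with no -j target only increments counters and changes nothing):

    #!/bin/bash
    # Add one counting-only rule per /8, wait, then read off the heaviest sources.
    for a in $(seq 0 255); do
        iptables -A INPUT -s "$a.0.0.0/8"
    done
    sleep 60
    iptables -L INPUT -vnx | sort -rn | head -20   # busiest /8s rise to the top
    for a in $(seq 0 255); do
        iptables -D INPUT -s "$a.0.0.0/8"          # clean up afterwards
    done

Though for truly spoofed sources even this only shows where the packets claim to come from.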
I am building a DNS service and I need to find out the location of the DNS servers querying me. Apparently anyone using Google Public DNS, like 8.8.8.8, comes from Google's /17 network, and if I do a GeoIP lookup it resolves to the US, but the server is actually in Ireland.
Do I really need to run a traceroute for each IP to find out how far away it is?
Also, bgp.he.net reports that it's a /23 subnet, but I have no idea how to keep this information up to date either: http://bgp.he.net/ip/209.85.143.94
[root@test ~]# host 209.85.143.94
94.143.85.209.in-addr.arpa domain name pointer dy-in-f94.1e100.net.
[root@test ~]# geoiplookup 209.85.143.94
GeoIP Country Edition: US, United States
[root@test ~]# traceroute 209.85.143.94
traceroute to 209.85.143.94 (209.85.143.94), 30 hops max, 60 byte packets
1 31.222.167.2 (31.222.167.2) 1.344 ms 2.273 ms 2.135 ms
2 core6a-aggr325a-4.lon3.rackspace.net (92.52.77.106) 3.386 ms 3.326 ms 3.316 ms
3 corea-core6a.lon3.rackspace.net (164.177.137.10) 3.290 ms 3.249 ms coreb-core6a.lon3.rackspace.net (164.177.137.22) 3.222 ms
4 edge1-coreb.lon3.rackspace.net (164.177.137.29) 3.199 ms 3.161 ms edge1-corea.lon3.rackspace.net (164.177.137.27) 3.138 ms
5 195.50.122.41 (195.50.122.41) 3.118 ms 3.075 ms 3.047 ms
6 195.50.122.82 (195.50.122.82) 3.021 ms 1.523 ms 1.589 ms
7 209.85.255.78 (209.85.255.78) 2.940 ms 209.85.255.76 (209.85.255.76) 2.877 ms 209.85.255.78 (209.85.255.78) 2.873 ms
8 209.85.253.90 (209.85.253.90) 2.844 ms 2.817 ms 2.789 ms
9 209.85.250.216 (209.85.250.216) 18.132 ms 209.85.251.190 (209.85.251.190) 14.584 ms 209.85.250.216 (209.85.250.216) 18.049 ms
10 209.85.253.125 (209.85.253.125) 14.537 ms 209.85.253.203 (209.85.253.203) 16.017 ms 209.85.253.127 (209.85.253.127) 14.484 ms
11 216.239.43.22 (216.239.43.22) 22.787 ms 22.739 ms 216.239.47.26 (216.239.47.26) 25.445 ms
12 dy-in-f94.1e100.net (209.85.143.94) 17.312 ms 12.970 ms 15.537 ms
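On the /23 question, the routing registries can apparently be queried directly, which seems to be where bgp.he.net gets its data; for the address in the transcript above:

    whois -h whois.radb.net 209.85.143.94           # route objects with prefix and origin AS
    whois -h whois.cymru.com " -v 209.85.143.94"    # Team Cymru's one-line IP-to-ASN summary

Also, the ~15 ms round-trip times in the traceroute already suggest the server is in western Europe rather than in the US that GeoIP reports.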
I have 20 servers in one location and I would like to do load balancing; this could go up to 100 in any case. Is there any known method for doing this? I would also like some kind of mechanism whereby, when a customer queries e.g. service.example.com, he keeps using the same server until the machine is taken out of the cluster because of a failure. The client makes a DNS query every minute; let's assume the top-level DNS record's TTL is 30-60 seconds, while the others can be as long as 24h, so each session can stay valid for up to 24h after the initial query, and then the customer gets switched to another server.
I find L4-L7 load balancers quite useless here, as I think I could use just DNS.
The protocols are binary-only TCP connections as well as HTTP ones.
I was thinking that for the binary connections (with no protocol on top whatsoever), I could use round-robin DNS, so each time I query the DNS I get a different response.
For HTTP I don't know; maybe I could just put HAProxy in front and that's it, but for the DNS part I am not sure.
I was once thinking about the following method: send the customer to some "master.example.com", an HTTP server that only issues redirects, generating an FQDN with a unique ID, which effectively acts as a session ID. This FQDN will always resolve to the same IP address and can be used only by the IP that queried for it, for the next 24h or forever, until the server is switched off.
So this looks like the following:
http://redirect.example.com/resource => http://67hkkdbvh.example.com/resource
Now the session looks like this:
1st minute: http://67hkkdbvh.example.com/resource/1 TTL 60s
2nd minute: http://67hkkdbvh.example.com/resource/2 TTL 60s
If 67hkkdbvh dies, the customer requests a redirect again.
Now, I am not sure how I could use HAProxy to help with this?
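For the HTTP side, the closest HAProxy feature I know of is source-hash stickiness, which would keep a client on one server without my redirect scheme at all, something like (names and addresses invented):

    cat >> /etc/haproxy/haproxy.cfg <<'EOF'
    frontend http-in
        bind *:80
        default_backend app

    backend app
        balance source                     # hash of the client IP: same client, same server
        server app1 10.0.0.1:8080 check    # servers failing health checks are taken out
        server app2 10.0.0.2:8080 check
    EOF
    service haproxy reload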
I am looking for some sort of clustered /dev/shm-like device, i.e. a RAM-based, clustered filesystem running on Linux that supports mirroring. Or is there anything stable enough that would do the job the same way, so I can replicate RAM-based data? I would like it to be self-repairing (e.g. just a restart recovers the service).
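One combination I have been considering, without knowing if it's sane: tmpfs bricks replicated with GlusterFS, so each node keeps a RAM copy and a restarted node re-syncs from its peer (hostnames and sizes are placeholders, and restarting all nodes at once obviously loses the data):

    # On each node: a RAM-backed brick.
    mkdir -p /mnt/rambrick
    mount -t tmpfs -o size=2g tmpfs /mnt/rambrick

    # On one node: a 2-way replicated volume over the bricks.
    gluster volume create ramvol replica 2 node1:/mnt/rambrick node2:/mnt/rambrick
    gluster volume start ramvol

    # On clients: mount it like any other filesystem.
    mount -t glusterfs node1:/ramvol /mnt/ram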