Let's Encrypt provides free SSL certificates. Are there any downsides compared to other, paid certificates, e.g. AWS Certificate Manager?
How can I list the s3fs mounts that exist on an Ubuntu system? I'd like to know which bucket each mount is mapped to.
In particular, I have one mount (e.g. ~/s3/mymount) and would like to know which S3 bucket it's mapped to.
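In case it clarifies what I'm after, this is what I've been poking at so far (whether the bucket name shows up in the mount table may depend on the s3fs version):
# List fuse/s3fs mounts; the first field may or may not be the bucket name
grep s3fs /proc/mounts
# The bucket should also appear on the s3fs process's command line
ps -eo args | grep '[s]3fs'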
After some debugging, I found that the mod_security core rule set blocks requests that lack the (optional!) Accept header field.
This is what I find in the logs:
ModSecurity: Warning. Match of "rx ^OPTIONS$" against "REQUEST_METHOD" required. [file "/etc/apache2/conf.d/modsecurity/modsecurity_crs_21_protocol_anomalies.conf"] [line "41"] [id "960015"] [msg "Request Missing an Accept Header"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER"] [hostname "example.com"] [uri "/"] [unique_id "T4F5@H8AAQEAAFU6aPEAAAAL"]
ModSecurity: Access denied with code 400 (phase 2). Match of "rx ^OPTIONS$" against "REQUEST_METHOD" required. [file "/etc/apache2/conf.d/modsecurity/optional_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "41"] [id "960015"] [msg "Request Missing an Accept Header"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER"] [hostname "example.com"] [uri "/"] [unique_id "T4F5@H8AAQEAAFU6aPEAAAAL"]
Why is this header required? I understand that "most" clients send it, but why is its absence considered a security threat?
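If it matters, the workaround I have in mind is simply disabling the rule after the CRS include lines (SecRuleRemoveById is a standard ModSecurity directive, and the rule ID comes from the log above), but I'd rather understand the reasoning first:
# Disable CRS rule 960015 ("Request Missing an Accept Header")
SecRuleRemoveById 960015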
I believe it is not possible to pass HTTP authentication credentials as request parameters, but someone I know insisted that it works. I don't even know what parameters to try, and I haven't found this documented anywhere.
I tried http://myserver.com/?user=username&password=mypassword, but it doesn't work.
Can you confirm that it's not in fact possible to pass the user/pass via HTTP parameters (GET or POST)?
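For reference, the only mechanisms I'm aware of for supplying the credentials are the userinfo part of the URL and the Authorization header, e.g.:
# Credentials in the URL's userinfo part
curl http://username:mypassword@myserver.com/
# Explicit header: "Basic " followed by base64("username:password")
curl -H "Authorization: Basic $(printf 'username:mypassword' | base64)" http://myserver.com/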
I found this thread, which seems to suggest it's not possible to connect to an Amazon VPC VPN from a Windows 7 box without external hardware (a router) on the client side.
Is this true, or did I miss anything?
If it is possible, are there instructions on how to do this?
This is my mod_proxy config:
<IfModule mod_proxy.c>
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyPass /manage/ http://localhost:9000/manage/
    ProxyPassReverse /manage/ http://localhost:9000/manage/
</IfModule>
I find that whenever the other website I have on port 9000 doesn't respond correctly, I get sustained 503 errors that persist even after the website is fixed. In other words, the 503 response seems to be cached.
How can I disable this behavior? I don't think I enabled caching myself; perhaps it's the default.
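One knob I considered (I'm not sure it's the right one) is the documented retry parameter of ProxyPass, which controls how long mod_proxy remembers a backend as failed:
# retry=0 disables the error-state memory for this worker (a sketch, untested)
ProxyPass /manage/ http://localhost:9000/manage/ retry=0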
You can use # to comment out individual lines. Is there a syntax for commenting out entire blocks?
I've tried surrounding the block (specifically a <Directory> block) with <IfModule asdfasdf>...</IfModule>, but that didn't work.
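For concreteness, this is the kind of wrapper I tried (the module name is deliberately bogus, and the directory path is just an example):
<IfModule asdfasdf>
    <Directory /var/www/example>
        Options None
    </Directory>
</IfModule>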
Is storing the logs a blocking action (does the request block until the log entry is written), or is it asynchronous?
What happens if writing to the log file fails, or just takes a long time?
We plan to use a tracking pixel to collect some analytics. It would be very helpful to configure a different policy for storing the access logs of requests to this pixel (http://ourdomain.com/tracking.png?someParameter=123).
Can Apache be configured to filter the access log, storing only the entries that match a specific URL pattern in a different location from the main access log? We still want to keep the full access log on the original partition, with a different retention policy.
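A sketch of the kind of configuration I'm imagining, using the standard SetEnvIf directive and the env= clause of CustomLog (the paths are made up):
# Tag requests for the tracking pixel
SetEnvIf Request_URI "^/tracking\.png" tracking_pixel
# Write those entries to a dedicated log on another partition...
CustomLog /mnt/other/log/tracking.log combined env=tracking_pixel
# ...while the main log still records everything
CustomLog /var/log/apache2/access.log combined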
I'm currently running a single EC2 instance and plan to move to a fault-tolerant architecture eventually. One thing that would help me decide how urgent this migration is, is the MTBF (mean time between failures) of EC2 instances.
Is there any data about how often EC2 machines fail?
I'm testing my https page via WebPagetest on IE8, and in one run I noticed a bunch of OCSP requests sent to ocsp.godaddy.com. I never noticed any such requests in previous runs.
When do browsers decide to send such requests? Does it have anything to do with the fact that I moved hosting providers yesterday?
We have a family of static websites we're thinking of hosting on Amazon Elastic Beanstalk. We currently depend on ssh access for our deployment process:
- We upload a zip file and unzip it locally:
unzip version.zip
- We maintain symlinks to provide shorter aliases for some components (e.g. instead of http://oursite.com/verylongcustomername/somemoredetails we use http://oursite.com/K38da/Mc7za).
- We do quick rollbacks and patching on the server by renaming and editing specific files:
mv latest_ver latest_ver.bak; mv older_ver latest_ver
and
vim foo.js
We're considering moving to Beanstalk, so I installed and configured a sample website. I set up a symlink structure, uploaded a version via scp, and edited Tomcat's configuration files. However, I'm not sure whether any of these changes are maintained by the Beanstalk manager (in fact, I saw that some of them did not survive an instance restart).
Is there any way to have the Beanstalk manager remember local changes I make to the instance's filesystem and carry them over to new instances it creates?
If the answer is no, then it seems I should forget about Beanstalk and use an EC2 image directly (I can then create an AMI that includes my custom modifications and relaunch if needed).
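Alternatively, a declarative approach would work too; I'm imagining something along the lines of Elastic Beanstalk's .ebextensions config files (a hypothetical sketch; the paths and the alias are made up):
# .ebextensions/symlinks.config - commands Beanstalk runs when provisioning an instance
commands:
  01_create_alias:
    command: ln -sfn /var/app/current/verylongcustomername/somemoredetails /var/app/current/K38da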
I've created a self-signed certificate via openssl for *.mydomain.com, and it works, e.g., for www.mydomain.com. However, when I go to mydomain.com directly in Chrome, I get an error ("You are attempting to reach mydomain.com, but instead you actually reached a server identifying itself as *.mydomain.com").
Should the *. certificate cover the bare domain as well? What should I do to resolve this?
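If regenerating the certificate is the fix, this is a sketch of what I'd try, listing both names in subjectAltName (the -addext flag needs a reasonably recent openssl; the file names are made up):
# Self-signed cert covering both the apex domain and the wildcard
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout mydomain.key -out mydomain.crt \
  -subj "/CN=mydomain.com" \
  -addext "subjectAltName=DNS:mydomain.com,DNS:*.mydomain.com"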
I have an EBS-backed Amazon EC2 instance. I would like to create a daily backup schedule and keep, say, a week's worth of daily backups, plus a few older images (from 2, 3, and 4 weeks ago). I don't mind creating the backups on the fly with the snapshot mechanism, but I would like an easy wrapper to manage it for me.
What is the simplest way to set this up? How much would this cost me, for a micro instance?
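For concreteness, a sketch of the sort of wrapper I mean, assuming the AWS CLI plus cron (the volume ID is a placeholder, and the pruning logic is omitted):
# crontab entry: 0 3 * * * /usr/local/bin/daily-snapshot.sh
# daily-snapshot.sh - take one EBS snapshot per day
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
  --description "daily-backup-$(date +%F)"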
What's the difference between EC2's "Linux/UNIX" servers and "SUSE Linux"? Why does SUSE cost more? Is it better?
Can nginx be configured to send TCP keepalives (not HTTP keep-alive!) on the TCP connections it holds?
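For context, this is the kind of knob I'm hoping exists; newer nginx versions document a so_keepalive parameter on the listen directive (the values are idle time, probe interval, and probe count, and are just an example):
listen 80 so_keepalive=30m::10;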
We need to do random reads (seeks) of 5 KB blocks from a huge file (150 GB). What type of drive is best suited for this workload? What is the expected performance from an SSD in this scenario?
I've heard that SSDs excel at random reads, but perhaps not when the block sizes are this small.
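A sketch of how one might measure this, assuming the fio benchmarking tool is available (the file path is a placeholder):
# Random 5 KB reads with direct I/O to bypass the page cache
fio --name=randread --filename=/data/huge.bin --rw=randread \
    --bs=5k --direct=1 --runtime=60 --time_based \
    --iodepth=32 --ioengine=libaio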
Setup: this is a quad-CPU machine, quite powerful and not loaded at all (neither CPU nor network); the client is a Windows Server 2008 64-bit box, and the server is a Linux box.
I have four threads that all issue HTTP requests starting at the same time. The connections are initiated to IPs X, X, Y, Z (two connections to X, one each to Y and Z). All targets are on the local LAN.
I am seeing that the connections to X, Y, and Z are formed (SYN, SYN/ACK), but the second connection to X comes with a 100 ms delay. That is, the machine does not send the second SYN to X for a full 100 ms.
Could this be related to TCP Offload Engine? What else could be causing this delay?
Edit: Another suspect is the client code - it's written in Java and uses HttpURLConnection.
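For what it's worth, this is how I'm observing the SYN timing on the Linux side (the interface name is an assumption):
# Print inter-packet delays (-ttt) for SYN packets only
tcpdump -i eth0 -ttt 'tcp[tcpflags] & tcp-syn != 0'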
Does anyone know a (verified) method, one that works on Server 2008, to make a specific user log in automatically after a system reboot?
I've tried tweaking some registry values (I don't have the link right now), and we've also tried a couple of programs (one free program didn't work; another costs money).
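If memory serves, the registry tweak was along the lines of the standard Winlogon values (a sketch; note the password ends up in the registry in plain text, which is part of why I'd like a verified method):
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d myuser /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d mypassword /f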
Edit: Since several people have asked for my reasons - I need to run Selenium web tests on a TeamCity build agent, and they don't work well when the build runs as a Windows service. Running them in a user session solves the problem.
For a unit-test server (a TeamCity agent), is there any reason to choose the old (and reliable?) Windows Server 2003 SP2 over Server 2008, assuming both are available and the machine is decent?