Having trouble getting my head around iptables rules
I need to:
- Allow HTTP and HTTPS traffic on ports 80 and 443 from anywhere
- Allow MySQL traffic on port 3306 internally only
- Allow SSH access from a specific list of IP addresses
Any ideas?
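Something along these lines is roughly what I had in mind, but I'm not sure it's right (10.0.0.0/8 and the 192.0.2.x addresses are just placeholders for our internal range and the SSH whitelist):
# default: drop anything inbound that isn't explicitly allowed
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# HTTP and HTTPS from anywhere
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# MySQL only from the internal network (placeholder range)
iptables -A INPUT -p tcp -s 10.0.0.0/8 --dport 3306 -j ACCEPT
# SSH only from specific addresses (placeholders)
iptables -A INPUT -p tcp -s 192.0.2.10 --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.11 --dport 22 -j ACCEPT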
I'm setting up stunnel so a non-SSL-enabled app can access a Gmail / Google Apps account. Here's the config I'm using:
CLIENT=YES
[pop3s]
accept = 110
connect = pop.gmail.com:995
[imaps]
accept = 143
connect = imap.gmail.com:993
[ssmtp]
accept = 25
connect = smtp.gmail.com:465
I've generated the .pem file OK, but it fails and logs the following error:
Clients allowed=125
stunnel 4.50 on x86_64-apple-darwin11.2.0 platform
Compiled/running with OpenSSL 0.9.8r 8 Feb 2011
Threading:PTHREAD SSL:ENGINE Auth:none Sockets:SELECT,IPv6
Reading configuration from file ./tools/stunnel.conf
Snagged 64 random bytes from /Users/synergist/.rnd
Wrote 1024 new random bytes to /Users/synergist/.rnd
PRNG seeded successfully
Initializing SSL context for service pop3s
Insecure file permissions on stunnel.pem
Certificate: stunnel.pem
Certificate loaded
Key file: stunnel.pem
Private key loaded
SSL options set: 0x01000004
SSL context initialized
Initializing SSL context for service imaps
Insecure file permissions on stunnel.pem
Certificate: stunnel.pem
Certificate loaded
Key file: stunnel.pem
Private key loaded
SSL options set: 0x01000004
SSL context initialized
Initializing SSL context for service ssmtp
Insecure file permissions on stunnel.pem
Certificate: stunnel.pem
Certificate loaded
Key file: stunnel.pem
Private key loaded
SSL options set: 0x01000004
SSL context initialized
Configuration successful
Option SO_REUSEADDR set on accept socket
Error binding pop3s to 0.0.0.0:110
bind: Permission denied (13)
Service pop3s closed FD=5
str_stats: 168 block(s), 8340 data byte(s), 8400 control byte(s)
Why can't stunnel bind to port 110? Is something already bound to 110, and if so, how can I find out what it is?
Update: I've got stunnel running by using sudo; is there a way to make it run without it?
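If it helps, my understanding is that ports below 1024 need root on OS X, so my current thinking is either to check what's already on the port or to move stunnel onto unprivileged ports (the 1110 below is arbitrary; the mail client would then point at that port):
# see whether anything is already listening on 110
sudo lsof -nP -iTCP:110 -sTCP:LISTEN
# or, in stunnel.conf, accept on ports above 1024 so root isn't needed, e.g.
[pop3s]
accept = 1110
connect = pop.gmail.com:995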
I'd like to run a backup rsync job from one Drobo to another. I've managed to get rsync and the ssh tools installed so that I can ssh from my desktop into [email protected] and then ssh into [email protected] from drobo1.local.
What I need to set up is the backup user on Drobo2 with a passwordless login from Drobo1.
How do I go about setting this up?
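Is it just the usual key-based setup, something like this (the hostname drobo2.local and the backup username are placeholders)?
# on Drobo1, as the user that will run the rsync job: generate a passphrase-less key
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
# copy the public key into the backup user's authorized_keys on Drobo2
cat ~/.ssh/id_rsa.pub | ssh backup@drobo2.local 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
# test: this should now log in without a password prompt
ssh backup@drobo2.local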
I have a 3GB log file and I need to extract the past 48 hours of entries without downloading the entire 3GB file. How can I split out just the past 48 hours into a separate file, so I only have to download that?
I have full SSH access and I'm able to install additional tools.
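What I had in mind is roughly this, run on the server over SSH; it assumes Apache-style timestamps such as [13/Feb/2012:10:00:00 +0000], that at least one line falls within the cut-off hour, and access.log is just a placeholder name:
# cut-off string for "48 hours ago", down to the hour
CUTOFF=$(date -d '48 hours ago' '+%d/%b/%Y:%H')      # GNU date; on BSD/OS X use: date -v-48H '+%d/%b/%Y:%H'
# print everything from the first line matching the cut-off hour to the end of the file
awk -v cutoff="$CUTOFF" 'found {print; next} index($0, cutoff) {found=1; print}' access.log > last48h.log
# compress before downloading
gzip last48h.log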
We'd like to redirect all HTTPS traffic to HTTP, except for one specific URL, /user/login.
So far we've got:
RewriteCond %{SERVER_PORT} !^443$
RewriteRule ^user/login(.*)$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R]
But it's causing a redirect loop when it redirects back to HTTP.
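My guess is that the part which sends HTTPS back to HTTP also needs to exclude /user/login, something like this (untested):
# send everything else on HTTPS back to HTTP, but leave /user/login alone
RewriteCond %{SERVER_PORT} ^443$
RewriteCond %{REQUEST_URI} !^/user/login
RewriteRule ^(.*)$ http://%{SERVER_NAME}%{REQUEST_URI} [L,R]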
Some DNS services offer to host your DNS for free with a limit like '50,000 DNS queries a month'.
The whole application has a global Expires header set in the .htaccess. I have one URL, e.g. /current, which needs a different expiry header.
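What I'm picturing is something like this in the .htaccess, assuming mod_headers is available (the max-age value is just an example):
# give /current a shorter lifetime than the global expiry
SetEnvIf Request_URI "^/current$" short_expiry
Header unset Expires env=short_expiry
Header set Cache-Control "max-age=60, must-revalidate" env=short_expiry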
We're having some network issues but we're not sure when or where the problem is most affecting users. I'd like to ping a bunch of URLs every few minutes and graph the results.
Is there a simple package to do this on Mac?
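Worst case I could hack something like this together and graph the CSV afterwards, but a ready-made package would be nicer (the hostnames are just examples):
#!/bin/sh
# log average round-trip time to a few hosts every 5 minutes
while true; do
  for host in example.com www.google.com; do
    rtt=$(ping -c 3 -q "$host" | awk -F/ '/^round-trip|^rtt/ {print $5}')
    echo "$(date '+%Y-%m-%d %H:%M:%S'),$host,${rtt:-timeout}" >> ping_log.csv
  done
  sleep 300
done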
How do I get Apache to set HTTP headers that cache all resources for 15 minutes, but also allow .htaccess files in a directory to override these settings on a per-site basis?
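What I have in mind is something along these lines, with the override allowed via AllowOverride (the directory path is just an example, and it assumes mod_expires is loaded):
# in the main httpd.conf: cache everything for 15 minutes by default
ExpiresActive On
ExpiresDefault "access plus 15 minutes"
# allow per-site .htaccess files to override mod_expires directives
<Directory /var/www>
    AllowOverride Indexes
</Directory>
# then in a site's .htaccess:
ExpiresDefault "access plus 1 hour"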
Is it possible to take a copy of an existing Red Hat installation and use it in a VM that I can run on the desktop?
The problem I'm trying to solve is building an exact replica of a production server to run locally, so we can debug specific issues with package compatibility without breaking everything.
Thanks,
Are there any tools to explore what is currently cached inside a memcached pool? Not so much graphs, but the actual keys/values currently stored.
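For reference, the closest I've got so far is talking to memcached's stats interface directly, which only dumps a limited number of keys per slab and no values in bulk (11211 is the default port):
telnet localhost 11211
stats items                 # lists slab ids and item counts
stats cachedump 1 100       # dump up to 100 keys from slab 1 (keys only)
get some_key                # fetch an individual value by key
quit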
Is there any way to show the most requested URLs from Apache over a timeframe, e.g. the most requested URLs over the past 2 hours?
Is this type of thing possible with mod_status or could I aggregate the access logs?
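Aggregating the access logs is what I'd try first, something like this (it assumes the combined log format, where the request path is the seventh field, and a default log path):
# top 20 most requested URLs; to limit to the past 2 hours, grep the relevant timestamps out first
awk '{print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20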
We have a Cisco hardware load balancer with two web servers behind it. We'd like to force some URLs to only be served by one of the machines.
Firstly, is this the job of the load balancer, or would a better approach be to create a subdomain such as http://assets.example.com which would automatically be routed to one of the servers?
I'm trying to set up a .htaccess file which will allow users to bypass the password prompt if they come from a domain which does not start with preview, e.g. http://preview.example.com would trigger the password and http://example.com would not.
Here's what I've got so far:
SetEnvIfNoCase Host preview(.*\.)? preview_site
AuthUserFile /Users/me/.htpasswd
AuthGroupFile /dev/null
AuthType Basic
AuthName "Development Area"
Require valid-user
Order deny,allow
Allow from 127
deny from env=preview_site
Satisfy any
Any ideas?
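One thing I'm not sure about is the SetEnvIf pattern: as written I think it matches "preview" anywhere in the Host header, so maybe it needs anchoring, something like:
SetEnvIfNoCase Host ^preview\. preview_site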
I have a .htaccess redirect set up like this:
Redirect / http://www.example.com
I need a request for index-new.html to show that file and not redirect to example.com.
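I was wondering whether I need to drop the mod_alias Redirect and use mod_rewrite instead, roughly like this (untested; the 302 is just while testing):
RewriteEngine On
# serve index-new.html directly, redirect everything else
RewriteCond %{REQUEST_URI} !^/index-new\.html$
RewriteRule ^(.*)$ http://www.example.com/$1 [R=302,L]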
I'm building a fairly large HTML site which relies on a lot of links between sections that need to be correct.
Is there any way I can check each link on a page and make sure that it doesn't return a 404?
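I was wondering whether wget in spider mode would be enough for this (staging.example.com is just a placeholder for wherever the site is hosted):
# crawl the whole site without saving pages, logging each request
wget --spider -r -nd -o spider.log http://staging.example.com/
# then look for anything that came back 404
grep -B 2 '404' spider.log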
We've got a fresh CentOS box and we need to enable nice URLs / permalinks. We're running Apache with mod_rewrite installed, but I'm aware I need to edit a few config files to get the basics up and running.
What are the steps and files that need to be changed in order to get it up and running?
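From what I've pieced together the steps are roughly these, but I'd like confirmation (paths are the CentOS defaults, and the final rewrite rule is just an example front-controller rule):
# 1. confirm the rewrite module is loaded in /etc/httpd/conf/httpd.conf
LoadModule rewrite_module modules/mod_rewrite.so
# 2. allow .htaccess files to use it for the document root
<Directory "/var/www/html">
    AllowOverride All
</Directory>
# 3. restart Apache
service httpd restart
# 4. then in /var/www/html/.htaccess
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php [L]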