SnapOverflow


Smudge's questions

Martin Hope
Smudge
Asked: 2012-03-11 06:56:26 +0800 CST

Change standalone Jenkins default URL

  • 4

Running Jenkins standalone (the WAR file: java -jar ./jenkins.war, or whatever the command is), listening on a non-standard port. I want to get NGINX to proxy from our HTTPS site /jenkins/ to this standalone instance.

location /jenkins/ {
    proxy_pass http://axolotl.ecogeek.local:10112/;
}

The problem is, Jenkins still thinks it's at the root URL, so all requests are relative to that: accessing /jenkins/ redirects to /login when it needs to go to /jenkins/login. Is there any way, through Jenkins or through NGINX, to change the root URL and get it to play nicely?

Edit

Connecting to Jenkins on its normal port, I can go to Manage Jenkins -> Configure System -> Jenkins URL and change that. That fixes the redirection, but all the media is still being requested from /static/ (I have tried restarting Jenkins).
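For anyone landing here, a sketch of a fix (hedged: --prefix is the context-path option of Jenkins' bundled Winstone container, and the port/hostname are taken from the question). Starting Jenkins with java -jar jenkins.war --httpPort=10112 --prefix=/jenkins makes it generate all URLs, including the /static/ assets, under /jenkins/, and the proxy then passes the prefix straight through:

```nginx
location /jenkins/ {
    # keep the /jenkins prefix when proxying, so redirects and asset URLs match
    proxy_pass http://axolotl.ecogeek.local:10112/jenkins/;
}
```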

nginx jenkins
  • 2 Answers
  • 16405 Views
Smudge
Asked: 2012-02-11 01:31:39 +0800 CST

Nagios custom variables for object inheritance

  • 4

In our Nagios setup we're using templates and object inheritance for services and hosts.

#Le Hosts
define host{
    use            linux-nrpe,linux-dc3,linux-cassandra
    host_name      tigris
    alias          tigris
    address        192.168.4.72
    }

define host{
    use            linux-nrpe,linux-dc3,linux-cassandra
    host_name      euphrates
    alias          euphrates
    address        192.168.4.177
    }

#Le Templates
define host{
    name           linux-nrpe
    use            all-hosts
    hostgroups     linux-nrpe
    contact_groups rhands,usergroup1,opcomms
    register       0
}

#Le Services
define service{
    hostgroup_name      linux-nrpe
    use                 high-priority-service,graphed-service
    service_description Load
    check_command       check_by_nrpe!check_load!5,5,6!9,9,9
    contact_groups      rhands,usergroup1,opcomms
    }
[...etc...]

The problem with this setup is that every server in the linux-nrpe group alerts at whatever load thresholds the service defines, but our workhorse servers might run 24/7 at a load of 20 while our DB servers sit quite happily at ~1 unless something goes wrong, so the system either sends out too many alerts or we end up ignoring/not alerting on things. Defining individual service definitions for each server (there are lots of them) would take ages. What we'd really like to do is something like:

define host{
    name           linux-nrpe
    use            all-hosts
    hostgroups     linux-nrpe
    contact_groups rhands,usergroup1,opcomms
    register       0
    perf_load      2,2,3 5,5,6
    perf_mem       95% 97%
    [...more...]
    }

define service{
    hostgroup_name      linux-nrpe
    use                 high-priority-service,graphed-service
    service_description    Load
    check_command       check_by_nrpe!check_load!$perf_load$
    contact_groups      rhands,usergroup1,opcomms
    }

I looked through the docs and couldn't see anything, unless I'm missing something. Any ideas?
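If custom object variables (a Nagios 3+ feature) fit the bill, a sketch might look like the following. Hedged: the variable name _LOAD_THRESHOLDS is invented for illustration; custom variables must begin with an underscore and surface as on-demand macros like $_HOSTLOAD_THRESHOLDS$:

```
define host{
    name                linux-nrpe
    use                 all-hosts
    hostgroups          linux-nrpe
    contact_groups      rhands,usergroup1,opcomms
    register            0
    _LOAD_THRESHOLDS    5,5,6!9,9,9   ; custom variable, note the leading underscore
    }

define service{
    hostgroup_name      linux-nrpe
    use                 high-priority-service,graphed-service
    service_description Load
    check_command       check_by_nrpe!check_load!$_HOSTLOAD_THRESHOLDS$
    contact_groups      rhands,usergroup1,opcomms
    }
```

Each host (or host template) can then override _LOAD_THRESHOLDS without touching the service definition.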

nagios
  • 2 Answers
  • 5395 Views
Smudge
Asked: 2012-02-08 14:06:31 +0800 CST

Should VCenter be a VM?/Can it be external hardware

  • 6

We have 6 ESX servers running 150+ VMs. Currently our vCenter server is one of these VMs. The other day we had a hardware failure in our DC (caused by a naughty UPS) which took out two of these servers. The first server it took out was running our primary vCenter server, and the second our HA/heartbeat vCenter server, so none of the VMs migrated off the two failed hosts onto the 4 working ones and we lost most of our VM management (users all use vSphere). This is a very unfortunate set of circumstances, and hopefully shouldn't happen too often, but I was wondering: is it a good idea to run our primary vCenter server on a separate box in a different datacenter*/redundant block dedicated to just vCenter, with the backup being a VM? Is it even possible? (All we have is the virtual appliance, though if a standalone version is available I wouldn't have thought it's too hard to track down.)

*I'm ashamed to say we run all our VMware servers in a single DC. We mirror the SAN to a second DC, but we have no servers there. They are only development/non-critical servers, but people still shout if they're down.

vmware-vcenter vmware-esxi
  • 2 Answers
  • 303 Views
Smudge
Asked: 2012-01-08 06:33:44 +0800 CST

NGINX convert HEAD to GET requests

  • 11

Due to some terrible design decisions, we have an application that is unable to respond to HTTP HEAD requests (it returns 'Method Not Allowed'). Modifying the software to answer HEAD requests correctly would be tricky; not impossible, but extra work. The application sits behind an NGINX proxy, and I was wondering if there is a way to get NGINX to convert HEAD requests it receives from clients into GET requests to the back-end, then discard the response body and send just the headers back to the client, as though our application servers were able to respond to HEAD requests.

Current config (fairly standard)

upstream ourupstream {
    server unix:/var/apps/sockets/ourapp.socket.thread1;
    server unix:/var/apps/sockets/ourapp.socket.thread2;
    server unix:/var/apps/sockets/ourapp.socket.thread3;
    [like 20 of these]
}

server {
    listen       1.2.3.4:80;
    server_name  ourapp;

    access_log  /var/apps/logs/ourapp.nginx.plog    proxy;
    error_log   /var/apps/logs/ourapp.nginx.elog    info;

    gzip on;

    gzip_types  text/plain text/html;

    proxy_intercept_errors on;
    proxy_connect_timeout 10;
    proxy_send_timeout 10;
    proxy_read_timeout 10;
    proxy_next_upstream error timeout;
    client_max_body_size 2m;

    error_page 404 /static/404.html;
    error_page 500 501 502 503 504 =500 /static/500.html;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://ourupstream/;
    }

    location /static/ {
        root /var/apps/global/;
    }
}
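A sketch of one way this might be done entirely in NGINX (an assumption on my part: proxy_method only started accepting variables in much later NGINX releases than the 1.0.x era, so check the version in use). NGINX already strips the body from its own response to a HEAD request, so the only change needed is forcing the upstream method:

```nginx
# In the http block: send GET upstream whenever the client sent HEAD
map $request_method $upstream_method {
    default $request_method;   # pass all other methods through unchanged
    HEAD    GET;               # rewrite HEAD to GET for the back-end
}

server {
    location / {
        proxy_method $upstream_method;
        proxy_pass   http://ourupstream/;
    }
}
```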
nginx
  • 1 Answer
  • 9081 Views
Smudge
Asked: 2011-12-29 06:47:10 +0800 CST

NGINX 'global' location

  • 10

Is it possible to create a 'global' location for an NGINX server? I'd like every site served by NGINX to have access to a /global/ folder; along the lines of

http {
    [...stuff...]

    #Global path
    location /global/ {
        root /my/global/location/;
    }

    server {
        listen          127.0.0.1:80;
        server_name     example.com;

        [...standard config...]
    }

    server {
        listen          127.0.0.1:80;
        server_name     example.org;

        [...standard config...]
    }

    server {
        listen          127.0.0.1:80;
        server_name     example.net;

        [...standard config...]
    }
}

And then be able to access files in the global location from http://example.com/global/, http://example.org/global/, etc.

I can do this if I add the global location block to every server block, but that's annoying; I'd like to have it defined globally and be able to access it from within all the sites.

I could use an include directive in each host, but that still requires a line in every host. The NGINX wiki says the location block is only valid within the server context, but I didn't know if there was a rewrite trick or something similar.
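Since location can't appear directly under http, the closest idiom I know of is the include the question already mentions, kept to a single line per server (the snippet path here is my invention):

```nginx
# /etc/nginx/global-location.conf
location /global/ {
    root /my/global/location/;
}

# each server block then needs only:
server {
    listen          127.0.0.1:80;
    server_name     example.com;
    include         /etc/nginx/global-location.conf;
}
```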

nginx
  • 3 Answers
  • 5294 Views
Smudge
Asked: 2011-12-16 04:10:47 +0800 CST

GIT as a backup tool

  • 120

On a server, install git

cd /
git init
git add .
git commit -a -m "Yes, this is server"

Then point /.git/ at a network drive (SAN, NFS, Samba, whatever) or a different disk, and use a cron job to commit the changes every hour/day etc. The .git directory would contain a versioned copy of all the server files (excluding the useless/complicated ones like /proc, /dev etc.)

For a non-important development server where I don't want the hassle/cost of setting it up on a proper backup system, and where backups would only be for convenience (i.e. we don't need to back up this server, but it would save some time if things went wrong), could this be a valid backup solution or will it just fall over in a big pile of poop?
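As a sketch of the mechanics (the function name, paths and committer identity are all placeholders, and this is not an endorsement of git as a real backup system):

```shell
#!/bin/sh
# snapshot_to_git TREE GITDIR: commit the contents of TREE into the git
# repository at GITDIR (which would live on the network drive / second disk).
snapshot_to_git() {
    tree=$1; gitdir=$2
    ( cd "$tree" || exit 1
      export GIT_DIR="$gitdir"                 # repo lives outside the tree
      [ -d "$gitdir" ] || git init --quiet     # first run: create the repo
      git add -A                               # stage new/changed/deleted files
      git -c user.name=backup -c user.email=backup@localhost \
          commit --quiet -m "snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)" || true
    )
}

# hypothetical hourly cron entry:
# 0 * * * * /usr/local/sbin/git-snapshot.sh   # wraps snapshot_to_git / /mnt/san/server.git
```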

backup git
  • 16 Answers
  • 75578 Views
Smudge
Asked: 2011-12-08 11:58:38 +0800 CST

NAGIOS notification command for service/contactgroup

  • 5

We have 3 templates for services: low-priority, medium-priority and high-priority. Low-priority services are attached to the contact group low, medium-priority services are attached to the contact group medium, and high-priority (OK, you probably get the idea).

Low-priority services don't notify, medium-priority alerts notify by email, and high-priority alerts notify by email and phone. Except they don't (yet).

What I need to do is specify that any alert generated by a high-priority service should run both notification commands, notify-service-by-email and notify-service-by-phone. From reading the documentation (and from my knowledge of Nagios), the only way I know to set notification commands is the service_notification_commands option on a contact, but that would mean each contact would need two definitions, one for phone and one for email. How would I get high-priority services to call the notify-service-by-phone command?

More info;

Service Templates

;High priority service (Alert by call, 1 min check period)
define service{
    name                high-priority-service
    notifications_enabled       1
    normal_check_interval       1
    contact_groups                  high
    use             generic-service
    register            0
    }

;Med priority service (Alert by email, 5 min check period)
define service{
    name                med-priority-service
    notifications_enabled       1
    normal_check_interval       5
    contact_groups                  medium
    use             generic-service
    register            0
    }

;Low priority service (No alert, 10 min check period)
define service{
    name                low-priority-service
    normal_check_interval       10
    use             generic-service
    register            0
    }

(generic-service is the default template from NAGIOS configs with a few tweaks)

Services

define service{
        use                             high-priority-service
        hostgroup_name                  generic-server-nrpe
        service_description             SSH
        check_command                   check_ssh
        }

Contact groups

define contactgroup{
        contactgroup_name       low
        alias                   Low Priority Notifications
        members                 sam,[...]
        }

define contactgroup{
        contactgroup_name       medium
        alias                   Medium Priority Notifications
        members                 sam,[...]
        }

define contactgroup{
        contactgroup_name       high
        alias                   High Priority Notifications
        members                 sam,[...]
        }

Contacts

define contact{
        name                            generic-contact
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r,f,s
        host_notification_options       d,u,r,f,s
        service_notification_commands   notify-service-by-email
        host_notification_commands      notify-host-by-email
        register                        0
        }

define contact{
    contact_name    sam
    use             generic-contact
    alias           Sam
    email           sam[...]
    address1        +44[...]
    }
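One workaround sketch (hedged: the contact name sam-phone is invented, and I am relying on service_notification_commands accepting a comma-separated list): give each person a second contact definition whose commands include the phone notifier, and reference only those contacts from the high group.

```
define contact{
        contact_name                    sam-phone
        use                             generic-contact
        alias                           Sam (phone)
        email                           sam[...]
        address1                        +44[...]
        service_notification_commands   notify-service-by-email,notify-service-by-phone
        }

define contactgroup{
        contactgroup_name       high
        alias                   High Priority Notifications
        members                 sam-phone,[...]
        }
```

The low and medium groups keep the plain email-only contacts, so only high-priority services trigger the phone command.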
nagios
  • 1 Answer
  • 11364 Views
Smudge
Asked: 2011-11-09 01:47:40 +0800 CST

Direct ethernet link between two servers

  • 16

Say I had two servers that needed super-low latency (database, file, etc.). Would it be possible to connect the two servers directly with 10GbE, so each server had 1 (in the real world it would have 2) connection to the 'main' network, plus 1 network card with an Ethernet cable connected directly to the second server? No switches or routers, just a direct connection:

                         Internet/Datacenter
                                 |
                                 |
                                 |
                                 |
                                 |
                                 |
                                 |
                        --------------------
                        |                  |
            ------------|      Switch      |-----------
            |           |                  |          |
            |           --------------------          |
            |                                         |
            |                                         |
            |                                         |
            |                                         |
            |                                         |
            |                                         |
            |                                         |
  Network Card 1 (eth0)                     Network Card 1 (eth0)
            |                                         |
  --------------------                      --------------------
  |                  |                      |                  |
  |     Server 1     |                      |     Server 2     |
  |                  |                      |                  |
  --------------------                      --------------------
            |                                         |
  Network Card 2 (eth1)                     Network Card 2 (eth1)
            |                                         |
            |                                         |
            |               Direct 10GbE              |
            -------------------------------------------

My first question is: would this even be possible? Would they need any unusual/special services configured to talk over this link, other than a standard file in /etc/sysconfig/network-scripts/? They would both have static IPs on eth1, but how would things like routing work? I'm not an expert on networking, so this is probably a n00b-ish question.

Second question: is there any point? Would there be any advantage over just letting them communicate across the standard network via the switch, or giving them a second dedicated network just for inter-server traffic (since bandwidth on the standard network would be used by clients accessing the servers)? Assume latency is the priority.

I know there are some issues with this method, like when we come to add a 3rd server we'd either have to give every server another network card and probably set up some very complicated replication-triangle thingy, but since this is hypothetical let's ignore that.

And since latency is the key issue, would fiber be better than copper Ethernet? (Speed isn't important as long as it can do a couple of Gb/sec.)

I phrased this question from a Linux POV because that's my background, but it could apply to any server/device.
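On the config question, a sketch for RHEL-style boxes (addresses and interface names are placeholders): give the back-to-back link its own tiny subnet in ordinary ifcfg files, and no extra services are needed. The kernel's connected route sends traffic for the /30 out eth1, and everything else still uses eth0's default gateway.

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 on server 1
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.0.1
NETMASK=255.255.255.252   # a /30: room for exactly the two servers
ONBOOT=yes

# server 2 is identical but with IPADDR=10.0.0.2
```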

networking ethernet fiber
  • 6 Answers
  • 27142 Views
Smudge
Asked: 2011-09-04 11:28:15 +0800 CST

Postfix SMTP banner show multiple hostnames

  • 12

We have a Postfix SMTP server that's bound to two IP addresses and serves mail for two different domains. We can't change the domains to both use the same, single IP. Both IP addresses have reverse DNS

  • 1.1.1.1 reverses to mail.domain1.com
  • 2.2.2.2 reverses to mail.domain2.com

In our Postfix config I have

myhostname = mail.domain1.com
myhostname = mail.domain2.com

inet_interfaces = 1.1.1.1, 2.2.2.2

smtpd_banner = $myhostname Mail Server

(And some other stuff, which I think is irrelevant)

Using MXToolbox, running the SMTP test against mail.domain2.com returns everything as OK; however, running it against mail.domain1.com returns an error because Postfix identifies itself as 'mail.domain2.com' in the SMTP banner. How do I get it to return the correct banner based on the external IP address used?
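One caveat first: main.cf keys are single-valued, so the second myhostname line above simply overrides the first. A sketch of the usual per-IP approach in master.cf (addresses copied from the question; see master(5) for the column meanings): run one smtpd listener per address, each with its own hostname override.

```
# master.cf: one smtpd per listening address, each with its own banner
1.1.1.1:smtp  inet  n  -  n  -  -  smtpd
    -o myhostname=mail.domain1.com
2.2.2.2:smtp  inet  n  -  n  -  -  smtpd
    -o myhostname=mail.domain2.com
# (and remove/comment the catch-all "smtp inet ... smtpd" line)
```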

postfix
  • 1 Answer
  • 14721 Views
Smudge
Asked: 2011-07-31 03:57:48 +0800 CST

NGINX return correct headers with custom error documents

  • 6

I've set up NGINX to return custom error documents for my proxied server; it serves the correct file but always returns a 200 OK header.

The relevant NGINX config is

server {
    listen       94.23.155.32:80;
    server_name  rmg.io www.rmg.io;

    proxy_intercept_errors on;

    location / {
        proxy_pass http://rmgshort/;
    }

    error_page 404 = /error/404.html;
    error_page 500 501 502 503 504 = /error/500.html;

    location /error/ {
        root /var/rmg/;
    }
}

You can test this if you want: this page should return a 404 error, and it returns the correct document but changes the status code to '200 OK'. If I replace root /var/rmg/; with internal; the correct header is returned, but then my custom error page doesn't work.

How do I get NGINX to return my custom error document with the correct status header?

I'm running NGINX 1.0.4 on RHEL 6.1
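For the record, a sketch of what I understand the `=` to be doing: error_page CODE = /uri; tells NGINX to replace the status code with whatever serving the error document returns (200 for a static file). Dropping the `=` should keep the original code while still serving the custom page:

```nginx
error_page 404 /error/404.html;
error_page 500 501 502 503 504 /error/500.html;

location /error/ {
    root /var/rmg/;
    internal;   # added alongside root (not replacing it) to block direct hits
}
```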

nginx
  • 1 Answer
  • 9476 Views
Smudge
Asked: 2011-07-23 14:44:34 +0800 CST

Our security auditor is an idiot. How do I give him the information he wants?

  • 2445

A security auditor for our servers has demanded the following within two weeks:

  • A list of current usernames and plain-text passwords for all user accounts on all servers
  • A list of all password changes for the past six months, again in plain-text
  • A list of "every file added to the server from remote devices" in the past six months
  • The public and private keys of any SSH keys
  • An email sent to him every time a user changes their password, containing the plain text password

We're running Red Hat Linux 5/6 and CentOS 5 boxes with LDAP authentication.

As far as I'm aware, everything on that list is either impossible or incredibly difficult to get, but if I don't provide this information we face losing access to our payments platform and losing income during a transition period as we move to a new service. Any suggestions for how I can solve or fake this information?

The only way I can think of to get all the plain-text passwords is to get everyone to reset their password and make a note of what they set it to. That doesn't solve the problem of the past six months of password changes, because I can't retroactively log that sort of thing, and the same goes for logging all the remote files.

Getting all of the public and private SSH keys is possible (though annoying), since we have just a few users and computers. Unless I've missed an easier way to do this?

I have explained to him many times that the things he's asking for are impossible. In response to my concerns, he responded with the following email:

I have over 10 years experience in security auditing and a full understanding of the redhat security methods, so I suggest you check your facts about what is and isn't possible. You say no company could possibly have this information but I have performed hundreds of audits where this information has been readily available. All [generic credit card processing provider] clients are required to conform with our new security policies and this audit is intended to ensure those policies have been implemented* correctly.

*The "new security policies" were introduced two weeks before our audit, and the six months historical logging was not required before the policy changes.

In short, I need;

  • A way to "fake" six months worth of password changes and make it look valid
  • A way to "fake" six months of inbound file transfers
  • An easy way to collect all the SSH public and private keys being used
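For that last bullet, the one legitimately satisfiable request, a rough sketch (the helper name and the /home layout are my assumptions; it collects only the public halves and authorized_keys, which is all an auditor should ever see):

```shell
#!/bin/sh
# list_ssh_pubkeys ROOT: walk ROOT (normally /home) and print each user's
# public keys and authorized_keys files for the audit report.
list_ssh_pubkeys() {
    root=$1
    for d in "$root"/*; do
        [ -d "$d" ] || continue
        u=$(basename "$d")
        for f in "$d"/.ssh/*.pub "$d"/.ssh/authorized_keys; do
            if [ -f "$f" ]; then
                echo "== $u: $f"
                cat "$f"
            fi
        done
    done
    return 0
}

# e.g. list_ssh_pubkeys /home > keys-for-auditor.txt
```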

If we fail the security audit we lose access to our card processing platform (a critical part of our system) and it would take a good two weeks to move somewhere else. How screwed am I?

Update 1 (Sat 23rd)

Thanks for all your responses. It gives me great relief to know this isn't standard practice.

I'm currently planning out my email response to him explaining the situation. As many of you pointed out, we have to comply with PCI which explicitly states we shouldn't have any way to access plain-text passwords. I'll post the email when I've finished writing it. Unfortunately I don't think he's just testing us; these things are in the company's official security policy now. I have, however, set the wheels in motion to move away from them and onto PayPal for the time being.

Update 2 (Sat 23rd)

This is the email I've drafted out, any suggestions for stuff to add/remove/change?

Hi [name],

Unfortunately there is no way for us to provide you with some of the information requested, mainly the plain-text passwords, password history, SSH keys and remote file logs. Not only are these things technically impossible, but being able to provide this information would be both against PCI standards and a breach of the Data Protection Act.
To quote the PCI requirements:

8.4 Render all passwords unreadable during transmission and storage on all system components using strong cryptography.

I can provide you with a list of usernames and hashed passwords used on our system, copies of the SSH public keys and authorized_keys files (this will give you enough information to determine the number of unique users who can connect to our servers, and the encryption methods used), information about our password security requirements and our LDAP server, but this information may not be taken off site. I strongly suggest you review your audit requirements, as there is currently no way for us to pass this audit while remaining in compliance with PCI and the Data Protection Act.

Regards,
[me]

I will be CC'ing in the company's CTO and our account manager, and I'm hoping the CTO can confirm this information is not available. I will also be contacting the PCI Security Standards Council to explain what he's requiring from us.

Update 3 (26th)

Here are some emails we exchanged;

RE: my first email;

As explained, this information should be easily available on any well maintained system to any competent administrator. Your failure to be able to provide this information leads me to believe you are aware of security flaws in your system and are not prepared to reveal them. Our requests line up with the PCI guidelines and both can be met. Strong cryptography only means the passwords must be encrypted while the user is inputting them but then they should be moved to a recoverable format for later use.

I see no data protection issues for these requests, data protection only applies to consumers not businesses so there should be no issues with this information.

Just, what, I, can't, even...

"Strong cryptography only means the passwords must be encrypted while the user is inputting them but then they should be moved to a recoverable format for later use."

I'm going to frame that and put it on my wall.

I got fed up being diplomatic and directed him to this thread to show him the response I got:

Providing this information DIRECTLY contradicts several requirements of the PCI guidelines. The section I quoted even says storage (implying where we store the data on disk). I started a discussion on ServerFault.com (an online community for sysadmin professionals) which has generated a huge response, all saying this information cannot be provided. Feel free to read through it yourself:

https://serverfault.com/questions/293217/

We have finished moving our system over to a new platform and will be cancelling our account with you within the next day or so, but I want you to realize how ridiculous these requests are: no company correctly implementing the PCI guidelines will, or should, be able to provide this information. I strongly suggest you rethink your security requirements, as none of your customers should be able to conform to them.

(I'd actually forgotten I'd called him an idiot in the title, but as mentioned we'd already moved away from their platform so no real loss.)

And in his response, he states that apparently none of you know what you're talking about:

I read in detail through those responses and your original post, the responders all need to get their facts right. I have been in this industry longer than anyone on that site, getting a list of user account passwords is incredibly basic, it should be one of the first things you do when learning how to secure your system and is essential to the operation of any secure server. If you genuinely lack the skills to do something this simple I'm going to assume you do not have PCI installed on your servers as being able to recover this information is a basic requirement of the software. When dealing with something such as security you should not be asking these questions on a public forum if you have no basic knowledge of how it works.

I would also like to suggest that any attempt to reveal me, or [company name] will be considered libel and appropriate legal action will be taken

Key idiotic points if you missed them:

  • He's been a security auditor longer than anyone else on here (he's either guessing, or stalking you)
  • Being able to get a list of passwords on a UNIX system is 'basic'
  • PCI is now software
  • People shouldn't use forums when they're not sure of security
  • Posting factual information (of which I have email proof) online is libel

Excellent.

PCI SSC have responded and are investigating him and the company. Our software has now moved over to PayPal, so we know it's safe. I'm going to wait for PCI to get back to me first, but I'm getting a little worried that they might have been using these security practices internally. If so, it is a major concern for us, as all our card processing ran through them, and if they were doing this internally I think the only responsible thing to do would be to inform our customers.

I'm hoping when PCI realize how bad it is they will investigate the entire company and system but I'm not sure.

So now we've moved away from their platform, and assuming it will be at least a few days before PCI get back to me, any inventive suggestions for how to troll him a bit? =)

Once I've got clearance from my legal guy (I highly doubt any of this is actually libel, but I wanted to double-check) I'll publish the company name, his name and email, and if you wish you can contact him and explain why you don't understand the basics of Linux security, like how to get a list of all the LDAP users' passwords.

Little update:

My "legal guy" has suggested that revealing the company would probably cause more problems than it's worth. I can say, though, that this is not a major provider; they have fewer than 100 clients using this service. We originally started using them when the site was tiny and running on a little VPS, and we didn't want to go through all the effort of getting PCI compliance (we used to redirect to their frontend, like PayPal Standard). But when we moved to processing cards directly (including getting PCI compliance, and common sense), the devs decided to keep using the same company, just a different API. The company is based in the Birmingham, UK area, so I'd highly doubt anyone here will be affected.

security pci-dss
  • 30 Answers
  • 517473 Views
Smudge
Asked: 2011-07-09 07:25:09 +0800 CST

Postfix 'load balance' sending IPs

  • 5

I've got a server with 8 IP addresses to use as a mail server (with Postfix). I want Postfix to rotate the IP and hostname for each message. I found the config parameter

smtp_bind_address = 1.2.3.4

(And there's another one I can't remember that sets the hostname.) But that only lets me bind to one IP/hostname.

Example;
I have these IP's:

1.1.1.1 => mail1.mydomain.com
1.1.1.2 => mail2.mydomain.com
1.1.1.3 => mail3.mydomain.com
[etc]

The first message should be sent from 1.1.1.1, the second from 1.1.1.2, the third from 1.1.1.3, etc., so just round-robin balancing the available IPs.

Is this possible with Postfix?
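As far as I know Postfix has no built-in per-message rotation, but a common approximation is one smtp transport per source address in master.cf (the transport names out1/out2 are invented, and smtp_helo_name is probably the "other parameter" mentioned above):

```
# master.cf
out1  unix  -  -  n  -  -  smtp
    -o smtp_bind_address=1.1.1.1
    -o smtp_helo_name=mail1.mydomain.com
out2  unix  -  -  n  -  -  smtp
    -o smtp_bind_address=1.1.1.2
    -o smtp_helo_name=mail2.mydomain.com
# main.cf then spreads deliveries across them, e.g. via a transport_maps
# lookup (or, on Postfix 3.x, a randmap: table).
```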

linux email postfix
  • 3 Answers
  • 4906 Views
Smudge
Asked: 2011-07-03 06:27:55 +0800 CST

Crond offset five minute schedule

  • 26

Is it possible to offset a cron script set to run every 5 minutes?

I have two scripts. Script 1 collects some data from one database and inserts it into another; script 2 pulls out this data, along with a lot of other data, and creates some pretty reports from it. Both scripts need to run every 5 minutes, but I want to offset script 2 by one minute so that it can report on the new data. E.g. I want script 1 to run at :00, :05, :10, :15 [...] and script 2 to run at :01, :06, :11, :16 [...] every hour. The scripts are not dependent on each other, and script 2 must run regardless of whether script 1 was successful or not, but it would be useful if the reports could have the latest data. Is this possible with cron?

P.S.

I have thought about putting both commands in a single shell script so they run immediately after each other, but that wouldn't work: script 1 can sometimes get hung up waiting for external APIs and take up to 15 minutes to run, while script 2 must run every 5 minutes, so chaining them would stop/delay script 2. If I could set this in cron, script 2 would run regardless of what script 1 was doing.
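A crontab sketch (the script paths are placeholders). Vixie cron allows a step over a range, so the minute field "1-56/5" fires at :01, :06, ..., :56:

```
*/5    * * * *  /usr/local/bin/collect_data.sh    # :00, :05, :10 ...
1-56/5 * * * *  /usr/local/bin/build_reports.sh   # :01, :06, :11 ...
```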

cron
  • 5 Answers
  • 27847 Views
Smudge
Asked: 2011-06-24 05:33:32 +0800 CST

`Permission Denied` to CD into a directory even though permissions are correct

  • 22

This is so weird. Logged in to a Linux (RHEL) box as user 'g', doing an ls -lah shows

drwxrwxrwx 6 g    g    4.0K Jun 23 13:27 .
drwxrw-r-x 6 root root 4.0K Jun 23 13:15 ..
-rwxrw---- 1 g    g     678 Jun 23 13:26 .bash_history
-rwxrw---- 1 g    g      33 Jun 23 13:15 .bash_logout
-rwxrw---- 1 g    g     176 Jun 23 13:15 .bash_profile
-rwxrw---- 1 g    g     124 Jun 23 13:15 .bashrc
drw-r----- 2 g    g    4.0K Jun 23 13:25 .ssh

So the user 'g' in group 'g' /should/ be able to read and write the .ssh directory, but if I do ls -lah .ssh/ I get ls: .ssh/: Permission denied. I also get Permission denied if I try to cat any files in the directory.

If I go in as root and change the permissions to 700, 744, 766, or anything where the 'user' permission is 7, it works and I can cd and ls the directory and the files within.

id g returns

uid=504(g) gid=506(g) groups=506(g)

Edit:

I've copied these permissions exactly to another identical box and there is no issue. I can cd into a directory without execute permissions.
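For reference, a quick demonstration of the usual rule at play (all paths are throwaway temp dirs): the read bit on a directory only covers listing names, while the execute (x) bit is what cd and file access require, and drw-r----- has no x at all.

```shell
#!/bin/sh
# Show that read without execute on a directory blocks cd.
d=$(mktemp -d)
mkdir "$d/noexec"
touch "$d/noexec/file"
chmod 600 "$d/noexec"                     # drw------- : read, but no execute
( cd "$d/noexec" ) 2>/dev/null || echo "cd denied without x"
chmod 700 "$d/noexec"                     # drwx------ : execute restored
( cd "$d/noexec" ) && echo "cd allowed with x"
```

(Run as a normal user; root bypasses permission checks, which could also explain why the "identical box" behaves differently if it was tested as root.)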

linux permissions
  • 5 Answers
  • 85194 Views
Smudge
Asked: 2011-06-21 14:02:05 +0800 CST

Running a mail forwarding service, what are some steps I should take to prevent my IPs being blacklisted

  • 4

Background:
I'm about to launch a service which offers email forwarding for a large number of domains.

Say you want people to be able to contact you using [email protected], all emails sent to that address would be forwarded to [email protected]

We will have 3 servers with their own IPs, the servers will run Postfix and all domains will have abuse@ and postmaster@ addresses which will forward to our admins.

I don't want our IPs to get blocked. All we're doing is forwarding the received messages, and I don't really want to enable spam blocking on the forwarded mail (it's our users' mail, not ours; it should be up to their inbound email provider to detect spam and put it in a 'spam' folder. The only option we have is to delete spam messages, and if we start getting it wrong and people get upset that we're not delivering their email, we could get in trouble with them).

One option is to frequently (maybe once a month) get new IPs for the servers so they stay clean (we can get them for about $2/year per IP, so no huge cost to us, but we'd end up with a lot of 'soiled' IPs lying around which our hosting company won't take back). Are there any other suggestions for what we can do to avoid being blocked by email providers?

Edit:
We'll be forwarding between 15-20k emails a day to ~10k different accounts. We're currently using OpenSRS' email forwarding service, but it's too expensive now that we've reached this number of users.
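To make the setup concrete: the forwarding itself is plain Postfix virtual aliasing. A minimal sketch, with example.com and the destination addresses standing in for the real (redacted) ones:

```
# /etc/postfix/main.cf (excerpt)
virtual_alias_domains = example.com
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual -- run "postmap /etc/postfix/virtual" after editing
you@example.com         realyou@example.net
abuse@example.com       admins@example.org
postmaster@example.com  admins@example.org
```

Note that plain forwarding like this re-sends mail with the original envelope sender, which is exactly what makes the forwarding IPs inherit the reputation of whatever passes through them.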

email spam
  • 2 Answers
  • 355 Views
Martin Hope
Smudge
Asked: 2011-06-15 08:42:12 +0800 CST

My server is working fine, but I can't ping it

  • 9

I noticed earlier that one of our (many) EC2 instances doesn't respond to ping requests. Everything else works fine: SSH, HTTP, FTP and the database all work perfectly, but ping fails.

This instance is based on an image we use for about 40 nodes on EC2, and I don't remember ever having this issue before. I noticed because our main 'is it up' check for each server in Nagios uses ping.

Functionally, it's not a problem (I just started another instance and that one worked fine), but for my education (and because I was curious), why won't ping work when other services do?

Sam-Rudges-MacBook-Pro:~ sam$ curl -i http://50.19.x.x/
HTTP/1.1 302 Found
Date: Tue, 14 Jun 2011 16:38:36 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Content-Length: 0
Location: /dash
Server: TornadoServer/1.2.1

Sam-Rudges-MacBook-Pro:~ sam$ ping 50.19.x.x
PING 50.19.x.x (50.19.x.x): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
^C
--- 50.19.x.x ping statistics ---
6 packets transmitted, 0 packets received, 100.0% packet loss

(Blanked out the IP addresses, but they're the same)
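For what it's worth, ICMP is its own protocol and is filtered separately from TCP, which is how HTTP can answer while ping times out. A small sketch of the two checks side by side, run here against 127.0.0.1 with port 9, where nothing should be listening:

```shell
# ICMP echo and a TCP connect are independent reachability tests.
host=127.0.0.1
if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then icmp=ok; else icmp=blocked; fi
# /dev/tcp is a bash feature: open a TCP connection as a file descriptor.
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/9" 2>/dev/null; then
    tcp=open
else
    tcp=closed
fi
echo "icmp=$icmp tcp=$tcp"
```

On the instance itself the pattern would be reversed: the TCP checks succeed while ICMP fails until the EC2 security group is given a rule allowing ICMP echo requests, since security groups treat ICMP as a separate protocol from the TCP ports already opened.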

amazon-ec2 ping
  • 1 Answers
  • 25796 Views
Martin Hope
Smudge
Asked: 2011-06-15 00:18:45 +0800 CST

Nagios remote monitoring: NRPE Vs. SSH

  • 10

We use Nagios to monitor quite a few (~130) servers. We monitor CPU, Disk, RAM and a few other things on each server. I've always used SSH to run the remote commands, purely because it requires little to no additional config on the remote server, just install nagios-plugins, create the nagios user and add the SSH key, all of which I've automated into a shell script. I've never actually considered the performance implications of using SSH over NRPE.

I'm not too bothered about the load on the Nagios server (it's probably over-specced for what it does and has never been over 10% CPU), but we run each remote check every 30 seconds and each server has 5 different checks performed. I assume SSH requires more resources per check, but is there a big enough difference to warrant the switch to NRPE?

If it's any help, we monitor a mix of physical servers (Normally with 8, 12 or 16 physical cores) and Amazon EC2 medium/large instances.
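To put rough numbers on that schedule (figures from above; each SSH check pays a full key exchange, so the aggregate rate is the part that matters):

```shell
# Back-of-envelope check rate: 130 servers x 5 checks, every 30 seconds.
servers=130; checks=5; interval=30
per_cycle=$(( servers * checks ))
per_second=$(awk -v n="$per_cycle" -v i="$interval" 'BEGIN { printf "%.1f", n / i }')
echo "$per_cycle checks per cycle, $per_second checks/sec"
# -> 650 checks per cycle, 21.7 checks/sec
```

So the Nagios host is opening roughly 22 new SSH sessions per second around the clock; NRPE avoids that handshake cost by keeping a lightweight listener on each monitored host.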

performance monitoring nagios nrpe
  • 4 Answers
  • 12818 Views
Martin Hope
Smudge
Asked: 2011-06-05 06:17:52 +0800 CST

BIND HTTP "API"

  • 7

First off, I'm a bit of a bind n00b so if I say things that don't make sense just ignore them =)

Is there any software that provides "API"-like commands for creating, updating, deleting, etc., zones and records in BIND?
I have two DNS servers running on EC2 and I want to be able to easily manage domains on them from another app. I know something like cPanel has an HTTP API and interfaces with BIND, but that's probably overkill for what I need. I don't mind installing Apache/PHP/MySQL/Python/Rails/whatever else is needed to get it to work, but all the servers will be doing is DNS.

EDIT: Or get BIND to use MySQL for storing its config; then I could just write a simple PHP script to do the 'API' bits.
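One route that needs no extra stack: BIND supports dynamic updates (RFC 2136), so an HTTP wrapper could simply feed commands to the stock nsupdate tool. A sketch of the kind of update script such a wrapper would generate, assuming a zone example.com configured to accept TSIG-signed updates (the key path and records are placeholders):

```
; update.txt -- fed to: nsupdate -k /etc/bind/api-key.key update.txt
server 127.0.0.1
zone example.com
update delete www.example.com. A
update add www.example.com. 300 A 192.0.2.10
send
```

Since the updates go through BIND itself, the journal and zone files stay consistent, which is the usual argument for nsupdate over editing zone files (or a MySQL backend) directly.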

domain-name-system mysql bind
  • 4 Answers
  • 3839 Views
Martin Hope
Smudge
Asked: 2011-05-29 02:19:21 +0800 CST

"Cloud" file storage Vs. Self-Managed Servers (What's the big deal with cloud?)

  • 10

So I'm just going to jump right into the maths: my web host offers a server with 5x2TB drives and an unmetered 1GB/s connection, and can provision servers in 5 different data-centers, for ~$200/month.

If I got 3 of these servers and installed OpenStack Swift on them, I'd have ~10TB of storage (I know it'll be less than that, but to keep the maths simple I'll say 10TB) with the same features as Rackspace Cloud, but for $0.058/GB (compared to $0.15/GB for Rackspace), plus free, unlimited bandwidth. The servers could be provisioned in 3 different DCs for redundancy, and new servers are active within an hour, so we could scale up our storage reasonably quickly if we needed to. We'd also be using a CDN to deliver content, so yes, there would be bandwidth charges, but those are external and so irrelevant to the question.

Obviously it's only cost-effective when working with large amounts of storage (for, say, 2GB it's a lot less efficient), but we have 7.5TB of backups on an RSC files account, so our effective price per GB would be $0.078 (and that would decrease), compared to over $1000/month with our current Rackspace system.

So my question is: other than having to manage our own servers and put a bit more effort into scaling, what's the difference between a self-managed storage system and something like Rackspace Cloud, and is it worth the $/GB difference plus bandwidth charges?
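Re-deriving those per-GB figures (rounded to three decimal places, hence $0.059 rather than the truncated $0.058 above):

```shell
# 3 servers at $200/month, against ~10 TB usable and against the 7.5 TB stored.
monthly=$(( 3 * 200 ))
full=$(awk -v m="$monthly" 'BEGIN { printf "%.3f", m / (10 * 1024) }')
actual=$(awk -v m="$monthly" 'BEGIN { printf "%.3f", m / (7.5 * 1024) }')
echo "\$${full}/GB at full capacity, \$${actual}/GB at current usage"
# -> $0.059/GB at full capacity, $0.078/GB at current usage
```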

cloud-storage rackspace
  • 3 Answers
  • 711 Views
Martin Hope
Smudge
Asked: 2011-05-12 04:26:54 +0800 CST

Hardware Firewall Vs. Software Firewall (IP Tables, RHEL)

  • 37

My hosting company says iptables is useless and doesn't provide any protection. Is this a lie?

TL;DR
I have two co-located servers. Yesterday my DC company contacted me to tell me that, because I'm using a software firewall, my server is "Vulnerable to multiple, critical security threats" and my current solution offers "No protection from any form of attack".

They say I need to get a dedicated Cisco firewall ($1000 installation, then $200/month each) to protect my servers. I was always under the impression that, while hardware firewalls are more secure, something like iptables on Red Hat offered enough protection for the average server.

Both servers are just web servers; there's nothing critically important on them, but I've used iptables to lock down SSH to just my static IP address and to block everything except the basic ports (HTTP(S), FTP and a few other standard services).

I'm not going to get the firewall. If either of the servers were hacked it would be an inconvenience, but all they run is a few WordPress and Joomla sites, so I definitely don't think it's worth the money.
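For the record, the iptables setup described is only a few rules; a minimal sketch of a default-deny policy of that shape, with 203.0.113.5 standing in for the static IP:

```
# Default-deny inbound; allow loopback and established traffic,
# SSH from one static IP only, then the public services.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -s 203.0.113.5 --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 80,443,21 -j ACCEPT
```

(FTP would also need the conntrack FTP helper or an explicit passive port range, omitted here.)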

firewall redhat networking iptables
  • 6 Answers
  • 11597 Views

