Is it okay if we set the same HELO name on multiple MSA servers? Or should they be unique and include the actual server's hostname?
For a small number of reasons, we can't simply have our applications use a single MSA.
We have a failover setup for our webserver, where we mirror our live webserver in case of some kind of hardware or connectivity failure. At the moment, the process requires us to update the DNS, which obviously is at the mercy of propagation and caching. Propagation these days is pretty quick, but it seems that caching is becoming more sticky.
We haven't bothered going to a more complicated load balancing solution because there hasn't been any reason to do an unexpected switch from live to failover in the almost five years since we implemented the setup. The live server hardware is high spec and robust, using RAID for storage, and is co-located in a big datacenter with huge diesel generators and multiple big fat internet pipes. Even the massive statewide power outage we had last year didn't affect our server and sites.
But we do use the failover for planned outages - like performing site updates that occur once every 6 months. We make the switch in our DNS, and then have to wait for traffic to stop on the live server so we don't disrupt our users too much while updating.
So, is there a way for Apache HTTPD (some kind of HTTP header) to tell users' browsers that they should flush their DNS cache for our domain?
We're running SQL Server 2012 on Windows Server 2008 R2 for our CRM and ERP services.
Unfortunately, the majority of the setup was outsourced to an "expert" company who neglected to provide any documentation. For this and various other reasons, this company was dropped and replaced. Bridges were burnt, so there's no real chance of getting any more information from them.
We're experiencing some quite laggy behaviour with the server. It isn't heavily loaded in terms of CPU and memory, and only deals with the odd request from the CRM and ERP clients.
The laggy behaviour seems to stem from the fact that SSIS is reading megabytes of data each second from the hard drive.
https://en.wikipedia.org/wiki/SQL_Server_Integration_Services
http://www.ssistalk.com/2009/11/04/ssis-what-does-the-ssis-service-actually-do-anyway/
From what I can find, SSIS is there to run integration (import/export) packages that were developed to run on the SQL Server. I've found a few articles about excessive write operations due to SSIS, but nothing about excessive reads.
I'm also trying to figure out if it is safe to remove SSIS.
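In the meantime, to confirm that the reads really are coming from the SSIS service process rather than the database engine, I was going to watch a performance counter along these lines (a sketch, assuming the default MsDtsSrvr.exe service process; not output from our server):
rem compare the SSIS service process against the database engine, sampling every 5 seconds
typeperf "\Process(MsDtsSrvr)\IO Read Bytes/sec" "\Process(sqlservr)\IO Read Bytes/sec" -si 5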
We temporarily moved some CNAME records in our DNS to point to a different server while undertaking maintenance work - we do this every six months or so - and we allow a decent amount of time for propagation.
Everything seemed fine - users were being sent to the correct server, and when I used nslookup to test our primary and secondary DNS, the CNAME data was correct for all the domains we were redirecting. However, when using the service at whatsmydns.com, every DNS server the service polled reported "error: token mismatch".
Now that the maintenance work has been completed, the CNAME records have been returned to their original values - nslookup and whatsmydns.com all return these expected values.
I've tried searching for "error: token mismatch" - but all I can find are product/service support forums where the response is simply "your website seems fine", and none of them actually identify or discuss what a token mismatch is in terms of DNS resolution.
So - what is an "error: token mismatch" in this context?
We've got a few subdomains set up, and I want to simplify some Apache configuration - especially as these subdomains are volatile. Anyway, I'm trying to get a rewrite rule that includes the host.
So far, I've got:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(.*).net.au$
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%1.net.au$1 [R=302,L]
This is based on https://stackoverflow.com/questions/3634101/url-rewriting-for-different-protocols-in-htaccess and http://httpd.apache.org/docs/2.4/rewrite/vhosts.html
However, the pattern captured in the first condition isn't written into the rewritten address, and I end up being sent to https://.net.au/wherever - have I missed something?
Running apache 2.4 on Ubuntu 14.04
I want to copy all gzipped files from my apache logs that were created less than 43 days ago.
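The eventual copy will be something along these lines (the destination directory here is just a placeholder):
sudo find /var/log/apache2/ -mindepth 1 -ctime -43 -name "*.gz" -exec cp -t /path/to/archive/ {} +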
As a test, I simply listed my files from find:
sudo find /var/log/apache2/ -mindepth 1 -ctime -43 -name "*.gz" -ls
But the results include files created all the way back in August (when the server was set up) as well as newer files:
8781890 4 -rw-r----- 1 root adm 186 Aug 10 06:44 /var/log/apache2/error.log.13.gz
8781923 4 -rw-r----- 1 root adm 1717 Aug 17 06:29 /var/log/apache2/error.log.12.gz
stat /var/log/apache2/error.log.13.gz
File: `/var/log/apache2/error.log.13.gz'
Size: 186 Blocks: 8 IO Block: 4096 regular file
Device: 807h/2055d Inode: 8781890 Links: 1
Access: (0640/-rw-r-----) Uid: ( 0/ root) Gid: ( 4/ adm)
Access: 2014-11-13 10:34:14.444059675 +1030
Modify: 2014-08-10 06:44:11.000000000 +0930
Change: 2014-11-09 06:29:48.035930468 +1030
Why is the ctime argument not applying?
I'm trying to install a wildcard certificate (*.my.example.com) on an iDRAC6. The certificate was provided by RapidSSL (a subsidiary of GeoTrust) and is used across our Ubuntu/Apache webservers and other things, but I'm having trouble deploying it to the iDRAC that needs it.
So far, I've uploaded the key and certificate using Dell's idracadm software on Ubuntu:
> sudo /opt/dell/srvadmin/bin/idracadm -r idrac.my.example.com -i sslkeyupload -t 1 -f wildcard.my.example.com.key
< SSL key successfully uploaded to the RAC.
> sudo /opt/dell/srvadmin/bin/idracadm -r idrac.my.example.com -i sslcertupload -t 1 -f wildcard.my.example.com.crt
< Certificate successfully uploaded to the RAC. The RAC will now reset
< to enable the new certificate and may be offline temporarily.
I wait a short time, then try to use sslcertview to confirm the certificate, but I get:
< Security Alert: Certificate is invalid - unable to get local issuer certificate
I also get security warnings in Firefox - saying the issuer is unknown (but the rest of the details seem correct). So, I'm trying to upload the CA/intermediate certificates for RapidSSL and GeoTrust using:
> sudo /opt/dell/srvadmin/bin/idracadm -r idrac.my.example.com -i sslcertupload -t 2 -f intermediate.crt
But I get told the upload was unsuccessful as this certificate is invalid.
I've also created a PKCS#12 file with the key, certificate, and intermediates to upload, but this is not being uploaded either.
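For reference, the PKCS#12 bundle was built roughly like this (a standard openssl invocation; the exact file names may have differed slightly):
openssl pkcs12 -export -inkey wildcard.my.example.com.key -in wildcard.my.example.com.crt -certfile intermediate.crt -out wildcard.my.example.com.p12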
I'm using iDRAC7 with firmware 1.57.57
How do I resolve this?
Setting up a new webserver in Ubuntu 14.04 and trying to wrangle file permissions for PHP generated files.
By default, all the directories and files in /var/www are owned/grouped to www-admin. Directory permissions are rwxrwsr-x and file permissions are rw-rw-r--.
We then set the group on a limited number of directories to www-data - this is so that PHP (via Apache) can write log and cache files in this location.
However, I cannot get PHP to obey a umask of 0002, and so files generated by PHP are only writable by the www-data user. This is a problem, since we use continuous integration and some other cleanup processes.
So far, I have tried setting the umask in:
- /etc/pam.d/common-session
- /etc/pam.d/common-session-noninteractive
- /etc/profile
- /etc/apache2/envvars
- /etc/login.defs
- the GECOS field for www-data in /etc/passwd, using sudo chfn -o "umask=002" daemon_username
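In most of those the addition was essentially the same line; for example, in /etc/apache2/envvars it was along these lines (a sketch of the approach rather than a verbatim copy):
# appended to /etc/apache2/envvars
umask 002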
And I'm still stuck.
I've stopped/started the service, and even restarted the computer - no joy.
We're restructuring our network, and have configured our router/DHCP server as follows:
# public internet connection
auto eth0
iface eth0 inet static
address 192.168.10.10
netmask 255.255.0.0
gateway 192.168.0.1
# intranet
auto eth1
iface eth1 inet static
address 10.0.0.1
netmask 255.128.0.0
# DHCP relay for WLAN
auto eth2
iface eth2 inet static
address 10.254.0.1
netmask 255.255.0.0
# DHCP relay for VPN
auto eth3
iface eth3 inet static
address 10.253.0.1
netmask 255.255.0.0
The DHCP server is set to listen on eth1, eth2 and eth3, and configured with the following scopes:
subnet 10.0.0.0 netmask 255.128.0.0 {
range 10.100.0.0 10.100.255.255;
option routers 10.0.0.1;
}
subnet 10.254.0.0 netmask 255.255.0.0 {
range 10.254.1.0 10.254.255.255;
option routers 10.254.0.1;
}
subnet 10.253.0.0 netmask 255.255.0.0 {
range 10.253.0.0 10.253.255.255;
option routers 10.253.0.1;
}
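For completeness, the listening interfaces are set in /etc/default/isc-dhcp-server roughly like this (assuming the stock Ubuntu isc-dhcp-server package):
# /etc/default/isc-dhcp-server
INTERFACES="eth1 eth2 eth3"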
So - wireless devices connect to our Wi-Fi routers. Each Wi-Fi router is statically addressed (for example, 10.254.0.2 with the gateway set to 10.254.0.1). However, the devices connecting via Wi-Fi are getting VPN-range addresses (10.253.x.x).
Also, desktop computers, which I want to be in the 10.100.x.x range, are also getting VPN-range addresses.
This is on Ubuntu 12.04.
All the documentation I've found on the internet indicates that this is correctly configured - but I figure I must have missed something.
While trying to resolve a ports issue on a staging server, I happened to run an nmap on our production server, and found that port 800 was open:
800/tcp open mdbs_daemon
I have no idea what mdbs_daemon is - I have googled the port number and the daemon, but can only find a discussion hinting that it is something to do with NFS, but I'm pretty sure I've not installed anything related to that on my Ubuntu 10.04 production server.
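In the meantime, the obvious next step seems to be checking what is actually bound to the port on the server itself, with something like:
# run on the production server
sudo lsof -i :800
sudo netstat -tlnp | grep ':800'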
I'm just not sure how concerned I should be at this unexpected port.
We recently powered down a Windows 2008 server for a weekend as the building power supply was being worked on.
Right up until this shutdown, the backups had been happily running every day since they were first configured.
Today, we realised that the backups haven't run since the last night before the shutdown, and when we checked the backup scheduler it said there were no backups.
Why would this have happened?
Addendum: I can't be sure that this was the first time we powered down the server since setting up the backups.
We've been migrating about 220 GB of data from a Windows 2003 Server to a Windows 2008 Server, and because of the time it would take to copy that data and the necessity of keeping it available for users, I came up with the idea of using rsync on an Ubuntu server to broker the migration. (I might have gone for a proper Windows solution - but the applications I found were a bit pricey for a one-shot like this - and permissions are not a problem.)
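The sync itself is just a plain recursive rsync between the two mounted shares, along these lines (the mount points here are placeholders, not the real paths):
# recursive copy from the old share to the new one
rsync -av /mnt/old-2003-share/ /mnt/new-2008-share/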
All well and good - and today I'm making the last sync and confirming that the new server is up-to-date using diff, but I've noticed an odd thing with Excel spreadsheets (.xls).
Every instance of an Excel spreadsheet that was already copied in a previous synchronisation is being marked as "already up-to-date" by rsync. However, when I then run a diff, I'm told that the files differ. I'm manually copying them, as there are only a handful, but I was wondering what might be causing this.
No other filetype in the entire 220 GB tree has had any problem like this - just the Excel/xls files. It'd be great if someone could come up with an explanation.
We're running a Windows DNS (Server 2008, Active Directory) on our network. I have created domains under our zone for each of our web developers, and configured an A record under that domain.
While I can easily add CNAME records for sub-domains under a developer domain, I'd like the developers to be able to configure those for themselves - that way, they can generate new CNAMEs for new projects when they need to without bothering me.
For example:
Zone
- company.supernet.net.au
Developer domain
- frank.company.supernet.net.au
Developer subdomains (entered as CNAME records under the developer domain)
- redesign.frank.company.supernet.net.au
- project.frank.company.supernet.net.au
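To be clear, what I'd like a developer to be able to do for themselves is essentially this, scoped to their own name (a sketch using dnscmd; "dns01", "newproject" and the target host are placeholders):
dnscmd dns01 /RecordAdd company.supernet.net.au newproject.frank CNAME devbox.company.supernet.net.au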
Is there an easy way to allow this? I tried searching, but all I get is instructions on how to create a CNAME from the DNS administration directly. I don't want to give all of our developers full access to the server if it can be avoided.
I'm putting a new router on our network to help manage our private network connections and to better control routing to the outside. I've decided to simplify this question by removing references to a second private network on our router (but please be aware that there are reasons for this change, so "leave the old router as the default gateway" is not an answer).
I have the following routing in my iptables on our router:
# Allow established connections, and those !not! coming from the public interface
# eth0 = public interface
# eth1 = private interface #1 (129.2.2.0/25)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW ! -i eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow outgoing connections from the private interface
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# Masquerade (NAT)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# Don't forward any other traffic from the public to the private
iptables -A FORWARD -i eth0 -o eth1 -j REJECT
This configuration means that users will be forwarded through a modem/router that has a public address - this is all well and good for most purposes, and in the main it doesn't matter that all computers are hidden behind the one public IP.
However, some users need to be able to access a proxy at 192.111.222.111:8080 - and the proxy needs to identify this traffic as coming through a gateway at 129.2.2.126 (the old router) - it won't respond otherwise.
I tried adding a static route on our router with:
route add -host 192.111.222.111 gw 129.2.2.126 dev eth1
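For reference, the iproute2 equivalent of that static route would be:
ip route add 192.111.222.111/32 via 129.2.2.126 dev eth1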
I can successfully ping 192.111.222.111 from the router. When I trace the route, it lists the 129.2.2.126 gateway, but I just get *** on each of the following hops (I think this makes sense since this is just a web-proxy and requires authentication).
When I try to ping this address from a host on the 129.2.2.0/25 network it fails.
Should I do this in the iptables chain instead? How would I configure this routing?
Network diagram
Here is the interfaces configuration for the router:
auto eth0
iface eth0 inet static
address 150.1.2.2
netmask 255.255.255.248
gateway 150.2.2.1
auto eth1
iface eth1 inet static
address 129.2.2.125
netmask 255.255.255.128
And here is the routing table (without my static route added):
Destination Gateway Genmask Flags Metric Ref Use Iface
default eth1202.sa.adsl 0.0.0.0 UG 100 0 0 eth0
localnet * 255.255.255.0 U 0 0 0 eth1
129.2.2.0 * 255.255.255.128 U 0 0 0 eth1
To restate - I want traffic from 129.2.2.7 (for example) to now route through our router (129.2.2.125). But this router needs to then forward port 8080 requests destined for 192.111.222.111 - which is somewhere on the other side of the old router (129.2.2.126 - which is not manageable by us).
I have an unfortunate complication in my network - some users/computers are attached to a completely private and firewalled office network that we administer (10.n.n.x/24 intranet), but others are attached to a subnet provided by a third party (129.n.n.x/25) as they need to access the internet via the third party's proxy.
I have previously set up a gateway/router to allow the 10.n.n.x/24 network internet access:
# Allow established connections, and those !not! coming from the public interface
# eth0 = public interface
# eth1 = private interface
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW ! -i eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow outgoing connections from the private interface
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
# Masquerade (NAT)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# Don't forward any other traffic from the public to the private
iptables -A FORWARD -i eth0 -o eth1 -j REJECT
However, I now need to give users on our 129.n.n.x/25 subnet access to some private servers on the 10.n.n.x/24 network.
I figured that I could do something like:
# Allow established connections, and those !not! coming from the public interface
# eth0 = public interface
# eth1 = private interface #1 (10.n.n.x/24)
# eth2 = private interface #2 (129.n.n.x/25)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW ! -i eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow outgoing connections from the private interfaces
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
# Allow the two private interfaces to talk to each other
iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
iptables -A FORWARD -i eth2 -o eth1 -j ACCEPT
# Masquerade (NAT)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# Don't forward any other traffic from the public to the private
iptables -A FORWARD -i eth0 -o eth1 -j REJECT
iptables -A FORWARD -i eth0 -o eth2 -j REJECT
My concern is that I know that the computers on our 129.n.n.x/25 subnet can be accessed via a VPN through the larger network operated by the provider - therefore, would it be possible for someone on the provider's supernet (correct term? inverse of subnet?) to be able to access our private 10.n.n.x/24 intranet?
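If it is possible, I assume the mitigation is to drop the blanket eth2-to-eth1 accept above and only forward to the specific servers concerned, something like this (the server address is a placeholder):
iptables -A FORWARD -i eth2 -o eth1 -d 10.0.0.50 -j ACCEPT
iptables -A FORWARD -i eth2 -o eth1 -j REJECT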
I've recently built a server and decided to use UFW for the first time. I was investigating a possible connection issue (turns out it was their end) but I noticed that the logs are full of entries saying that traffic was blocked on port 80 and port 443 - rather worrying for a web server. Checking ufw status confirms that all traffic on these ports is allowed - additionally, we haven't had anyone report problems connecting to the server.
I found this other question: UFW logs blocked request on open port, what am I missing? - it set my mind at ease, but I'd prefer not to have these "FIN ACK" entries in my logs, so I can pick out legitimate entries more clearly.
Other than simply piping the log through grep, is it possible to selectively filter what gets logged via a UFW config setting?
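For reference, the stop-gap I have in mind is just filtering the log after the fact, along these lines:
# hide blocked packets aimed at the web ports
grep -vE 'DPT=(80|443) ' /var/log/ufw.log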
I've inherited "ownership" of a network which has a Windows 2003 Server running as a Domain Controller and file server (no IIS or DNS running). We also use an Ubuntu server running dnsmasq as an internal Name Server.
This setup doesn't seem to have been a problem until now - we are installing a new Windows 2008 server (new hardware and everything), and want to migrate services and data from the old server gradually. To start this process, we are trying to add the new server as a secondary controller on our domain. The server is connecting to the domain easily enough, but when we try and specify the forest for the new controller we end up with this error:
The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller (AD DC) for domain "DOMAIN.address.com":
The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)
The query was for the SRV record for _ldap._tcp.dc._msdcs.DOMAIN.address.com
Common causes of this error include the following:
- The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses:
xxx.xxx.xxx.xxx
One or more of the following zones do not include delegation to its child zone:
- DOMAIN.address.com
- address.com
- com
- . (the root zone)
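For what it's worth, the SRV record named in the error can be queried directly from the new server with something like:
nslookup -type=SRV _ldap._tcp.dc._msdcs.DOMAIN.address.com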
I'm now thinking that the solution is to make the 2008 server use the 2003 server for DNS instead of our Ubuntu name server. Is this the right solution? Are there other options? What might I and my team have missed?
I have already googled the message, and I have plenty of disk space available on the SVN server (it's about 4% usage of 150 GB).
I have noticed that when I try echo $TMPDIR at the command prompt on the SVN server, I get nothing.
What is making this a little confusing is that I only get this message from one location when I do an svn diff (that I've tested so far) - this error is not coming up when I try from three other computers (one of which is testing against the exact same repository, the other two are different repositories on the same SVN server).
About the only difference I can see is that the broken working copy is connecting to the server by an IP address where all the others are using a server name (although this resolves over DNS to the same IP Address).
I'm hoping that I don't have to scratch the broken working copy and checkout a new one - unfortunately, this is a legacy project and not all changes have been properly revisioned.
We perform nightly backups of a Windows file server by first creating an incremental backup file on the server each night (as well as a complete backup on Thursday night), and then copying that to a backup server running Linux/Ubuntu. In order to maintain off-site redundancy, we then rsync the backup directory to an external drive which is rotated after each nightly run.
Over time, the number of incremental backup files has increased at a steady pace (although, the size of these files has varied).
We've also noticed that the rsync process has steadily been taking longer, even though it is only copying the latest two files to the external drive.
This is the command:
rsync -vr --delete-before --log-file=/rsync_log.csv /backup/archive/ /mnt/usbdisk/ 2>&1
When we tested the command, and investigated the log file, we saw this:
...
2011/02/14 23:59:35 [14054] >f..T...... IncrementalBackup_2011_02_06_20.bkf
2011/02/15 00:00:45 [14054] >f..T...... IncrementalBackup_2011_02_08_20.bkf
2011/02/15 00:03:22 [14054] >f..T...... IncrementalBackup_2011_02_09_20.bkf
2011/02/15 00:04:36 [14054] >f..T...... IncrementalBackup_2011_02_11_20.bkf
2011/02/15 00:04:51 [14054] >f..T...... IncrementalBackup_2011_02_12_20.bkf
2011/02/15 00:05:06 [14054] >f..T...... IncrementalBackup_2011_02_13_20.bkf
2011/02/15 00:06:13 [14054] >f+++++++++ IncrementalBackup_2011_02_14_20.bkf
2011/02/15 00:54:32 [14054] >f..T...... Thursday_Full_Backup_2011_01_20.bkf
2011/02/15 03:24:41 [14054] >f..T...... Thursday_Full_Backup_2011_01_27.bkf
...
What we found was that the time taken on each file related to the size of the file - even when the file was being skipped (for example, the full backup took about 2.5 hours to process, while the incrementals took about 2-3 minutes or less).
The only file actually copied is the latest incremental file.
The only explanation we can think of is that rsync is performing a checksum of each file - even though the documentation says it does not by default, and we have not specified the --checksum switch on the command. Surely it can't take 2.5 hours to determine the timestamp and filesize?
After having gone over the documentation, I can't find any explanation other than that a checksum is being calculated. So, is there a way to be sure that checksumming is disabled?
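Failing that, the next thing I was going to try is a dry run with itemised output, to see exactly what rsync thinks differs on the files it is "skipping":
rsync -vrn --itemize-changes --delete-before /backup/archive/ /mnt/usbdisk/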
I've got a Shibboleth Service Provider (SP) up and running, and I'm using the TestShib Identity Provider (IdP) for testing.
The configuration appears to be all correct: when I requested my secured directory I was sent to the IdP, where I logged in, and was then sent back to https://example.org/Shibboleth.sso/SAML2/POST, where I get a generic error message.
Checking the logs, I am told:
found encrypted assertions, but no CredentialResolver was available
I have rechecked the configuration, and there I have:
<CredentialResolver type="File" key="/etc/shibboleth/sp-key.pem" certificate="/etc/shibboleth/sp-cert.pem"/>
Both of these files are present at those locations.
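One thing I still intend to verify is that the key and certificate actually pair up, using the standard openssl comparison (the two digests should match):
openssl x509 -noout -modulus -in /etc/shibboleth/sp-cert.pem | openssl md5
openssl rsa -noout -modulus -in /etc/shibboleth/sp-key.pem | openssl md5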
I've restarted apache and retried, but still get the same error.
I don't know if it makes a difference - but only a subdirectory of the site has been secured - the documentroot is publicly available.