One of the domains on my webserver is being used for serving static files. All the sites I have found that talk about forcing SSL mention editing your web.config file. I don't have one for this site. How would I force all HTTP traffic to HTTPS? Do I create a web.config file with the code in the root of the site?
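For reference, this is roughly what I would guess such a web.config needs to contain, assuming the IIS URL Rewrite module is installed (the rule name is arbitrary):

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <!-- redirect any plain-HTTP request to the same URL over HTTPS -->
            <rule name="Force HTTPS" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>

Would dropping something like that in the site root be enough, or does anything need to change at the server level?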
I work for a local government agency where we manage our own mailing list for pushing out news and information for our citizens. I try to stay proactive on removing email addresses that no longer exist and dealing with any spam blacklistings. However, there is one non-delivery report that is baffling me. Here is a screen shot:
The blurred out areas indicate our email server.
As you can see, there isn't any email address in there that tells me where the issue resides. I get several of these every time we send out another news release; everything is the same except the date, thread index, and message ID. I have been unable to find any of this in the email server's logs. How do I figure out what email address this relates to so I can remove it?
For context, the emails are generated on our web server using a separate piece of mail server software. Our primary email server runs Exchange. I have talked to our email server admin about this problem, and they don't see anything in their logs.
Update: Here is another type of report I get with the same problem. I understand what the error means, but how do I know what that domain is?
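Since the NDRs do include a message ID, would asking our Exchange admin to run something like this in the Exchange Management Shell (assuming Exchange 2007 or later) be a sensible way to trace it back to a recipient? The message ID below is just a placeholder for the one in the screenshot:

    # look up the message ID from the NDR in the message tracking logs
    Get-MessageTrackingLog -MessageId "<ABC123@ourserver.example.gov>" -ResultSize Unlimited |
        Select-Object Timestamp, EventId, Sender, Recipients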
I work for a local government that bills for water usage and garbage collection. I received a call today from a customer saying that a "New York company" had called him, asking for his customer number and PIN so they could access his online account and "scrape" his water usage from the site. They are apparently collecting that information for several apartments, for some reason he couldn't recall. Of course this raised flags with me, and I told him not to give them this information. I also told him that if they call back, he should tell this company that they can contact us directly to get this information if they require it.
Now, if one of our customers received a call, I feel it's safe to assume that others have gotten the same or similar call and they may or may not have given this information. How can I check our logs to see if there is a bot hitting our site and screen-scraping our data? I also feel we should block that bot and prevent further attempts.
Note: The only information stored on the web server is the name and address of the customer, the water usage, and the costs of the bills with a total amount due. Customers can also pay their bills there. We don't store any account information online. So overall, the information on the web server could be considered public information (through the proper channels).
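As a starting point, is counting requests per client IP in the web logs a reasonable way to spot this? Something like the following is what I had in mind, if the server keeps standard combined-format access logs (the path below assumes Apache on Linux; ours may differ):

    # list the 20 busiest client IPs in the access log
    awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20

    # and the user agents they send, in case the scraper identifies itself
    awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20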
I am running a web server and now I want to be able to use another device that also requires port 80, but I only have a single IP. My web server is Ubuntu and uses Apache. Is there a trick to reroute requests to a certain internal IP based on a domain name? How would I do that?
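I was imagining something along these lines in Apache, with mod_proxy forwarding requests for one hostname to the other device (the hostname and internal IP below are made up):

    # enable the proxy modules first: a2enmod proxy proxy_http
    <VirtualHost *:80>
        ServerName otherdevice.example.com
        ProxyPreserveHost On
        ProxyPass / http://192.168.1.50/
        ProxyPassReverse / http://192.168.1.50/
    </VirtualHost>

Is that the right approach, or is there a better way to split port 80 traffic on a single IP?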
I am running an Ubuntu web server. I have a working backup script that dumps MySQL and Subversion, backs both of those up along with the files in the /var/www folder, and stores everything on S3. What else should be backed up?
Currently, I am running the following:
- Apache
- MySQL
- Subversion
Eventually I may play with other things.
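Beyond the data itself, I was wondering about configuration; this is the sort of thing I had in mind adding (the paths assume a stock Ubuntu layout):

    # extra items I'm considering backing up alongside the data:
    #   /etc/apache2    - Apache vhosts and module config
    #   /etc/mysql      - MySQL server config
    #   /etc/ssl        - certificates and keys
    #   /var/spool/cron - user crontabs
    tar czf /tmp/config-backup.tar.gz /etc/apache2 /etc/mysql /etc/ssl /var/spool/cron

Am I missing anything obvious?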
I have found tutorials in the past for creating self-signed certificates. Now, for my personal website (on a virtual server), I want to use a self-signed SSL certificate for logging into my WordPress admin panel. The problem is that I am running multisite, so each site has a different hostname (domain.com, site2.domain.com, site3.domain.com) for the admin panel. Can I create a self-signed certificate that protects all of the subdomains as well as the root? How do I do that?
Update: Yes, Apache and OpenSSL. The instructions I found are at https://help.ubuntu.com/10.04/serverguide/C/certificates-and-security.html
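From that guide, I gather the wildcard just goes into the common name. Is something like this all it takes (the file names and the 365-day lifetime are placeholders), and do I also need a subjectAltName entry so the bare domain.com is covered in addition to *.domain.com?

    # generate a key and a self-signed certificate with a wildcard common name
    openssl genrsa -out domain.com.key 2048
    openssl req -new -x509 -key domain.com.key -out domain.com.crt -days 365 \
        -subj "/CN=*.domain.com"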
We currently have our web server in a DMZ. The web server cannot see anything within the internal network, but the internal network can see the web server. How safe would it be to punch a hole in the firewall between the DMZ and the internal network to only one web server in the intranet? We are working on something that will be interfacing with several of our back-office applications (which are all on one server) and it would be so much easier to do this project if we could communicate directly with the IBM i server holding this data (via web services).
From my understanding (and I don't know the brands), the DMZ sits behind one firewall with its own external IP, separate from our primary connection, which has another firewall. A third firewall sits between the web server and the intranet.
So something like:
    Web Server <==== Firewall ===== Intranet
         |                              |
         |                              |
      Firewall                      Firewall
         |                              |
         |                              |
    Internet IP1                  Internet IP2
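If the rule can be made that narrow, this is the shape of the hole I am picturing on the inner firewall, written iptables-style purely to illustrate (the addresses and port are made up):

    # allow only the DMZ web server to reach the IBM i box on the web-service port
    iptables -A FORWARD -s 10.0.1.10 -d 10.0.2.20 -p tcp --dport 443 -j ACCEPT
    # and drop anything else from the DMZ toward that server
    iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.20 -j DROP

Is a single, tightly scoped rule like that considered acceptable, or is there a safer pattern (reverse proxy, relay host, etc.) we should be using instead?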
Sorry if this has been talked about before; if so, please point me to those references. I am now running my own Ubuntu web server, but I have been hitting problems with authorities (permissions) when FTPing up files. Right now I have all of my web stuff under /var/www, but every time I do something there I then have to run chown www-data:www-data on it to make sure everything keeps working properly. In reality, I want to be able to FTP into my server, upload what I want, and just have it work without having to worry about authorities every time.

Should I have put everything under /home/user/public_html/mydomain1.com and /home/user/public_html/mydomain2.com? What ownership should those then have: user:www-data or user:user?
Do I need to make any changes to the Apache config?
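For what it's worth, if I stay with /var/www, this is the kind of arrangement I was imagining so my FTP user and Apache can both write (the user name is a placeholder):

    # add my user to the www-data group and make /var/www group-writable,
    # with the setgid bit so new directories keep the www-data group
    sudo usermod -a -G www-data mike
    sudo chown -R mike:www-data /var/www
    sudo chmod -R g+w /var/www
    sudo find /var/www -type d -exec chmod g+s {} \;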
Update: How do I set up ProFTPD to make sure that I can access /var/www, if that is the route I continue on?
I have an AWS EC2 instance running Ubuntu 10.10. I just migrated over my WordPress Multisite install and have that working.
Along with that, I have *.domain.com as an alternate server name on the virtual server for domain.com, so that it works for any additional blog I add beyond the ones I have now. The problem is that there are two subdomains that I don't want WordPress to handle. One of them works (media.domain.com); the other doesn't (sub.domain.com).
Any ideas on what I can do to get this to work?
All of this has been set up using Webmin.
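In plain Apache terms (outside of Webmin), this is roughly what I think each non-WordPress subdomain needs; my understanding is that whichever virtual host matches first in the config wins, so presumably it has to be loaded before the one carrying the *.domain.com alias (the DocumentRoot is a guess):

    <VirtualHost *:80>
        ServerName sub.domain.com
        DocumentRoot /var/www/sub
    </VirtualHost>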
On our mailing list, we have been getting the following message: "Maximum failsafe period has expired". What does that mean?
In searching Server Fault for situations similar to mine, I couldn't find one.
We have a "training" and a live database for an application. The "training" database is used for development and experimentation. I frequently backup the live database to a bak file then restore that database to the "training" database to refresh the data. (There is no issues in doing this as it is vendor recommended.) Could I eliminate that step by restoring from the live database to the "training" database?
Screenshot (Restore from Live): http://dl.dropbox.com/u/2732434/CopyDatabase_hide.png
How do you lock out the USB ports on desktop PCs so we can prevent the use of USB drives on the desktops?
I should clarify that these are Windows XP desktops.
We should also assume that, like most new desktops, many are using USB keyboards and/or mice.
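The one approach I keep running into is disabling just the USB mass-storage driver in the registry, which should leave USB keyboards and mice alone. Is this (or the equivalent pushed out via Group Policy) the usual way, or is there something better?

    rem set the USBSTOR service's Start value to 4 (disabled) so flash drives won't mount
    reg add HKLM\SYSTEM\CurrentControlSet\Services\UsbStor /v Start /t REG_DWORD /d 4 /f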