I have two servers behind a router and a domain name with a wildcard certificate (*.example.com). The certificate is from Let's Encrypt and is managed by server A, which runs IIS on Windows 10.
The other server, B, runs Apache2 on Ubuntu 20.
Server A listens on ports 80 and 443. Server B listens on ports 8080 and 8443 and is just for testing purposes. Port forwarding on my router sends incoming requests to the proper server.
But because of this setup, I am unable to request a Let's Encrypt certificate with server B. And why would I, when server A already has a wildcard certificate that I can use? All I need is some simple automated process that copies the certificate from A to B, so both servers share the same certificate. Makes sense?
I'm not looking for port forwarding: both servers use the same domain name (and similar subdomains), so that's of no use here. I'm also not looking for a manual solution, as this needs to be automated in an easy way. The systems are used as a development environment for the applications and sites that I develop, and the reason for sharing the certificate is simply to allow outside testers to do a complete test of what I make, no matter whether it runs on IIS/Windows or Apache2/Linux.
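To sketch the kind of automation I have in mind (the host name, the PEM paths and the export step are my own assumptions; Certify the Web can export the certificate as PEM files via a deployment task, and server A would need to offer SSH/SCP access to them):

```shell
#!/bin/sh
# Sketch of a nightly sync script on server B; hosts and paths are placeholders.
set -e

# Pull the exported certificate and private key from server A.
scp serverA.example.com:/certs/fullchain.pem /etc/ssl/example/fullchain.pem
scp serverA.example.com:/certs/privkey.pem   /etc/ssl/example/privkey.pem

# Reload Apache so the new certificate takes effect.
systemctl reload apache2
```

A cron entry such as `0 3 * * * /usr/local/sbin/sync-cert.sh` would then keep B current without any manual work.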
Wim ten Brink's questions
Cutting the fluff: is wildcard host header support possible in some way in IIS 8 for subdomains?
I know it's a bad design, which is why I keep the original question below to explain why...
I have two domains, example1.com and example2.com and I want both of them hosted on a simple server running Windows 2012 with IIS 8. I also have just one IP address and things are a bit challenging. However, this is not a production environment but a test environment for me to test complete website projects before they're delivered to a production environment. Then it's done and I start a new project with a new domain.
So I've set up one site in IIS and any incoming traffic goes to that site. I have a second site set up with a binding for the second domain, and that generally works fine when dealing with many different domains. But now I have a challenge.
I am now working on two projects that both rely heavily on subdomains for their users: for every user on the site there has to be a specific subdomain. The customer wants this, so don't tell me it's a bad idea; the customer is demanding. (And, horribly enough, I'm now working on two sites that need this. One would be easy.) So both sites need to accept all subdomains for a specific domain name: while most requests would go to example1.com, the URL wim.example2.com should go to the second website. That would be easy with a second IP address, but I have just one.
So how do I get all subdomains to go to the right site in IIS 8?
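To make the question concrete: what I would like is a binding with a wildcard host header, something like the appcmd call below (the site name is a placeholder). As far as I can tell, this syntax is only accepted on IIS 10 / Windows Server 2016 and later, hence my question about IIS 8:

```
%windir%\system32\inetsrv\appcmd.exe set site /site.name:"Example2" ^
  /+bindings.[protocol='http',bindingInformation='*:80:*.example2.com']
```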
As a software developer I have a simple web server (Windows 2012/IIS) which I use for two different domains. In general, I have a basic "Default" site as a catch-all and various other sites for specific domains or subdomains. Those specific sites are projects that I'm working on while the default site is just a generic toolbox. My question is about redirecting just some domains to HTTPS on this default site!
So I use a Let's Encrypt SSL certificate for this default site, and 'Certify the Web' to keep the certificate up to date. That works fine, as long as I bind the specific domain name to the site. But as I said, it is a catch-all site, so I might have bound example.com while www.example.com or my.example.com are unbound. And as there are many other possible subdomains, I tend to have far more pages without SSL than with it. Which is fine! No problem.
But when someone visits a domain that is bound, I want them to be redirected to the secure site instead: a visit to http://example.com should end up at https://example.com. And while I could force IIS to redirect all requests to the secure version, doing so would break most of the domains on my server, as they're unbound. So I only want redirects for the domains bound to this default site.
There is an alternative solution: creating a second default site that enforces SSL, so the bound sites would be secure. But this is a small server for all kinds of projects, and I don't want yet another site in the list of 40 sites that I already have. (And that list tends to grow even more.) So I want to know whether it is possible to force SSL just for the bound sites, while the unbound ones are not redirected.
(In case you're wondering, there will be some special logic behind the default site and the secure domains will have a login feature for me to handle some additional administrative tasks like checking the logs and server health. It's not a simple default site!)
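To make it concrete, I imagine something like a single URL Rewrite rule on the default site, where the condition lists exactly the host names that are bound (the host names below are placeholders, and this assumes the URL Rewrite module is installed):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="HTTPS for bound hosts only" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <!-- Only act on plain-HTTP requests... -->
          <add input="{HTTPS}" pattern="off" />
          <!-- ...and only for hosts that are actually bound (placeholders). -->
          <add input="{HTTP_HOST}" pattern="^(example\.com|my\.example\.com)$" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

The downside is that the host list in the condition has to be kept in sync with the bindings by hand, which is exactly the kind of bookkeeping I'd like to avoid.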
I have set up a simple Windows 2012 web server and developed an ASP.NET/MVC website that basically handles any request to the default website and writes a message to a log database if the domain is unknown to my server. (Known domains are handled by a different site on this server.) This is useful when I register a new website, as it will generate a default landing page once the DNS data is set and has become active. It also helps me see when people misspell a subdomain of one of my sites. And I would expect to see only domains that I have registered myself.
This has been running for a few days now...
However, at this moment two domains that I don't own, or even know of, seem to have been connected to my server (kkqxjc.loan and quwan18.com), and I suspect more unknown domains will appear to be connected to my site for whatever reason. The question is: why do these unknown domains connect to my server?
Could it be a flaw in the DNS system? Or is someone actively trying to hack my server by fooling the web server software in some way? Has anyone else seen similar behavior? Are they just phishing for access to the default site on the server?
Note: while the site is new, I have been using the server and IP address for over 8 years already. It's just recently that I decided to build a specific site as a catch-all for all incoming internet traffic.
Since posting this question I've noticed two more domains that seem to go to my server (gencybercamps.org and 360xdw.net). Each unknown domain is visited just once and doesn't seem to reach my server again, but the same visitors have accessed a few other domains that I do operate from this server. This suggests to me that this is a hacking attempt, where they try to gain access to the default virtual server on various systems.
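One thing I do realize is that the Host header is chosen entirely by the client, not by DNS, so no DNS flaw is needed for a strange domain to show up in my logs. Anyone (or any scanner) can send a request like the one printed below straight to my IP address, and my catch-all site will dutifully log the unknown name:

```shell
# Print the raw HTTP request a scanner might send directly to an IP address;
# the Host header can name any domain, whether or not its DNS points here.
printf 'GET / HTTP/1.1\r\nHost: kkqxjc.loan\r\nConnection: close\r\n\r\n'
```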
I have a web server which supports only the company's intranet, not the external Internet. About 100 different users will connect to a web application on this server, and they all have their own login accounts. Currently the web application doesn't support HTTPS because of some design flaws, which will take time to fix. So I have two options: 1) delay installation of the application until it does support HTTPS, or 2) just install the thing, because there isn't a huge risk since it runs on the internal network only. In time, it could be patched and moved to a secure server.
The data managed by the web application has a privacy element, since it's mostly customer information (but no credit card or bank account data). It's still considered sensitive information, though not sensitive enough to defend at all costs. And although there would be one or two users with some technical knowledge of computer hacking, most users are not IS specialists but just generic users. (All users use a regular user account, not an administrator account, and installing additional software by users is made [almost] impossible.)
So, is there a big risk if I install this application internally over a less secure connection?
Add-on: we don't support WLAN. Visitors can plug into the network but still won't have access to the intranet environment. I'm not an expert on this topic, but as I understand it, a machine needs to be part of our domain before it can access the intranet environment. (This has been tested!) Since users can't install additional software, it would be difficult for them to install hacker tools. Every system also has a good virus scanner that is kept up to date. Furthermore, all outgoing Internet connections have to pass through an additional proxy server that does a few extra checks, and the users know that their Internet behavior towards outside sites is monitored. And although any computer might be hacked, the system isn't sensitive enough to go into panic mode and make sure everything is secure. The worst-case scenario would be that a hacker deletes all data, which means we have to restore a backup, or that he gets a list with the snail-mail addresses of customers, their jobs and some financial information, but no references to bank accounts or other account details.
Add-on 2: I'm asking this simply because the regular administrator is unavailable at the moment. (He had an accident due to the bad weather conditions in the Netherlands. It's not critical, but it will take a while for him to recover.) The application is new and will have no data once installed. I was told that it could take up to two months before he has recovered enough to continue his work, so the decision is mainly between installing it without his support so it can be used already, or waiting until he returns, which would probably delay things for as long as he's away, or until we've found and trained a qualified replacement.
Technically, this is not a problem I need to solve. I'm developing a project where an application connects directly to an MS Access database. The customer suggested migrating the database to their SQL Server instance; the application would then use a different connection string to connect straight to the SQL Server database instead of MS Access. A simple test run proved that this works, with two users connected to the same database.
I don't know which version of SQL Server the customer is really using but I assume they know what they're doing. Right now, I'm adding some additional code changes and I can test the application on SQL Server Express 2005 with up to 5 users. But the customer will have over 500 users who will all use the same application at the same time, thus there will be 500 direct connections to the database.
I can't convince the customer to pay some extra for a better client/server model of this project. It would also require some additional development time. And I'm unfamiliar with problems that could arise with this many users on a single SQL Server database. So I feel as if I'm the captain of the Titanic and I've just spotted an iceberg...
So, will it sink? What are the risks of this setup which this customer is overlooking? Or is there no risk worth mentioning? (The customer is experienced with SQL Server so it's mostly me who is having doubts.)
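Part of my doubt is the connection count itself. As far as I know, ADO.NET pools connections per client process, so pooling won't reduce the total across 500 separate machines; each client would hold at least one connection built from a string like this (server and database names are placeholders):

```
Server=sqlserver01;Database=ProjectDb;Integrated Security=SSPI;Min Pool Size=0;Max Pool Size=5
```

That still leaves the server juggling hundreds of concurrent connections, which is the scenario I'd like the customer to have thought through.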
This is a question that just popped into my mind, and I can't help but wonder why it's still common for a Windows installation to end up on C:, with all other drive letters going up from D: to Z:. In the early MS-DOS days, all we had were floppy disks, and they were at A:. When the 3.5-inch floppy started to replace the 5.25-inch floppy, many people had both an A: and a B: drive. Then the hard disk became popular, and it was at C: because A: and B: were taken. Then the 5.25-inch floppy disappeared, and most computers had a gap between A: and C:. Nowadays the 3.5-inch floppy is just too outdated, so A: disappeared too. All disks now start at C:.
Yeah, I know I can assign my own drive letters, and I've done so with my data disks. My installation disk will just stay stuck at C:, and I don't really mind; I have no problems with drive letters.
But why do the new Windows versions just continue to install themselves by default on C: instead of assigning the letter A: to the boot hard disk?