Around 2000-2010, shared hosting was extremely popular as a cheap solution (sometimes a few dollars a month, or sometimes even free for just a few MB) for people starting blogs or small websites, e.g. using WordPress.
There was usually:
- just Apache + PHP + MySQL
- no SSH, only (s)FTP access
- something like 100 MB
- as far as I remember, they probably didn't create a new virtual machine for each account
Question: before containerization / Docker became popular, how did major shared hosting providers ensure user isolation?
Did they just use `ChrootDirectory` in `sshd_config` plus different users, as in "How to create an isolated/jailed SFTP user?", combined with a `<VirtualHost>` config using `open_basedir` to prevent PHP code from accessing other accounts' files?

More generally, what were the main isolation techniques preventing `user1234` from accessing `user5678`'s files on the same server with some malicious PHP code?
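For context, the kind of jailed SFTP setup I mean looks roughly like this (a minimal sketch; the `sftpusers` group name and the paths are only illustrative):

```
# /etc/ssh/sshd_config -- jail members of "sftpusers" into their home directory
Subsystem sftp internal-sftp

Match Group sftpusers
    ChrootDirectory %h          # %h = user's home dir; must be root-owned and not group/world-writable
    ForceCommand internal-sftp  # no shell, SFTP only
    AllowTcpForwarding no
    X11Forwarding no
```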
Short answer: they struggled!

Sometimes a bit of evolutionary history is a good way to understand where we came from and where we are right now...
A simple web server had to bind to an IP address. That really meant that if you restricted yourself to one port (80), you could only have one real domain per IP address (machine). However, you could specify a directory where the content lived, maybe a user's $HOME dir.
File access was enforced simply by user account permissions. Your user ID (UID) was deemed enough to separate accounts.
Because of the way a web server is architected, it doesn't really use the traditional model of per-user privileges and permissions: the web server generally ran as root (in the worst cases) or as a single under-privileged user (www-data/nobody) in the best cases.

The best thing about a web server was that it could transfer the files you wanted across a network to be rendered in a browser; the worst thing about a web server was that it could also transfer files you probably didn't want it to (/etc/passwd).
https://cwiki.apache.org/confluence/display/httpd/PrivilegeSeparation.
Then along came the Apache virtual host directive. This allowed the web server to identify which domain the client web browser wanted: with the invention of vhosts, the web server could serve files depending on the Host header name, not the IP address of the server.
https://httpd.apache.org/docs/current/vhosts/examples.html
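As a rough illustration (not taken from that page; the hostnames and paths are made up), a name-based setup for two accounts could look like this, with `open_basedir` pinning each account's PHP to its own tree, assuming mod_php:

```apache
# Two accounts on one IP address, selected by the Host header
<VirtualHost *:80>
    ServerName   user1234.example.com
    DocumentRoot /home/user1234/public_html
    php_admin_value open_basedir /home/user1234/:/tmp/
</VirtualHost>

<VirtualHost *:80>
    ServerName   user5678.example.com
    DocumentRoot /home/user5678/public_html
    php_admin_value open_basedir /home/user5678/:/tmp/
</VirtualHost>
```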
File transfer services such as FTP/SSH were linked via your username to an area you had permission to write to... (these systems also had their own security problems).

PHP caught on at the same time, and customer demand for being able to write active scripts mushroomed. They wanted scripts to run dynamically on the web server, and they wanted it NOW!

So it became a matter of trying to secure a Unix system where everything is effectively running as the same UID... can you see a problem starting to occur?
This started the web server security arms race!!!
An attack would be discovered, and then a patch or a way of dealing with it would be deployed... that usually meant more code in production, or restricting what the web server could do in terms of configuration.

Sometimes this would be a bug in the actual code; sometimes the configuration would allow a flaw to be exploited. The worst-case scenario was preventing your users from accessing some feature they relied on!
FollowSymLinks on Apache: why is it a security risk?
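The usual mitigation was along these lines (a sketch of common practice, not a quote from that question; the path is illustrative):

```apache
# Don't follow arbitrary symlinks out of the account's tree;
# only follow a link when the link and its target have the same owner
<Directory /home/user1234/public_html>
    Options -Indexes -FollowSymLinks +SymLinksIfOwnerMatch
    AllowOverride None
</Directory>
```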
So the hosts deployed fix after fix, patch after patch. As with all security controls, once you start to restrict by configuration, or patch code to 'do the right thing' in terms of security, you start to break compatibility and run into integration issues.

Add more arms-race technology like SELinux: although you can create a secure web server, you break so much software that it becomes useless... it either works, or it becomes so hard to manage that it is unmanageable. Now multiply that by X amount of users on the same machine... Every layer of security added could break existing PHP scripts or make debugging them extremely difficult.

You could get to a point where you would be ultra secure, but nothing would actually run... ;-)
Allowing customers to upload their own scripts to the machine, or a flaw that let attackers do the same, could lead to a compromise of that server, allowing an attacker to gain control not only of that account but to escalate to root privileges on that machine.

Even local privilege escalation bugs are a problem, because when you are running an active scripting language like PHP, you are effectively local on the machine.
The bad news is that all this is still with us.

The good news is that Docker and other container / virtual machine technologies only shift the problem along. However, you can use a much simpler configuration and less code in the container to do the same thing. Configurations can also be much, much simpler and can actually be managed effectively.

You can probably see why there is now a shift away from large 'kitchen sink' web servers with thousands of accounts on the same machine.
On shared hosting you usually had, and still have, only FTP access. You never got, and still do not get, SSH access to the actual file server.
The database, if you purchased one, is assigned to you on a separate database server, which is not guaranteed to be on the same physical host.
If you purchased an email account, it was also not guaranteed to be located on any specific physical host.
The hosting provider manages, today as always, all the traffic forwarding for its hosting clients.
Mostly you are required to purchase a domain together with your file hosting.
In the administration panel you configured the root directory and subdomains for your hosting.
If your files resided on a physical host, the hosting provider never told you the canonical name of that host, if there was one.
It still works like this today.
Containerization was introduced on the server side, transparently to the hosting clients.
Classical web applications like WordPress still build on this architecture.
They only require you to provide an FTP account and a database account.
Containerization mostly benefits the hosting provider, because it lets them swiftly and transparently move client content to wherever they see fit.
I have found that all these security hardening practices remain valid even in a containerized internet.
On the server side there are also some security hardening configurations at the system level that belong in any server setup; they are your first line of defense against attacks from the internet.
With Apache you restrict the directory browsing capabilities in the main `httpd.conf` configuration file.
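A sketch of what that typically looks like (the path is illustrative):

```apache
# httpd.conf -- turn off automatic directory listings
<Directory /home/ftp_user/public_html>
    Options -Indexes
    AllowOverride None
</Directory>
```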
With PHP you can disable functionality that allows the engine to run commands at the system level, using the `disable_functions` option.
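For instance (a sketch; which functions to block is a judgment call and depends on what your applications actually need):

```ini
; php.ini -- block functions that let PHP spawn system-level processes
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
```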
It is best to avoid any cron jobs at the system level. If you can't avoid them completely, they must run at the user level as the FTP user, with `sudo -u ftp_user` or in its own crontab with `crontab -u ftp_user`.
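For example (a sketch; `ftp_user` and the script path are placeholders):

```
# edit ftp_user's own crontab instead of the system one
crontab -u ftp_user -e

# example entry inside that crontab -- runs as ftp_user, not as root
*/15 * * * * /home/ftp_user/bin/cleanup_cache.sh
```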
At the web application level there is also a security risk that is far too often underestimated by web developers, as also pointed out before by @the-unix-janitor and in the referenced answer at https://serverfault.com/a/244612/460989.
Any cache files must be created at a directory level that is not accessible from the internet.
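For example, a layout along these lines (the directory names are illustrative):

```
/home/ftp_user/
    public_html/    <- document root, reachable from the internet
        index.php
    cache/          <- cache files and compiled templates, NOT reachable via HTTP
    logs/
```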
This falls under the responsibility of the web developer, because you can create and change it from your FTP account.
Often WordPress plugin developers urge you to give their plugin write access within the `wp-content` directory. But this is really bad practice and bad style. Take a cache file such as `cache_file.php`.
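A minimal sketch of the pattern (the file name comes from this answer; the surrounding code is illustrative):

```php
<?php
// bootstrap.php -- loads a generated cache file straight into the PHP engine
$cacheFile = __DIR__ . '/cache/cache_file.php';

if (is_file($cacheFile)) {
    // Whatever PHP code sits in cache_file.php is executed right here.
    // If an attacker can write to it, they run code inside your application.
    include $cacheFile;
}
```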
If `cache_file.php` is writable as a cache file, an attacker can replace its content with malicious code which your application loads directly into memory and executes. Compiled Twig templates are PHP code that is generated on the fly by the application; they must not reside within your `public_html` directory.