EDIT #2 July 23, 2015: Looking for a new answer that identifies an important security item missed in the below setup or can give reason to believe everything's covered.
EDIT #3 July 29, 2015: I'm especially looking for a possible misconfiguration like inadvertently permitting something that could be exploited to circumvent security restrictions or worse yet leaving something wide open.
This is a multi-site / shared hosting setup and we want to use a shared Apache instance (i.e. one that runs under a single user account) but with PHP / CGI running as each website's user to ensure no site can access another site's files, and we want to make sure nothing's being missed (e.g. if we didn't know about symlink attack prevention).
Here's what I have so far:
- Make sure PHP scripts run as the website's Linux user account and group, and are either jailed (such as using CageFS) or at least properly restricted using Linux filesystem permissions.
- Use suexec to ensure that CGI scripts can't be run as the Apache user.
- If you need server-side include support (such as in shtml files), use `Options IncludesNOEXEC` to prevent CGI from being run when you don't expect it to (though this shouldn't be as much of a concern if using suexec).
- Have symlink attack protection in place so a hacker can't trick Apache into serving up another website's files as plaintext and disclosing exploitable information like DB passwords.
- Configure `AllowOverride` / `AllowOverrideList` to allow only directives that a hacker couldn't exploit (see the sketch after this list). I think this is less of a concern if the above items are done properly.
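As a rough illustration of how those pieces might fit together in one Apache 2.4 vhost (the user/group names, paths and whitelisted directives below are assumptions, not part of the list itself):

```
# Hypothetical per-site vhost sketch
<VirtualHost *:80>
    ServerName site1.example.com
    DocumentRoot /home/site1/public_html
    # CGI runs as the site user, never as the Apache user
    SuexecUserGroup site1 site1
    <Directory /home/site1/public_html>
        # No SSI exec; only follow symlinks owned by the same user
        Options IncludesNOEXEC SymLinksIfOwnerMatch
        # Whitelist only directives a site owner can't abuse
        AllowOverride None
        AllowOverrideList Redirect RedirectMatch ErrorDocument
        Require all granted
    </Directory>
</VirtualHost>
```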
I'd go with MPM ITK if it wasn't so slow and didn't run as root, but we're specifically wanting to use a shared Apache yet make sure it's done securely.
I found http://httpd.apache.org/docs/2.4/misc/security_tips.html, but it wasn't comprehensive on this topic.
If it's helpful to know, we're planning to use CloudLinux with CageFS and mod_lsapi.
Is there anything else to make sure to do or know about?
EDIT July 20, 2015: People have submitted some good alternate solutions which are valuable in general, but please note that this question is only about the security of a shared Apache setup. Specifically, is there something not covered above which could let one site access another site's files or compromise other sites somehow?
Thanks!
I completely agree with the items you have so far.
I used to run such a multi-user setup a few years ago and I basically found the same trade-off: mod_php is fast (partly because everything runs inside the same process) and suexec is slow but secure (because every request forks a new process). I went with suexec, because user isolation was required.
Currently there is a third option you might consider: give every user their own php-fpm daemon. Whether this is feasible depends on the number of users, because every one of them has to get at least one php-fpm process running under their user account (the daemon then uses a prefork-like mechanism to scale for requests, so the number of processes and their memory usage may be limiting factors). You will also need some automated config generation, but that should be doable with a few shell scripts.
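For example, a minimal sketch of such a generator, assuming a RHEL-style php-fpm layout; the user list, socket paths and pool settings are assumptions:

```
#!/bin/sh
# Hypothetical: emit one PHP-FPM pool per site user so each site's PHP runs
# under its own account. Paths and the user list are assumptions.
for user in site1 site2 site3; do
  cat > /etc/php-fpm.d/${user}.conf <<EOF
[${user}]
user = ${user}
group = ${user}
listen = /var/run/php-fpm/${user}.sock
listen.owner = apache
listen.group = apache
listen.mode = 0660
pm = ondemand
pm.max_children = 5
pm.process_idle_timeout = 10s
EOF
done
systemctl reload php-fpm
```

Each vhost would then point PHP requests at its own socket, e.g. with `SetHandler "proxy:unix:/var/run/php-fpm/site1.sock|fcgi://localhost"` (Apache 2.4.10+).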
I have not used that method in large environments, but IMHO it is a good way to provide good PHP website performance while still isolating users at the process level.
Everything you have so far seems well thought out. The only thing that I could see as a problem is the fact that most exploits seek to gain root access in one way or another. So even if each site and its corresponding processes and scripts are jailed correctly and everything has its own user and permissions, a hacker with root couldn't care less; they will just sidestep everything you've set up.
My suggestion would be to use some sort of VM software (VMware, VirtualBox, QEMU, etc.) to give each site its own OS jail. This allows you, as a system admin, not to worry about a single compromised site. If a hacker gains root by exploiting PHP (or any other software) on a site's VM, just pause the VM and dissect it later, apply fixes, or roll back to an unbroken state. This also allows the site admins to apply specific software or security settings to their own site environment (which might break another site).
The only limitation to this is your hardware, but with a decent server and the correct kernel extensions this is easy to deal with. I've successfully run this type of setup on a Linode, granted both the host and the guest were very, very sparse. If you're comfortable with the command line, which I assume you are, you shouldn't have any problems.
This type of setup reduces the number of attack vectors you have to monitor and allows you to focus on the host machine's security and deal with everything else on a site-by-site basis.
I would suggest having each site run under its own Apache daemon, and chrooting Apache. All PHP system() calls will fail, since the Apache chroot environment will not have access to /bin/sh. This also means that PHP's mail() function won't work either, but if you're using an external mail provider to send mail from your email application, this shouldn't be a problem for you.
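A rough sketch of what one extra per-site instance could look like; the paths, port and names are assumptions, and the LoadModule lines are omitted for brevity:

```
# Hypothetical config for a second instance, started with
# "httpd -f /etc/httpd/conf/site1.conf"
ServerRoot   /etc/httpd
PidFile      /var/run/httpd-site1.pid
Listen       127.0.0.1:8081
User         site1
Group        site1
ErrorLog     /var/log/httpd/site1-error.log
DocumentRoot /home/site1/public_html
# For the chroot itself, add a ChrootDir directive (mod_unixd); test carefully
# which paths your Apache version resolves before vs. after the chroot.
```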
SELinux might be helpful with mod_selinux. A quick howto is featured here: How can I use SELinux to confine PHP scripts?
As the instructions are a little dated, I checked whether this works on RHEL 7.1: I used Fedora 19's version and compiled it with mock against RHEL 7.1 + EPEL. YMMV if you use the basic EPEL config that mock ships with. Upgrade your target system first to ensure that selinux-policy is current, then install the package on the target box (or put it on your local mirror first).
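As a purely hypothetical illustration of that workflow (the package names and mock config are assumptions, not the exact commands used):

```
# Hypothetical illustration only
mock -r epel-7-x86_64 --rebuild mod_selinux-*.fc19.src.rpm   # rebuild the Fedora SRPM against EPEL 7
yum update selinux-policy                                     # make sure the policy is current
yum localinstall /var/lib/mock/epel-7-x86_64/result/mod_selinux-*.el7.*.rpm
```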
There are a lot of good technical answers provided already (please also have a look here: https://security.stackexchange.com/q/77/52572 and Tips for Securing a LAMP Server), but I would still like to mention an important point (from yet another perspective) about security: security is a process. I'm sure you have considered this already, but I still hope it could be useful (also for other readers) to rethink it from time to time.
E.g., in your question you concentrate mainly on the technical measures: "this question is targeted only regarding the security of a shared Apache setup. Specifically, are there any security steps that are important to take but are missing from the list above when running shared Apache and PHP."
Almost all answers here and on the other two questions I mentioned also seem to be purely technical (except the recommendation to stay updated). From my point of view this could give some readers the misleading impression that if you configure your server according to best practice once, you stay secure forever. So please do not forget about the points I find missing in the answers:
First of all, do not forget that security is a process and, in particular, remember the "Plan-Do-Check-Act" cycle recommended by many standards, including ISO 27001 (http://www.isaca.org/Journal/archives/2011/Volume-4/Pages/Planning-for-and-Implementing-ISO27001.aspx). Basically, this means that you need to regularly revise your security measures, and update and test them.
Regularly update your system. This will not help against targeted attacks using zero-day vulnerabilities, but it will help against almost all automated attacks.
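On a RHEL-family box (like the CloudLinux setup mentioned in the question) one hedged way to automate this is yum-cron; the exact tool and settings here are assumptions:

```
# Hypothetical sketch: apply security updates automatically on a RHEL 7-style box
yum install -y yum-cron
sed -i 's/^update_cmd = .*/update_cmd = security/'  /etc/yum/yum-cron.conf
sed -i 's/^apply_updates = .*/apply_updates = yes/' /etc/yum/yum-cron.conf
systemctl enable --now yum-cron
```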
Monitor your system. I'm really missing this point in the answers. From my point of view, it is extremely important to be notified as early as possible about any problem with your system.
Here is what the statistics say about it: "Average time from infiltration to discovery is 173.5 days" (http://www.triumfant.com/detection.html), "205 median number of days before detection" (https://www2.fireeye.com/rs/fireye/images/rpt-m-trends-2015.pdf). And I hope these numbers are not what we all want to have.
There are a lot of solutions (including free ones), not only for monitoring the state of a service (like Nagios), but also intrusion detection systems (OSSEC, Snort) and SIEM systems (OSSIM, Splunk). If that becomes too complicated, you could at least enable something like fail2ban and/or forward your logs to a separate syslog server, and have e-mail notifications about important events.
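For the fail2ban side, a hedged minimal sketch; the jail names are standard fail2ban filters, but the thresholds are just example values:

```
# Hypothetical minimal /etc/fail2ban/jail.local
[DEFAULT]
bantime  = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true

[apache-auth]
enabled = true
```

Forwarding logs off-box can be as small as one rsyslog rule such as `*.* @@logserver.example.com:514` (the host name is an assumption).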
Again, the most important point here is not which monitoring system you choose; the most important thing is that you have some monitoring and revise it regularly according to your "Plan-Do-Check-Act" cycle.
Be aware of vulnerabilities. Same as with monitoring: just subscribe to some vulnerability list so you are notified when a critical vulnerability is discovered in Apache or another service important to your setup. The goal is to hear about the most important issues before your next planned update.
Have a plan for what to do in case of an incident (and regularly update and revise it according to your "Plan-Do-Check-Act" cycle). If you ask questions about secure configuration, it means that the security of your system has become important to you. However, what should you do if your system gets hacked despite all security measures? Again, I do not mean only technical measures here like "reinstall the OS": Where should you report the incident according to the applicable law? Are you allowed to shut down/disconnect your server immediately (and how much does that cost your company)? Who should be contacted if the main responsible person is on vacation or ill?
Have a backup, archive and/or replacement/replication server. Security also means availability of your service. Check your backups/archives/replication regularly and test your restore procedures regularly as well.
Penetration testing? (Again, see the "Plan-Do-Check-Act" cycle.) If that feels like too much, you could at least try some free online tools that scan your web services for malware and security issues.
Your use case is ideal for docker containers.
Each container can represent a customer or client, with unique user IDs assigned to each Apache container group as added security. The key would be to drop root privileges on container start, before starting your Apache stack (see the sketch below). Each customer gets their own DB service with their own unique passwords, without the headache of standing up dozens of virtual machines, each requiring their own special-snowflake kernel and other overhead. After all, at the heart of Docker is the chroot. Properly administered, I'd take that over a typical virtual cluster any day.
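A hedged sketch of one per-customer image; the base image, UID, port and paths are assumptions, not part of this answer:

```
# Hypothetical per-customer image sketch
FROM httpd:2.4

# Create the unprivileged site user, move Apache to an unprivileged port,
# and let the site user write the pid file.
RUN set -eux; \
    useradd -u 10001 -U site1; \
    sed -ri 's/^Listen 80$/Listen 8080/' conf/httpd.conf; \
    chown -R site1:site1 logs

COPY site1-htdocs/ htdocs/

# Drop root before the Apache stack starts
USER site1
EXPOSE 8080
```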
Lots of good suggestions here already. There's stuff that's been missed in the discussion so far though.
Pay attention to processes outside of those run as part of serving web pages, i.e. make sure that all your cron jobs that touch untrusted data run as the appropriate user and in the appropriate jail, whether those jobs are defined by the user or not (see the cron sketch below).
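For illustration (the schedule, user and script path are assumptions), an /etc/cron.d style entry that runs as the site user rather than root:

```
# Hypothetical /etc/cron.d/site1-stats: field 6 is the user the job runs as
15 3 * * * site1 /home/site1/bin/process-stats.sh
```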
In my experience, things like log analysis, when provided by the hosting service, run as root almost as often as not, and the log analysis software does not get as much security auditing as we might like. Doing this well is a little tricky and setup-dependent. On the one hand, you don't want your root-owned Apache process (i.e. the parent process) writing to any directory the user could compromise. That probably means not writing into the jail directly. On the other hand, you need to make those files available to processes in the jail for analysis, and you'd like that to be as close to real time as possible. If you can give your jails access to a read-only mount of a filesystem with the logs, that should be good.
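For example, a read-only bind mount of the per-site log directory into the jail (the paths are assumptions; the remount step is what actually makes the bind read-only on older kernels):

```
mount --bind /var/log/httpd/site1 /srv/jails/site1/var/log/httpd
mount -o remount,bind,ro /srv/jails/site1/var/log/httpd
```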
PHP apps typically don't serve their own static files, and if you have a shared Apache process then I'm guessing that your Apache process is reading stuff straight out of the jails from the host environment? If so, that opens up a variety of concerns.
`.htaccess` files are an obvious one, where you'd need to be very careful what you allow. Many if not most substantial PHP apps are very dependent on `.htaccess` file arrangements that you probably can't allow without subverting your planned scheme.
Less obvious is how Apache decides what is a static file anyway. E.g. what does it do with a `*.php.gif` or `*.php.en` file? If this or another mechanism fools the discrimination as to what is a static file, is it possible for Apache to run PHP at all from outside the jail? I'd set up a separate lightweight web server for static content, configured without any modules for executing dynamic content, and have a load balancer decide which requests to send to the static server and which to the dynamic one (see the handler-mapping sketch at the end of this answer).
Regarding Stefan's Docker suggestion, it is possible to have a single web server that sits outside the containers and talks to PHP daemons in each container for the dynamic content, while also having a second web server that sits in a Docker container and shares the volumes each container uses for its content, and is thus able to serve the static content, much as in the previous paragraph. I commend Docker among the various jail-type approaches, but with this or any other jail-type approach you will have a bunch of other issues to work through. How does file upload work? Do you put file transfer daemons in each container? Do you take a PaaS-style git-based approach? How do you make logs generated inside the container accessible, and roll them over? How do you manage and run cron jobs? Are you going to give the users any sort of shell access, and if so, is that another daemon within the container? Etc., etc.
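On the "what counts as PHP" point, a hedged sketch of a handler mapping that only fires when .php is the final extension, so names like index.php.gif are served as plain files rather than executed (Apache 2.4.10+; the per-site socket path is an assumption):

```
# Hypothetical: match only when .php is the last extension
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/var/run/php-fpm/site1.sock|fcgi://localhost"
</FilesMatch>
```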
The first thing I don't see is process management, so that one site's processes cannot starve another's of CPU or RAM (or I/O for that matter, though your filesystem may be architected to prevent that). One major advantage of a "containers" approach to your PHP instances vs. trying to run them all on one "OS" image is that you can restrict resource utilization better that way. I know that's not your design, but it's something to consider.
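If you do stay on a single OS image, a hedged sketch of bolting per-site limits on with systemd cgroup slices; the unit name and values are assumptions, and each site's PHP-FPM (or mod_lsapi) service would need to be assigned to its slice (e.g. Slice=site1.slice):

```
# Hypothetical /etc/systemd/system/site1.slice -- the limits are example values
[Slice]
CPUQuota=50%
MemoryLimit=512M
```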
Anyway, back to the use case of PHP running behind Apache, basically functioning as a proxy. suexec does not prevent something from running as the Apache user; it provides the capability to run as another user. So one concern is going to be making sure that is all done properly; the documentation calls out the potential dangers: https://httpd.apache.org/docs/2.2/suexec.html. So, you know, grain of salt and all that.
From a security standpoint it can be helpful to have a restricted set of user binaries to work with (which CageFS supplies), particularly if they are compiled differently or against a different library (e.g. one that does not include unwanted capabilities). The danger is that at that point you are no longer following a known distribution for updates; you are following a different distribution (CageFS) for your PHP installations (at least with respect to user-space tools). Though since you're probably already following a specific distribution with CloudLinux, that's an incremental risk, not necessarily interesting on its own.
I would keep the AllowOverride restrictions in place where you might have intended them. The core idea behind defense in depth is to not rely on one single layer to protect your whole stack. Always assume something can go wrong, and mitigate when it does. Repeat until you've mitigated as well as you can, even if you have only one fence in front of your sites.
Log management is going to be key. With multiple services running in isolated filesystems, integrating activities to correlate when there is a problem could be a minor pain if you haven't set that up from the beginning.
That's my brain dump. Hope there's something vaguely useful in there. :)