Yes. It is possible and recommended for extra security (see why at the end).
We can use Nginx's reverse proxy capability for this. Each host runs as a separate process (and user), working as an individual web server. Nginx listens on the standard HTTP(S) ports (80/443) and proxies external requests to the appropriate host.
There are two main ways to achieve this on Nginx:
proxy_pass: Complex solution, but more flexible. You set up a separate web-server instance for each virtual host. It is usually implemented using containers (e.g., Docker) plus a Linux service (systemd) to orchestrate it all (both directives are sketched right after this list);
fastcgi_pass: Simple and easy, but the implementation is different for each language and not supported by all languages. It has limitations, but works well for most basic cases;
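For orientation, the two directives look roughly like this inside a server block (a minimal sketch; the port and socket path are illustrative, not prescribed by this answer):
# proxy_pass: hand requests to a separate per-site web server (e.g., a container)
location / {
    proxy_pass http://127.0.0.1:8081;
}
# fastcgi_pass: hand PHP requests to a per-site PHP-FPM pool socket
location ~ \.php$ {
    fastcgi_pass unix:/run/php/mywebsite1.sock;
}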
PHP
For PHP users, you can easily achieve this with PHP-FPM.
If you are not sure you are using PHP-FPM, you probably are, as it is the default setup for Nginx with PHP.
The idea is to create multiple "pools", one for each host. Then, we associate a different user with each pool.
1. Create a user for each virtual host
Let's say we have the virtual host (aka "server block") mywebsite1. Its folder is /var/www/mywebsite1.
First, we create a new user and group for it:
adduser myuser1
This command also creates a group with the same name (myuser1) and assigns the user to it.
Now, this user should be the only one to access its website folder and files:
chown -R myuser1:myuser1 /var/www/mywebsite1
I don't recommend adding the user to the "www-data" group.
See the "Nginx file access" section for more details and solutions if you need Nginx to access the website files.
2. Create pools
Open the main pool config file (e.g., /etc/php/7.0/fpm/pool.d/www.conf) and add one pool for each virtual host:
Pool #1 (mywebsite1):
[mywebsite1]
; PHP code in this pool runs as this user/group:
user = myuser1
group = myuser1
...
listen = /run/php/mywebsite1.sock
...
; socket file permissions: these let the Nginx user connect to the socket
listen.owner = www-data
listen.group = www-data
Pool #2 (mywebsite2):
[mywebsite2]
user = myuser2
group = myuser2
...
listen = /run/php/mywebsite2.sock
...
listen.owner = www-data
listen.group = www-data
Alternatively, you can create a new .conf file for each pool. That keeps the configuration better organized.
The listen.owner and listen.group must match the user and group Nginx runs as (usually www-data). This does not mean the www-data user can access the website folder and its files; it is only about the socket file's permissions. It means www-data is allowed to connect to the socket and forward requests to PHP-FPM.
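3. Change the server blocks
Assign each server block to its pool via fastcgi_pass. A minimal sketch (the server names and the snippets/fastcgi-php.conf include, which ships with Debian/Ubuntu's nginx package, are assumptions; adapt them to your setup):
Host 1:
server {
    listen 80;
    server_name mywebsite1.example;
    root /var/www/mywebsite1;
    index index.php;
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/mywebsite1.sock;
    }
}
Host 2:
server {
    listen 80;
    server_name mywebsite2.example;
    root /var/www/mywebsite2;
    index index.php;
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/mywebsite2.sock;
    }
}
4. Restart the FPM and NGINX services
sudo service php7.0-fpm restart
sudo service nginx restart
PS: Replace "php7.0-fpm" with your actual PHP-FPM service name.
5. Testing
Create a pinfo.php (or whatever name) file that prints the current process user:
<?php echo exec('whoami');
Or create the pinfo.php file via the terminal:
echo "<?php echo exec('whoami');" > /var/www/mywebsite1/pinfo.php
Then open http://.../pinfo.php in your browser. Each host should print its own user (myuser1, myuser2, ...).
Nginx file access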
Do not add the users to the www-data group. Otherwise, it may defeat the purpose of doing all of this.
An attacker who gains control of a website can access files owned by the www-data group, including Nginx's files.
Also, do not change the ownership of the files and folders to the www-data group; otherwise, an attacker who gains control of another host can access them. Keep them owned by user:usergroup.
Nginx doesn't require direct access to the website's folders and files.
The only reason to give Nginx direct file access is to serve static files (images, videos, etc.) more efficiently.
If you want to do this, see the next section, "Serving static content via Nginx".
Serving static content via Nginx
Let's say you allow Nginx to access the public static files for performance reasons. Suppose these public static files are located in /var/www/mywebsite1/public.
First, add an alias for them, and make sure the fastcgi_pass location does not capture that folder:
server {
    ...
    # "^~" gives this prefix location priority over regex locations,
    # so everything under /public is served directly by Nginx:
    location ^~ /public {
        alias /var/www/mywebsite1/public;
    }
    # PHP requests still go to this site's own PHP-FPM socket
    # (note: mywebsite1.sock, not another site's socket):
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/mywebsite1.sock;
    }
    ...
}
After that, we need Nginx to have permission to access the files.
There are a few options:
1. Add the www-data user to the website's group
Put the Nginx user (www-data) into the website's group (myuser1, created by adduser above):
usermod -aG myuser1 www-data
This lets Nginx access the websites' files, but not the other way around.
2. Allow all users to read the public assets folder
These files are publicly visible on the internet anyway.
# Only do this if you don't want the previous solution
chmod -R o+r /var/www/mywebsite1/public
Why use multiple users (security reasons)
Suppose you run all your websites under the same user (e.g., www-data). In this case, PHP code calling system()/passthru()/exec() has access to all the websites!
NGINX will not protect you against this.
PHP is just an example; every popular server-side language has similar calls. As a hacker, you can ls .. to navigate through all the websites. You can cp/echo/mv to write your code into any file (including other websites' files).
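To make that concrete, here is a hypothetical malicious script (the /var/www/othersite path is illustrative) showing what a compromised site could do when everything runs as www-data:
<?php
// Runs as www-data, like every other site on this server,
// so it can list and read a neighboring site's files:
echo shell_exec('ls -la /var/www/othersite');
// ...and, if those files are writable by www-data, overwrite them:
file_put_contents('/var/www/othersite/index.php', '<?php /* attacker code */');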
Even if all the websites on the server belong to the same person (e.g., you), running each website under a different user is advisable, as it prevents a hacker or malware that compromises one website from reaching the others.
No, because all server stanzas in an nginx config are served from the same set of worker processes. Furthermore, from a security perspective, you're better off running it like that, as it means the content is automatically unwritable by the webserver (absent stupidities like a chmod -R 0777), so that if there is a vulnerability in nginx, none of the content is at risk.
In response to Ivan's comment above, which seems applicable to the OP. Two things:
The application document root would be something like /blah/peterWeb/html and /blah/johnWeb/html. Neither NGINX nor Apache2 would allow one to peruse or operate in the other's directory, even if both are running with www-data as their group.
Placing each directory tree under its own user's permissions would allow each user to ssh/log in to the UNIX system and keep their directories private from each other - just don't put each user into the www-data group. If you agree, then your sentence:
every user that can serve a PHP script or a cgi-bin process can access
any file accessible to the www-data user.
might be more accurately written as:
every user that you put in the same group as the apache/nginx server (www-data)
can then do whatever they want (including running a php script) in any file that is
accessible to it (which would essentially be everything on a web server).
EDIT 1:
While addressing some server-admin issues, I looked further into this topic. I was unaware of how accurate Ivan's information was! If you intend to give users the ability to upload and run scripts on a shared hosting configuration, then take heed. Here is one approach. Hat tip to Ivan for making sure I understood this vulnerability.
I'd like to elaborate on the fine answer Col. Shrapnel posted before I could.
There are so many tutorials/answers doing php(-fpm) users and groups the wrong way
Like Daniel's answer, for example.
You shouldn't add site1_user, site2_user etc. to the www-data/nginx group, but the other way around! (When I see www-data I automatically assume Apache is the webserver, as SUSE's nginx package creates an accordingly named user and group, as any sane distro should...)
If you add both PHP users to the same web-server group, sooner or later you will have to grant the nginx group write access to all of the sites' roots: that just throws security out the window; you are basically back to nginx owning and running everything!
You don't want one set of users (or a single user) with write access to everything - that's the point! You want disjoint access sets: site1_group and site2_group, with only one singular entity - the webserver - in their intersection!
I see sentences like "works either way around" that are so far from the truth I could scream.
It is not crystal clear in Daniel's answer (because he didn't mention group memberships at all!), but he is doing it the wrong way too: otherwise the pools' config wouldn't have listen.group = www-data, but listen.group = siteX_group.
The listen.owner can be either the webserver's or the php/site's user; it doesn't really matter, as long as the owning group can write to the socket.
The comments under that question beautifully illustrate my point: either your nginx can't access the static content and your site is broken, or you have made security nonsense of the whole thing.
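As a sketch of the arrangement argued for here (the pool, user, and group names are illustrative, not taken from the original answers):
[site1]
user = site1_user
group = site1_group
listen = /run/php/site1.sock
; The socket belongs to the site's own group, not to www-data/nginx:
listen.owner = site1_user
listen.group = site1_group
listen.mode = 0660
; ...and the webserver user joins that group to be able to connect:
; usermod -aG site1_group nginx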
Even better than the first option: use ACL if you can!
ACLs solve this problem in a way that you don't have to add any extra group (for every frickin' site!) at all!
The compiled php-fpm has to support it, and the partition should be mounted with the acl flag, so maybe it is not an option for every reader.
Use it, however, if it's available.
To share PHP's socket access with the webserver: PHP-FPM has listen.acl_users, where you should list both nginx and siteX_user. For the webroot, you grant write access with setfacl -Rm u:nginx:rwx /path/to/siteX_root (for the existing files/directories) and setfacl -Rdm u:nginx:rwx /path/to/siteX_root (for future files/directories).
The previous commands assume siteX_user owns its webroot (which seems logical to me) and not nginx, but you could swap the owner user and the user in the setfacl commands' arguments.
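The pool config then shrinks to something like this (again, the names are illustrative):
[site1]
user = site1_user
group = site1_group
listen = /run/php/site1.sock
; With ACLs on the socket, no listen.owner/listen.group juggling is needed:
listen.acl_users = nginx,site1_user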
Run your (master) nginx/php-fpm processes with systemd's service instances!
Here is an article, so I don't have to write as much.
Basically, you'll have services like php-fpm@site1, php-fpm@site2, etc., and they will have their own site1.conf, site2.conf, etc. - without actually writing all the systemd unit files...
You could apply this technique to the separate nginx instances Col. Shrapnel suggested, but the more important part is php-fpm: if you do only one of these, do that one. I'd advise copying your distro's default php-fpm systemd unit file as the starting point for the instance template, and NOT the one in the article: SUSE's packaged php-fpm unit file, for example, contains several additional security restrictions, like blocking write access to /dev...
You could also opt to merge php-fpm.conf and php-fpm.d/siteX.conf into a single file referenced by the templated service unit: from now on, each php-fpm.conf has a one-to-one include relation with its pool's conf anyway...
The benefits are in the article, but most importantly, one fpm pool restart/error/global_fork_limit won't bring down all of your sites together. That is reason enough in itself!
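For orientation only, a stripped-down instance template could look like the sketch below (the paths, binary name, and Type are assumptions that vary by distro; as said above, start from your distro's packaged unit file instead):
# /etc/systemd/system/php-fpm@.service
[Unit]
Description=PHP FastCGI Process Manager (instance %i)
After=network.target

[Service]
Type=notify
# Each instance gets its own master config (with its own pool, pid, and logs),
# e.g. /etc/php-fpm.d/site1.conf for php-fpm@site1:
ExecStart=/usr/sbin/php-fpm --nodaemonize --fpm-config /etc/php-fpm.d/%i.conf
ExecReload=/bin/kill -USR2 $MAINPID

[Install]
WantedBy=multi-user.target

Then enable one instance per site, e.g.: systemctl enable --now php-fpm@site1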
You could replace the previous option with docker - but why would you?
I know there are genuine reasons to use docker, but docker is way overused. Especially if you're running all your stuff on a single VPS, I recommend your distro's packages, because Docker:
uselessly wastes your resources
obscures settings: e.g., settings not exported as environment variables are hard to change; oftentimes an image rebuild is needed
adds burden when you have to go down into a container's shell (trust me, sometimes you'll have to): none of your aliases (or zsh) are available there, and most of those crippled Alpine-based images haven't even got less or vi(m)
(I hear fanboys screaming) adds security, but systemd can do most of that isolation as well...
constantly requires additional maintenance like image updating and trash pruning. Your decent distro could already keep you up to date with its packages while requiring less storage. And you have to update the server's distro anyway...
maybe makes you fiddle less during the initial setup, but: unless you use official software-vendor images, who knows how long Random Joe will maintain his snowflake images, and the initial setup could be made easier without docker (see ansible/saltstack) anyway...
If you distribute/load-balance the crap out of your shiny K8s cluster (and if you do, I doubt you are here for this rookie Q/A; if you do use K8s and nevertheless are here for an answer, then this profession has its days numbered...) then use docker/podman/whatever. Otherwise, for the sake of your sanity: keep your abstractions/layers at bay...
I was looking for the solution myself and found two possible courses of action:
Several instances of Nginx
First, it is possible to run several instances of Nginx, each under a distinct user, which literally answers the question asked.
Of course, you will need one main instance that listens on ports 80/443 on the outside interface and proxies the respective requests to the other instances, which listen on different ports such as 127.0.0.1:8001 and the like. Simply create different config files with different user directives, and then run the instances as nginx -c /path/to/config_xxx. A sketch of this follows below.
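A minimal sketch of the two configs (the ports, paths, and names are illustrative):
# main instance (the usual /etc/nginx/nginx.conf), facing the internet:
server {
    listen 80;
    server_name site1.example;
    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
    }
}

# /etc/nginx/nginx_site1.conf - the per-site instance, run with:
#   nginx -c /etc/nginx/nginx_site1.conf
user site1_user;                 # workers run as this user (master starts as root)
pid /run/nginx_site1.pid;        # each instance needs its own pid file
error_log /var/log/nginx/site1_error.log;
events { }
http {
    server {
        listen 127.0.0.1:8001;
        root /var/www/site1;
    }
}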
Setting correct folder permissions + different php-fpm users
Another solution to the actual problem - a "way to hide users' files from other users" - would be a combination of Daniel's answer with making users' directories inaccessible to each other, yet letting Nginx enter them.
For this, simply leave the users' directories under user:usergroup ownership but add the www-data user to each user's group, by means of a command like usermod -a -G usergroup www-data. This simple (not to say silly) trick leaves the user folders' permissions intact, yet lets nginx enter them and serve the static contents.