On a development server (Ubuntu 14.04) we have this shell script running every minute on cron:
for dir in /home/*; do
    username=$(basename "$dir")
    echo "changing ownership to $username in $dir/public_html"
    chown -R "$username:$username" "$dir/public_html"
    chmod 755 "$dir"
    chmod -R 775 "$dir/public_html"
done
The aim is to change the ownership and permissions of any file to the virtualhost's username. This is needed because new files are created by 'root': our local machines connect to the server over SMB as the root user. The files need to be owned by the virtualhost username in order for them to run.
This script works perfectly well, BUT it will start to slow the server down once we have a lot of virtualhosts / files, because it chowns everything regardless of whether it needs it. Most of the time it won't need it, as only new files do.
My thought is to change it so that, for each public_html directory, it looks for any files not owned by that virtualhost's username and chowns only those.
But I fear that walking every file to find out which ones need chowning might be just as (or more) intensive than the current approach (pointing at the directory and running recursively).
So I was wondering if anyone has a better idea for this? (happy with an sh or php script solution)
Aim: chown any newly created file to the virtualhost username and chmod it to 755; skip all other files. It would also be good to be able to skip certain file types regardless.
"chown -R" will not change any file that already has the desired ownership. It will probably do the job more efficiently than writing a program that does it yourself. chown is written by some very good coders, who probably have taken into consideration a lot of edge cases that you may not have. However, the only way to know is via a benchmark. (I'd love to see your results!)
Here is a simple script that will safely find files not owned by mary, and set their ownership to mary:
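The script itself did not survive in this copy of the answer; the following is a reconstruction matching the description below, with "mary" as the example user from the text and the public_html layout assumed from the question:

```shell
#!/bin/sh
# Fix ownership of anything under mary's public_html that is not
# already owned by mary. "mary" and the path are illustrative.
user=mary
webroot=/home/$user/public_html

if [ -d "$webroot" ]; then
    # -xdev: don't cross mountpoints.
    # ! -user: match only files NOT already owned by $user.
    # -print0 / -0: pass filenames NUL-delimited, safe for spaces/newlines.
    # -r (--no-run-if-empty): skip chown entirely when nothing matched.
    find "$webroot" -xdev ! -user "$user" -print0 \
        | xargs -0 -r chown "$user:$user"
fi
```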
The "-xdev" means "don't cross mountpoints" and is a good safely mechanism. The "-print0" and "-0" options are to handle filenames with spaces and newlines in a secure way. This (and chmod -R) won't follow symlinks for security reasons... if someone made a symlink to
/
it would be very bad.Also, if you are running a cron job every minute, it is often better to run the same code in a loop. Here is an explanation of why: http://everythingsysadmin.com/2014/02/how-not-to-use-cron.html
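If the job does stay on an every-minute cron entry, a related safeguard (not the loop pattern from the article, but complementary to it) is to stop runs from overlapping with flock(1) from util-linux; a minimal sketch, with an arbitrary lock path and a placeholder for the real work:

```shell
#!/bin/sh
# Take the lock, or with -n give up immediately if the previous
# minute's run is still holding it. The lock file path is arbitrary
# and the echo stands in for the actual chown/chmod pass.
flock -n /tmp/fix-perms.lock -c 'echo "doing the chown/chmod pass"' \
    || echo "previous run still active, skipping"
```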
I would use something like Watcher, which uses the Linux kernel's inotify functionality. If you are familiar with incron, then you know what Watcher is; the difference is that Watcher is recursive, whereas incron can watch only a directory, not the sub-directories under it. So, instead of constantly crawling through the ever-growing /home, you can set up Watcher to receive signals from the kernel whenever some file/directory changes under /home, and then run whatever commands or scripts you need on those files. This is far faster and less resource-intensive.
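For a flavour of the event-driven approach, this is roughly what a single incrontab entry looks like (the fix-one.sh helper is hypothetical, and remember incron itself will not recurse into sub-directories, which is exactly the gap Watcher fills):

```
# <watched path>          <event mask>            <command>
# $@ expands to the watched path, $# to the name of the new file.
/home/mary/public_html    IN_CREATE,IN_MOVED_TO   /usr/local/sbin/fix-one.sh $@/$#
```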
The best thing to do here is to not have everyone connect as root. It may be expedient, but as you have discovered it leads to all sorts of other issues. This is what I would pursue as the first option (and even if it was turned down initially, I would continue to pursue it long term).
You may get some mileage out of the samba force user and force group directives, using %u and %g substitutions. You may also be able to use inotify, which can detect filesystem changes and run a script.
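As a sketch only, a per-user share using those directives might look something like this. The [homes] section and the %S (service name) substitution are one possible shape rather than a drop-in config; force user, force group, create mask and directory mask are the relevant smb.conf(5) parameters:

```
[homes]
   path = /home/%S/public_html
   valid users = %S
   writable = yes
   ; create files as the share's owner rather than the connecting user
   force user = %S
   force group = %S
   create mask = 0664
   directory mask = 0775
```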
My suggestion is to create an environment where files are created with the right perms/owner/group in the first place. Then "chmod -R"/"chown -R" can be used to fix the situation where people messed up.
"chmod ug+s" on a directory tells Linux that files created in that directory should not be created with the user's user/group but instead inherit the user/group of the parent directory.
The last time I needed to do something like this, I created an environment where only the group permissions really mattered. Therefore "chmod g+s" on the directory made sure that new files were owned by the right group. I then configured Apache to only need that group permission to be able to access the files.
The last thing I needed to do was to make sure users had "umask 002" set so that they created files that were default "group readable".
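The setgid-plus-umask behaviour described above can be seen in a throwaway directory (the paths here are temporary and purely illustrative):

```shell
#!/bin/sh
# Demonstrate the setgid + umask combination in a temp directory.
dir=$(mktemp -d)
chmod 2775 "$dir"    # rwxrwsr-x: the 's' is the setgid bit
umask 002            # new files come out group-writable (mode 664)
touch "$dir/newfile" # inherits the directory's group, not the umask'd default group bits
ls -ld "$dir" "$dir/newfile"
rm -rf "$dir"
```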
Occasionally someone would create a file with the wrong permissions despite my best efforts. I had a shell script that fixed all the permissions. However, it didn't need to run every minute... just once an hour. At that point, the efficiency wasn't so important.
The script looked something like:
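That script was not included in this copy of the answer; a hedged guess at its general shape, given the group-based setup described above (the webroot path and group name are placeholders, not the original values):

```shell
#!/bin/sh
# Hourly fix-up pass for the group-permission scheme described above.
# $webroot and $grp are illustrative stand-ins.
webroot=/var/www/site
grp=webteam

if [ -d "$webroot" ]; then
    # Put stray files back into the right group.
    find "$webroot" -xdev ! -group "$grp" -print0 | xargs -0 -r chgrp "$grp"
    # Ensure directories are group-accessible and keep the setgid bit.
    find "$webroot" -xdev -type d -print0 | xargs -0 -r chmod g+rwxs
    # Ensure files are group readable/writable.
    find "$webroot" -xdev -type f -print0 | xargs -0 -r chmod g+rw
fi
```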