For users of my web service running on Ubuntu Server 16.04 or 18.04, I’d like to integrate features from such useful programs as ImageMagick. For example, I’d like to crop their profile pictures or create thumbnails.
But because software in general has bugs, and ImageMagick in particular has a lot of them [1] [2], it would obviously be best to isolate the execution of ImageMagick from the rest of the server, right?
So what’s the best way to isolate ImageMagick from a security perspective, taking into account that the setup should be as simple as possible and that ImageMagick will have to run every few seconds (or even multiple times per second at peak times)? Ideally, ImageMagick would not only be isolated from the host machine, but ImageMagick executions (with the data they operate on) would be isolated from each other as well.
I guess one can use a VM or containers (e.g. Docker) for this? Are containers better-suited because they are faster to set up and tear down again?
Moreover, what’s a good way to get started? I have taken a look at various manuals, but don’t know where exactly to start and which components I need.
What I have so far is the following. However, I don't know whether it's actually secure, and the costly package installation should only happen once, if possible. I'm also not sure whether this setup allows parallel execution by multiple users.
Dockerfile:
FROM ubuntu:16.04
# Combine update and install in one layer so a cached "apt-get update" can't go stale
RUN apt-get update && apt-get install -y imagemagick
VOLUME ["/my-images"]
WORKDIR /my-images
ENTRYPOINT ["convert"]
Initialization:
docker build -t my-imagemagick .
Usage:
docker run --rm --volume="$(pwd)":/my-images:rw my-imagemagick -resize 500 /my-images/input.jpg /my-images/output.jpg
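Since the question is about security, the `docker run` invocation can be tightened with standard Docker flags. A sketch (the image name and UID are from/for your setup; adjust the limits to your workload):

```shell
# Run each conversion in a throwaway, locked-down container:
#   --network=none : ImageMagick needs no network access
#   --read-only    : root filesystem is immutable; only the volume is writable
#   --user         : don't run convert as root inside the container
#   --memory / --pids-limit : cap what a malicious input image can consume
docker run --rm \
  --network=none \
  --read-only \
  --user 1000:1000 \
  --memory=256m \
  --pids-limit=64 \
  --volume="$(pwd)":/my-images:rw \
  my-imagemagick -resize 500 input.jpg output.jpg
```

Because `WORKDIR` is `/my-images`, relative file names resolve inside the mounted volume. Running each conversion in its own `--rm` container also gives you the per-execution isolation you asked about.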
Containers in general
A Docker container or KVM guest behaves like a real system. You can even enter a container and make changes there, no problem: work in a shell, install packages, and so on (in case you forgot something or want to check something).
The advantage of a container is that it is much easier to upgrade and rebuild. With your own Docker registry, you could even keep differently named versions alongside each other.
Store the user data in an extra container
I'd suggest storing the data in a data container attached to the ImageMagick container. This prevents the container from having mount points that end up as folders on your actual server. Instead, when you rebuild the ImageMagick container, detach the data container and reattach it afterwards, at any given time. No data is lost, and you have all your files in a container that can also be moved quite easily.
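The classic way to realize this is a data-only container plus `--volumes-from`. A sketch (the container name is illustrative):

```shell
# A data-only container that just holds the /my-images volume:
docker create --name my-images-data --volume /my-images ubuntu:16.04 /bin/true

# Attach the data container's volume to each ImageMagick run:
docker run --rm --volumes-from my-images-data my-imagemagick \
  -resize 500 input.jpg output.jpg

# After rebuilding the my-imagemagick image, reattach the same data
# container; the files in /my-images survive the rebuild.
```

On newer Docker versions, a named volume (`docker volume create`) serves the same purpose with less ceremony.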
About security
As mentioned above, a container behaves like a real system, so it can also be hacked like a real system. That means an attacker can still get hold of customer data and root access to that machine. Breaking out of a Docker container isn't easy, but it might be possible, e.g. due to missing patches. I use a simple trick to make life harder for an attacker: I rebuild the container via crontab every few hours or once a day. So if an attacker gains access, they have to start all over again a few hours or a day later.
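The periodic rebuild could be a crontab entry along these lines (the build path, log file, and schedule are assumptions; `--no-cache` forces fresh packages):

```shell
# m h dom mon dow   command — rebuild the image every 6 hours
0 */6 * * * cd /srv/my-imagemagick && docker build --no-cache -t my-imagemagick . >> /var/log/im-rebuild.log 2>&1
```

Since the question runs a fresh `--rm` container per conversion, rebuilding the image is enough; no long-lived container needs to be restarted.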
Also try to put an nginx container in front of it, so that ImageMagick is triggered through a reverse proxy instead of being exposed directly. It is easy to build up a container production chain: nginx --> ImageMagick --> data container.
So you'd end up with 2-3 containers which are chained together into an isolated production chain.
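Wiring such a chain could look like this sketch with a user-defined Docker network (container and image names are illustrative, and it assumes you wrap ImageMagick in a small HTTP service, which is not shown here):

```shell
# Isolated network for the chain; only nginx is published to the host.
docker network create image-chain

# Hypothetical service image that exposes ImageMagick over HTTP:
docker run -d --name imagemagick-svc --network=image-chain my-imagemagick-service

# nginx as the reverse proxy; its config would proxy_pass to
# http://imagemagick-svc, reachable by name on the shared network.
docker run -d --name frontend --network=image-chain -p 80:80 nginx
```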
Hth,
Jens
P.S.: To go into more detail, you'd need more Docker knowledge. But your suggested approach is the right one in my opinion.
P.P.S.: You could also keep it old school and simply use a chroot environment. But that is harder to set up, harder to maintain, and located on your server itself.
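If you did go the chroot route on Ubuntu, a rough sketch (assuming `debootstrap` is installed; the chroot path is illustrative):

```shell
# Build a minimal Ubuntu 16.04 ("xenial") root filesystem for the chroot:
sudo debootstrap xenial /srv/im-chroot http://archive.ubuntu.com/ubuntu

# Install ImageMagick inside it:
sudo chroot /srv/im-chroot apt-get update
sudo chroot /srv/im-chroot apt-get install -y imagemagick

# Copy an image in and convert it inside the chroot:
sudo cp input.jpg /srv/im-chroot/tmp/
sudo chroot /srv/im-chroot convert /tmp/input.jpg -resize 500 /tmp/output.jpg
```

Note that a plain chroot restricts the filesystem view only; unlike a container, it does not isolate processes, the network, or resource use.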
You can use one container that resizes all images located in the mounted folder. You only need to make slight changes in the Dockerfile and rebuild the Docker image.
Build the Docker image.
Run the Docker container to resize all images located in the /my-images folder at once.
You can also run the command as a cron job to schedule it. To allow multiple users on your system to run this command, you need to add them to the docker group.
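One way those "slight changes" could look: switch the `ENTRYPOINT` from `convert` to `mogrify`, which rewrites images in place, so a single run processes the whole mounted folder. A sketch based on the Dockerfile from the question (file and image names are illustrative):

```shell
# Batch variant of the question's Dockerfile: only the ENTRYPOINT changes.
cat > Dockerfile.batch <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y imagemagick
VOLUME ["/my-images"]
WORKDIR /my-images
ENTRYPOINT ["mogrify"]
EOF

docker build -f Dockerfile.batch -t my-imagemagick-batch .

# One run resizes every JPEG in the mounted folder in place
# (the *.jpg glob expands on the host, to names relative to WORKDIR):
docker run --rm --volume="$(pwd)":/my-images:rw my-imagemagick-batch -resize 500 *.jpg
```

Be aware that membership in the docker group is effectively root-equivalent on the host, so grant it carefully.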