I'm just getting started with Docker, and right now I'm trying to figure out how to set up my first dockerized Apache 2 / PHP environment. Up to now I have been using full Linux VMs, where log files were written to /var/log/apache2 and "logrotate" switched to a new file each day.
The log files were mainly used for immediate error detection (i.e. log on to the server and open the current access.log and error.log with less) and for fail2ban.
If I understand correctly, that is not practicable in a Docker environment, mainly because you usually cannot log in to containers to look at the logs. Also, the logs are lost if the container is removed.
So: What is the most common method to work with/emulate/replace access.log/error.log in that situation? What are common solutions for both production and development environments?
My ideas so far include an NFS share (slow, and it may cause filename collisions if you're not careful) and Logstash (not sure whether it is worth the effort and practicable for smaller sites or even dev environments). But I'm sure smart people have come up with better solutions?
Not sure if it makes a difference, but currently I'm basing my Docker image on php:5.6-apache.
You can still use the

docker exec -it <your container name> /bin/bash

command to get into your container and do your regular job. You can also replace /bin/bash with your own command or script to execute it directly. To get a file out of the container, use

docker cp <container name>:/path/to/file /your/local/path/
And for your daily job, you can schedule those commands with cron. I highly recommend creating aliases for your frequent Docker commands, so you can work with just a few keystrokes.

The

docker logs <container name/id>

command shows the log of a container's execution, i.e. whatever the container wrote to stdout. So how about writing the access and error logs to stdout and stderr?
https://mail-archives.apache.org/mod_mbox/httpd-users/201508.mbox/%3CCABx2=D-wdd8FYLkHMqiNOKmOaNYb-tAOB-AsSEf2p=ctd6sMdg@mail.gmail.com%3E
https://gist.github.com/afolarin/a2ac14231d9079920864
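Putting the exec/cp/cron suggestions together, a daily log-copy job might look like the sketch below. The container name "web" and the destination directory are hypothetical; adjust them to your setup.

```shell
#!/bin/sh
# Copy the Apache logs out of a running container once a day.
# CONTAINER and DEST are placeholders for your own names/paths.
CONTAINER=web
DEST=/srv/apache-logs
TODAY=$(date +%F)   # ISO date, e.g. 2016-03-01

mkdir -p "$DEST"
docker cp "$CONTAINER:/var/log/apache2/access.log" "$DEST/access-$TODAY.log"
docker cp "$CONTAINER:/var/log/apache2/error.log"  "$DEST/error-$TODAY.log"
```

Dropped into /etc/cron.daily/ (and made executable), this runs once a day on most distributions.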
Centralized logging with ELK would allow for more proactive monitoring, though. But you already thought of that one yourself.
In the Apache configuration file you can add:

CustomLog /dev/stdout combined
ErrorLog /dev/stderr

(CustomLog needs a log format as its second argument; combined is the standard one.)
and to see the logs use the command below:
docker logs container_id
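With Apache writing to stdout/stderr, docker logs becomes your replacement for less on the log files. A few flags worth knowing (the container name "web" is just a placeholder):

```shell
# Placeholder container name; use your own.
CONTAINER=web

# Tail the last 100 lines and keep following, like `tail -f`
docker logs --follow --tail 100 "$CONTAINER"

# Only entries from the last hour, with timestamps prepended
docker logs --since 1h --timestamps "$CONTAINER"
```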
The Docker image I chose had all *.log files symlinked to /dev/stdout and /dev/stderr, so I couldn't read them as regular files. After removing the symlinks and restarting Apache, I can read the logs under /var/log/ inside the container.
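If you hit the same symlink situation, a sketch of the workaround inside the container (the paths match the official php:apache images, but check your own image first):

```shell
# In the official php:apache images the Apache logs are symlinks:
#   /var/log/apache2/access.log -> /dev/stdout
#   /var/log/apache2/error.log  -> /dev/stderr
# Remove them only if they really are symlinks, then restart Apache
# so it reopens the paths as regular files.
for f in /var/log/apache2/access.log /var/log/apache2/error.log; do
    [ -L "$f" ] && rm "$f"
done
apachectl -k graceful
```

Note that this change is lost on the next container rebuild unless you bake it into your Dockerfile.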
Maybe this feature did not exist when the question was asked, but with docker run's -v argument you can mount a directory on the host onto a directory in the container.
This way the log (or other) files will survive when the container is deleted and you can access the files as if apache were installed on the host rather than in a container.
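A minimal sketch of that approach, assuming Apache inside the container logs to /var/log/apache2 (the host path, container name, and port mapping are examples):

```shell
# Keep Apache's log directory on the host so the files survive
# container removal. Host path and names are placeholders.
HOST_LOG_DIR=/srv/apache-logs

docker run -d \
  --name web \
  -v "$HOST_LOG_DIR:/var/log/apache2" \
  -p 8080:80 \
  php:5.6-apache

# The log files now appear on the host:
ls "$HOST_LOG_DIR"
```

With php:5.6-apache specifically you would also have to point CustomLog/ErrorLog back at files, since that image symlinks them to stdout/stderr.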
Alternatively, you could push the log files to a central location. The ELK stack uses Filebeat for this, and you can run Filebeat on its own if you do not care about the rest of the stack.
So far I have found "docker logs" being mentioned several times.
I'm an absolute Docker newb, so that might hold the solution to my problem - but so far I haven't fully understood the concept behind that command.
Docker seems to keep all stdout output in JSON files in /var/lib/docker/containers/ and lets me access them through the logs command.
So far I'm not sure how to actually use the output.
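With the default json-file logging driver, each container's stdout/stderr ends up in one JSON file; you can locate it and read the same content either raw or, more conveniently, via docker logs (container name "web" is a placeholder):

```shell
CONTAINER=web   # placeholder; use your container's name or id

# Where Docker stores the raw JSON log for this container
docker inspect --format '{{.LogPath}}' "$CONTAINER"

# The same content, decoded, via the logs command
docker logs --timestamps "$CONTAINER"
```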