I'm looking for a reliable way to export journalctl logs.
I could use the --since=... option, but that is a bit fuzzy.
In my case a script would call journalctl --output=json
every ten minutes.
I don't want to miss a single line and (if possible) I would like to avoid duplicate lines.
A few days after asking this question I came across RELP: https://en.wikipedia.org/wiki/Reliable_Event_Logging_Protocol
You can install a syslog daemon such as rsyslog (the default on Red Hat-derived systems). It will log all journal entries in a more backward-compatible manner, and of course you can specify a custom log file for whatever you wish.
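A minimal sketch of the relevant rsyslog configuration (the state file and output path are examples; on Red Hat-derived systems the imjournal module is typically loaded by the stock config already):

```
# read entries from the systemd journal and remember the position across restarts
module(load="imjournal" StateFile="imjournal.state")
# write everything to a custom log file
*.*  /var/log/journal-export.log
```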
If you don't need logs exported in real time, you can use journalctl --since as some people have mentioned. You can run it daily at midnight with the time specifier yesterday to get exactly 24 hours of logs.

If you really need to get logs at short intervals and don't want to miss a single entry, then you need to learn about the cursor. For each log entry, journalctl provides a cursor which can be used to skip to exactly that log entry with --cursor, or to the immediately following entry with --after-cursor.
Consider the sample JSON entry shown below. For your purposes, the __CURSOR field is an opaque blob. Just capture the value from the last log entry you receive on one call to journalctl and feed it to the next call, as sketched after the sample.
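A typical entry from journalctl --output=json looks roughly like this (the field values here are invented for illustration):

```json
{
  "__CURSOR" : "s=0639c1...;i=4ec07;b=6c7c6013...;m=35d979;t=4e602...;x=68b4...",
  "__REALTIME_TIMESTAMP" : "1673368675324975",
  "__MONOTONIC_TIMESTAMP" : "3537251",
  "_BOOT_ID" : "6c7c60137ee8441da042a35ca6e2d383",
  "PRIORITY" : "6",
  "_HOSTNAME" : "myhost",
  "SYSLOG_IDENTIFIER" : "sshd",
  "MESSAGE" : "Accepted publickey for user from 10.0.0.1 port 52624"
}
```

A minimal sketch of the capture-and-continue loop (the jq dependency and file names are assumptions):

```sh
# first export: everything currently in the journal
journalctl --output=json > batch1.json
# remember where we stopped: the cursor of the last entry received
cursor=$(tail -n 1 batch1.json | jq -r '.__CURSOR')

# next run (e.g. ten minutes later): continue immediately after that entry,
# so nothing is missed and nothing is duplicated
journalctl --output=json --after-cursor="$cursor" > batch2.json
```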
Use the --since option. To get logs from the last 10 minutes, just use:
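```sh
# a sketch: everything from the last 10 minutes up to now
# (quoting of the relative time spec is a shell detail)
journalctl --since "10 minutes ago"
```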
That will give you the logs from 10 minutes before the current time up to now. See the journalctl man page https://www.freedesktop.org/software/systemd/man/journalctl.html and the systemd.time page on time specifications https://www.freedesktop.org/software/systemd/man/systemd.time.html
One way of doing it (not very reliable, but it can work):
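For example, a fixed time window exported periodically from cron (this sketch, its paths and the interval are assumptions; entries near the window edges can be missed or duplicated, which is the unreliable part):

```sh
# run from cron every 10 minutes
journalctl --since "10 minutes ago" --output=json >> /var/log/export/journal.json
```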
An alternative way to do it (assuming you need to import the json output into elasticsearch):
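One possible shape of that import (the index name, endpoint and time window are assumptions):

```sh
# post each JSON journal entry as a separate Elasticsearch document;
# field names such as __CURSOR may need renaming to fit your mapping
journalctl --output=json --since "10 minutes ago" |
while IFS= read -r line; do
  curl -s -XPOST 'http://localhost:9200/journal/_doc' \
       -H 'Content-Type: application/json' -d "$line" > /dev/null
done
```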
syslog-ng can read from the journal and export to plain old text files. You can also set up syslog-ng to send data to other systems (including elasticsearch).
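A minimal configuration sketch (the statement names and output path are examples; this needs a syslog-ng build that includes the systemd-journal() source):

```
source s_journal { systemd-journal(); };
destination d_file { file("/var/log/journal-export.log"); };
log { source(s_journal); destination(d_file); };
```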
You can create a Python script that polls the journal using the query_unique function from python-systemd. Running it as a service with a restart option could also help, so that nothing is missed: https://www.freedesktop.org/software/systemd/python-systemd/journal.html
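A minimal polling sketch using the journal Reader from python-systemd (the cursor file path is an example; query_unique only lists the distinct values of a field, so the cursor-based calls below are what keep the export gap-free):

```python
import os
from systemd import journal

CURSOR_FILE = "/var/lib/journal-export/cursor"

j = journal.Reader()
if os.path.exists(CURSOR_FILE):
    with open(CURSOR_FILE) as f:
        j.seek_cursor(f.read().strip())  # resume at the last exported entry...
    j.get_next()                         # ...and step past it to avoid a duplicate

for entry in j:
    print(entry.get("MESSAGE", ""))      # replace with whatever export you need
    with open(CURSOR_FILE, "w") as f:
        f.write(entry["__CURSOR"])       # persist progress so a restart misses nothing
```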