I've set up Prometheus' Alertmanager to, well, manage alerts from Prometheus. I've got the alerts coming in from Prometheus to Alertmanager, but there the story ends. The alerts never get sent on by Alertmanager to my email endpoint.
In order to figure out where exactly inside Alertmanager the alerts end their journey, I want to turn the log level from info to debug, but I have been unable to figure out how. Even finding the log seems like a tough ask right now: it's not in /var/log, and
and journalctl -u alertmanager
contains so little that there may well be another log somewhere.
The manual page for configuring Alertmanager does not mention a debug level. I've looked through the source code for mentions of log and found that the setting should be named log.level. Adding the following snippet to the configuration YAML didn't help either:
log:
  level: debug
as Alertmanager then refused to start, with an error about failing to parse its config file.
The answer is that it's not possible to set Prometheus Alertmanager's log level to debug through the config file; it can only be done through command-line arguments. Do not ask me why, I'm sure they had their reasons.
Through Puppet I added the argument to the Systemd unit file for Alertmanager, appending it to the ExecStart line.
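A minimal sketch of what such a unit file can look like; everything except the --log.level=debug flag is illustrative (binary and config paths assume a typical manual install, adjust to your setup):

[Unit]
Description=Prometheus Alertmanager
After=network-online.target

[Service]
User=alertmanager
# Paths below are illustrative; only --log.level=debug is the point here.
ExecStart=/usr/local/bin/alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml \
  --log.level=debug
Restart=on-failure

[Install]
WantedBy=multi-user.target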
If you are starting Alertmanager from your shell you can simply add the flag
--log.level=debug
to your invocation.
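For example, a full invocation might look like this (the config path is an assumption; --config.file is the flag Alertmanager uses to locate its configuration):

alertmanager --config.file=/etc/alertmanager/alertmanager.yml --log.level=debug

The debug messages can then be seen via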
journalctl -u alertmanager
on Linux distributions with the Systemd init system.

On Ubuntu systems (if prometheus-alertmanager was installed via apt from the official repository), proceed as follows.
Open the file
/etc/default/prometheus-alertmanager
and add the flag to the ARGS variable, replacing the empty ARGS="":
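ARGS="--log.level=debug"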
Then restart prometheus-alertmanager via Systemd:
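sudo systemctl restart prometheus-alertmanager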
Show the logs via journalctl:
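journalctl -u prometheus-alertmanager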