I'm using rsyslog to ship logs from HAProxy into Logstash for Elasticsearch/Kibana.
Everything was working fine, but I've run into something strange with rsyslog.
I noticed data missing in Kibana, and the cause is rsyslog.
The on-disk queue was stalled for several days, so I'm missing data from the weekend; yesterday and today everything is fine.
Rsyslog is now receiving data and forwarding it to Logstash, but it seems to have forgotten about the data stored in its own queue (I suspect it considers those messages too old and ignores them; is there a parameter for that, perhaps with some default value in effect?).
Right now Logstash is idle, so I could push a lot of additional data to it from the rsyslog queue.
What I want to do is temporarily flush this queue (like postfix flush). If that isn't possible, what should I try instead?
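For context, this is roughly what I've been doing to inspect the queue. A sketch, assuming the default spool location (check your $WorkDirectory setting; /var/spool/rsyslog and the service name rsyslog are assumptions, and paths will differ per distribution):

```shell
# Assumed work directory where disk queue files live;
# verify against $WorkDirectory in your rsyslog config.
SPOOL=/var/spool/rsyslog

# The queue files are named after $ActionQueueFileName ("kibana" here).
ls -lh "$SPOOL"/kibana* 2>/dev/null

# rsyslog has no flush command like postfix; a clean restart makes it
# reopen its disk queue files ($ActionQueueSaveOnShutdown persists
# in-memory items on the way down).
systemctl restart rsyslog

# Watch the spool directory shrink as the queue drains.
watch -n 5 "du -sh $SPOOL"
```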
My rsyslog config is:
$ActionSendTCPRebindInterval 500
$ActionQueueType LinkedList
$ActionQueueFileName kibana
$ActionQueueMaxFileSize 100m
$ActionQueueMaxDiskSpace 100g
$ActionQueueTimeoutEnqueue 0
$ActionResumeRetryCount -1
$ActionQueueSaveOnShutdown on
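For reference, here is my understanding of the same settings in the newer RainerScript action() syntax; the target host/port are placeholders I made up, so treat this as a sketch rather than a drop-in replacement:

```
action(type="omfwd"
       target="logstash.example.com" port="5514" protocol="tcp"  # placeholder target
       action.resumeRetryCount="-1"          # retry forever, as in the legacy config
       queue.type="LinkedList"
       queue.filename="kibana"               # disk-assisted queue file prefix
       queue.maxfilesize="100m"
       queue.maxdiskspace="100g"
       queue.timeoutenqueue="0"
       queue.saveonshutdown="on")
```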