The tac command (cat reversed) can be used to read a file backwards, just like cat reads it from the beginning. I wonder how efficient this is. Does it have to read the whole file from the beginning and then reverse some internal buffer once it reaches the end?
I was planning on using it in a frequently called monitoring script which needs to inspect the last n lines of a file that can be several hundred megabytes in size. However, I don't want that to cause heavy I/O load or fill up cache space with otherwise useless information by reading through the file over and over again (about once per minute or so).
Can anyone shed some light on the efficiency of that command?
When used correctly, tac is comparably efficient to tail -- reading 8K blocks at a time, seeking from the back. "Correct use" requires, among other things, giving it a direct, seekable handle on your file:
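A minimal sketch (the path below is just a placeholder for your own log file):

```
# hand tac the file directly; it can seek to the end and read backwards in blocks
tac /var/log/myapp.log
```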
...or...
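A redirection works just as well, since the shell opens the file and tac still receives a seekable file descriptor (same placeholder path):

```
# the shell opens the file, so tac's stdin is still seekable
tac < /var/log/myapp.log
```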
NOT
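```
# anti-pattern: tac only sees a non-seekable pipe here and has to buffer the entire stream
cat /var/log/myapp.log | tac
```

With a pipe instead of a seekable handle, tac cannot jump to the end and must read and buffer the whole file before emitting anything. For the monitoring case above, something like `tac /var/log/myapp.log | head -n "$n"` should only touch the tail end of the file: head exits after n lines, and tac stops on the resulting SIGPIPE.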
That said, I'd consider repeatedly running a tool of this nature a very inefficient way to scan logs, compared to using logstash or a similar tool that can feed into an indexed store and/or generate events for real-time analysis by a CEP engine such as Esper or Apache Flink.