I would like log data from servers such as Nginx to go directly to Logstash over the network. Logstash has built-in TCP and UDP inputs that would be perfect for receiving the data, but sending it is the problem. I know there is the shipper and the like, but not having to manage logs on disk at all would be nice for projects where the log data isn't critical (such as internal development servers).
Is there a crafty way to create a socket that has a path on the file system, so that Nginx could write to it like any other log file? Assuming that's possible, how would I configure buffering and reconnection? I'd hate for the socket to die and back pressure from the logs to start tying up the server doing the logging.
I haven't tried it, but a named pipe with netcat on the far end would probably do the trick. In case netcat dies, you should run it under a process supervision framework (such as daemontools, runit, or systemd) so it respawns automatically.
I wouldn't recommend this as a long-term solution, but it could be good for debugging whether Logstash or your shipper is working as expected.
On the logstash side:
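Something along these lines, using the built-in tcp input (the port number and type label here are arbitrary placeholders):

    input {
      tcp {
        # Listen for raw log lines over TCP; 5140 is an arbitrary port choice
        port => 5140
        type => "nginx"
      }
    }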
On the client side:
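A sketch, assuming Logstash is reachable at logstash.example.com:5140 and the pipe lives at /var/log/nginx/access.pipe (both are placeholders to adjust for your setup):

    # Create the named pipe; nginx will write to it like a regular file
    mkfifo /var/log/nginx/access.pipe

    # In nginx.conf, point the access log at the pipe:
    #   access_log /var/log/nginx/access.pipe;

    # Drain the pipe into Logstash over TCP, restarting netcat whenever
    # the connection drops (a supervisor's restart policy does the same job)
    while true; do
      nc logstash.example.com 5140 < /var/log/nginx/access.pipe
      sleep 1
    done

Keep in mind that writes to a FIFO block once its kernel buffer fills and nothing is reading it, which is exactly the back pressure scenario the question worries about; that's why keeping the reader respawning matters.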