If log files (or log data) contain sensitive information that needs to be protected from deletion, manipulation, or "log injection", what safety measures are considered best practice?
What would I have to prove to show that my logs are authentic and reliable, if they were used as evidence in court?
Centralizing your logging to a hardened server might provide what you need. Better yet, if you can capture the logging into a database on a hardened server, you open up all kinds of possibilities.
I hate to bug you about this, but what kind of logging are you talking about? NT Event log? Unix(y) Syslog (and friends)? Logging from a minicomputer? A little more info might lead you to a known solution for what you're looking for...
2009-05-18 Re-Edit:
If you can get all of your data to show up as a syslog-style entry, then this will work for you.
For Unix(y)-style machines, use whatever syslog facility is in the box. You will want to ship off all data that is recorded to the central log server, but also, leave the stock settings for recording logs locally. (more on that in a bit)
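For a classic BSD-style syslogd, the "forward everything, but keep local logging too" setup is a one-line addition. This is a sketch, not a drop-in config: the hostname `loghost` and the local logging line are illustrative, so adapt them to whatever your distribution ships by default.

```
# Keep the stock local logging (example of a typical default line)
*.info;mail.none;authpriv.none          /var/log/messages

# Also forward everything to the central log server.
# "loghost" is a placeholder -- use your collector's hostname or IP.
*.*                                     @loghost
```

The `@host` action forwards over UDP port 514 on most traditional syslogds; newer daemons (rsyslog, syslog-ng) have their own syntax and can forward over TCP or TLS instead.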
For those Unix-style services that do not produce "true syslog" output, there are usually reparsers that can regenerate the data into something useful. Prime examples: apache and squid have log formats that, for most installations, are not formatted for syslog. Regenerate that data on a half-hour basis (or whatever works for you). The central log server then runs out and picks the data up for its digest.
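As a rough illustration of that reparsing step, here is a minimal Python sketch that turns Apache "combined" access-log lines into syslog-flavoured one-liners. The hostname, tag, and output format are my own assumptions, not anything the tools above mandate:

```python
# Sketch: reformat Apache "combined" access-log lines into syslog-style
# entries so a central collector can ingest them. The hostname and tag
# defaults below are illustrative placeholders.
import re

APACHE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

def to_syslog_line(apache_line, hostname="web01", tag="apache"):
    """Turn one Apache combined-log line into a syslog-flavoured string.

    Returns None if the line doesn't match the expected format.
    """
    m = APACHE_RE.match(apache_line)
    if not m:
        return None
    d = m.groupdict()
    return f"{hostname} {tag}: {d['ip']} {d['req']} -> {d['status']} ({d['size']} bytes)"
```

You would run something like this from cron on the half-hour schedule mentioned above, feeding the result to your syslog facility (e.g. via Python's `logging.handlers.SysLogHandler` or the `logger` command).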
For Windows-style machines, use NTSyslog, which is a free service that will shunt event log entries to a network syslog server. A quick how-to is available that covers setup.
Once all machines are "logging", you'll need to designate a central logging server. Go to the Splunk website and read up on it a bit; when you're ready, download and install it on your central logging server. The free version handles up to 500 MB per day, which, unless you have a bat-crazy amount of logging to contend with, should more than suffice. The splunk service will accept all of your syslog input, categorize it, and store it in a local database. From a webpage, you can filter, select, see events by time, etc. Very handy for seeing a composite picture across systems. It can also hook up to a variety of different data "sources", including flat files, which means those apache and squid logs can be transformed into entries by splunk (provided you hook it up to each machine that requires it).
A helpful side-effect of this setup is that all of the local logging data is still there - nothing is lost, so even if your central log server goes down, the logs are still valid elsewhere. And if a machine is lost (HDD blows up, security breach, whatever), you still have a history of its data on the central server.
Once all your machines are syslog'ing and your service is splunk'ing, point all of the syslog machines at the splunk server.
From an audit perspective, you want some sort of centralized system where the administrators of the other systems do not have administrative rights over it. There are plenty of solutions which will automatically retrieve and export the logs to a central system. You're basically looking for what is called a log management or a security information management (SIM) product. As to which one, that depends on a lot of factors, such as budget, current solutions in house, etc.
From a forensics perspective, the biggest thing is being able to show a chain of custody, in order to establish that the evidence hasn't been tampered with. This goes beyond just the tools. It's also the procedures used to handle the evidence: when a security event occurs, as the investigation proceeds and verifies that an incident has occurred, and as it moves into evidence collection and damage assessment. For that sort of thing, you'll probably want to look at having folks with the appropriate SANS incident handling and forensics training.
What about adding a checksum (SHA-1, for example) to every log entry? Of course, anybody with access to this database (or wherever you are going to store the logging data) would still be able to add a fake entry with a valid checksum. But if you add a secret salt before generating the checksum, I think it becomes infeasible to forge a valid checksum, while checking the authenticity of an entry stays very easy.
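The "checksum with a secret salt" idea is essentially an HMAC (a keyed hash). A minimal sketch in Python, with an illustrative key that in practice must be kept off the log-storage host, since anyone holding it can forge valid checksums:

```python
# Sketch of the per-entry checksum idea as an HMAC: a keyed hash over
# each log entry. Without the secret key, an attacker who can write to
# the log store cannot produce a valid checksum for a forged entry.
import hmac
import hashlib

# Illustrative placeholder -- store the real key somewhere the log
# server's administrators cannot read it.
SECRET_KEY = b"keep-this-off-the-log-server"

def sign_entry(entry: str) -> str:
    """Return a hex HMAC-SHA256 digest for one log entry."""
    return hmac.new(SECRET_KEY, entry.encode(), hashlib.sha256).hexdigest()

def verify_entry(entry: str, digest: str) -> bool:
    """Check an entry against its stored digest (constant-time compare)."""
    return hmac.compare_digest(sign_entry(entry), digest)
```

Note this only detects *modified or forged* entries; it doesn't by itself detect *deleted* entries. Chaining each HMAC over the previous entry's digest (a hash chain) is a common extension that makes deletions detectable too.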