Is it true that modern server machines last longer if you never turn them off than if you turn them off when they are not needed?
I am talking about database server machines that are not used - at all - during the weekend.
If so, where can I find an article or paper with evidence that an always-running server has fewer hardware failures over the years than one that is periodically turned off?
You're going to be much better off shutting them down than hibernating. On every platform I've ever used, hibernation is fraught with problems, errors and random flakiness. If shutting the servers down over the weekend, and (more importantly) validating that they come back up properly on Monday, is acceptable practice for your environment, I strongly recommend that over hibernation or sleep.
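For what it's worth, here is a minimal sketch of the kind of Monday-morning validation I mean, assuming the database is reachable on a TCP port; the hostnames and port below are placeholders, not anything from your environment:

```python
# Hypothetical Monday-morning check: confirm each server answers on its
# database port after the weekend shutdown. Hostnames and port are placeholders.
import socket

SERVERS = ["db01.example.com", "db02.example.com"]  # assumed hostnames
DB_PORT = 5432                                      # assumed port (PostgreSQL default)
TIMEOUT_SECONDS = 5

def is_up(host: str, port: int) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
            return True
    except OSError:
        return False

for host in SERVERS:
    status = "OK" if is_up(host, DB_PORT) else "NOT RESPONDING"
    print(f"{host}:{DB_PORT} {status}")
```

Wire something like that into whatever alerting you already have, so a box that fails to spin back up gets noticed before Monday's first queries do.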
No, you're quite unlikely to find a paper with proof, because the study would have to run over the course of the hardware's lifecycle (3-5 years), and by then the article would be irrelevant because the model studied would be so outdated as to be nearly useless for production. You could argue that it would still be a 'general study', but technology advances quickly enough that counter-arguments based on recent technology improvements are strong enough to make the entire discussion meaningless.
There are two chief arguments at play here:
1. Running servers constantly is less stressful on their components
This is predicated on the (correct) theory that rotating parts (fans, hard drives) take more abuse during spin-up than they do during continuous operation, and on the fact that sometimes those rotating parts won't start rotating again.
Some of this belief is dated (head "stiction" is not nearly as big a deal these days as it was, say, 30 years ago), but there is still substantial truth in it: ask anyone who has ever powered down a long-running RAID array or SAN and turned it back on about spin-up failures.
There is also thermal cycling (repeated heating and cooling of chips such as the CPU) causing failures -- this was a lot more common in "The Good Ol' Days", and in fact many Commodore Amiga owners can tell you about the "pencil fix" for chips that would work loose from their sockets because of it.
2. Running servers when they're not used is EXPENSIVE - the counterpoint being that the savings in power/cooling can buy you a new server (or spare parts).
This depends on the cost of your server and the cost of power, but it's an important factor. Even if the CPU is idling you're still paying to spin disks (and if you spin them down you're not eliminating any of the risks in (1) above).
Where I am, power is HIDEOUSLY expensive, and server hardware reliability is fairly good even with multiple power cycles, so if I had machines that were totally unused and drew a substantial amount of power I would consider shutting them down, on the grounds that the savings from one box can pay for its replacement hard drives (a rough calculation is sketched below).
If the box happens to suffer a failure 5 years later, it's already past its amortization date and replacing it won't be a huge issue.
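To put rough numbers on the power argument, here is a back-of-the-envelope calculation; the idle wattage, electricity rate and weekend hours are illustrative assumptions, not measurements from any particular box:

```python
# Back-of-the-envelope weekend-shutdown savings. Every figure here is an
# assumption; substitute your own measured idle draw and local tariff.
IDLE_WATTS = 250        # assumed idle draw of one database server
RATE_PER_KWH = 0.30     # assumed electricity cost per kWh, in local currency
WEEKEND_HOURS = 60      # e.g. Friday 20:00 to Monday 08:00
WEEKS_PER_YEAR = 52

kwh_per_weekend = IDLE_WATTS / 1000 * WEEKEND_HOURS
annual_savings = kwh_per_weekend * RATE_PER_KWH * WEEKS_PER_YEAR
print(f"~{kwh_per_weekend:.0f} kWh per weekend, "
      f"~{annual_savings:.0f} per year per server (before cooling savings)")
```

At those assumed figures that works out to roughly 15 kWh per weekend and a couple of hundred in electricity per server per year, before counting the cooling load on top of it, which is how "one box pays for its own spare drives" can hold where power is expensive.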
If you are not using them at all during the weekend, why not just shut them down to be safe? There is no real advantage to hibernating over shutting down; the only thing hibernation does is bring you back to the current state, and for a database server a reboot would get you there anyway.