I have a small Linux box with a dedicated task: reading sensor data and comparing values against a read-only local database (i.e. I won't write to it, but I haven't done anything to make it actually read-only).
The system doesn't need to write anything to the file system, except maybe some temp files created when the local web service is accessed (the web page shows the data, but doesn't log it).
The system autoboots when power is applied, and it is turned off by just cutting power.
Is there a good guide for setting up a system to cope with abrupt power-offs? I am currently using Ubuntu (because I'm familiar with it and it runs well on the fit-PC2 I am using).
Or, given that I have no applications writing anything to disk (web server temp files being the exception), can I get away with not modifying the system? The only important point is that there is no human to intervene if the boot process prints a question to the terminal and hangs waiting for a response.
Look into fsprotect. When you install it and pass a magic command-line argument to the kernel, it automatically modifies the boot process so that the original root filesystem is mounted read-only and the real root filesystem is an aufs (union filesystem) that lets you make changes that are discarded on power-off.
When you want to make changes to the system (e.g. for maintenance or upgrades), just omit the special "fsprotect" kernel command-line argument. Normally you'd make "fsprotect" the default and override it by editing the kernel command line from GRUB when you want to do maintenance.
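On Debian-style systems, the usual place to make a kernel argument the default is /etc/default/grub. A minimal sketch (the "quiet" argument is just whatever your line already contains; "fsprotect" is the argument the package documents):

```
# /etc/default/grub - add fsprotect to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet fsprotect"
```

Then run `update-grub` and reboot. For a maintenance boot, press `e` at the GRUB menu and delete the word "fsprotect" from the `linux` line for that one boot.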
The package is available in Debian, and maybe in other distributions.
This is what journalling file systems are for.
If a failure occurs and the system restarts, it doesn't know if it was half-way through modifying files - so it has to check every single file to find out. With a (meta-data) journalling filesystem, the OS first writes an outline of the changes it intends to make into the journal on disk, and then applies those changes. At start-up, it therefore only has to check the journal to see if a write operation was in progress at the time of the outage. Further, if there is an incomplete operation, it can decide whether it has enough information to complete the change or whether it should roll it back.

Most journalling filesystems (ext3, ext4, XFS, JFS, ReiserFS) only journal meta-data - i.e. directory entries and inode information. But there are filesystems which journal the entire write transaction (Btrfs, ZFS). Note that what an application perceives to be a write transaction may be different from what the operating system thinks it is (particularly for databases), so even full-data journalling won't catch all the problems. The point is that the filesystem is left in a consistent state.
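As an aside to the meta-data point above: ext3/ext4 can optionally push file data through the journal as well, via a mount option. A sketch of what the fstab line might look like (the device and mount point are just illustrative):

```
# /etc/fstab - ask ext4 to journal data as well as metadata
# (slower writes: every data block passes through the journal first)
/dev/sda2  /var/lib/sensordb  ext4  data=journal,noatime  0  2
```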
Any filesystem mounted read-only will not require an fsck at boot-up (unless its mount-count limit is reached).
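If you do go the read-only route, a minimal sketch of the fstab entry (device name and filesystem type are assumptions for illustration):

```
# /etc/fstab - mount the root filesystem read-only;
# sixth field = 0 disables the periodic fsck at boot
/dev/sda1  /  ext4  ro,noatime  0  0
```

You can also disable the mount-count-based check entirely with `tune2fs -c 0 /dev/sda1` on ext filesystems.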
Beyond that there's nothing you really need to worry about.
Of course there are lots of Linux systems which will boot up and run from RAM. And on just about any Linux distro, you can use tmpfs to create ram drives if you need write storage which does not persist across reboots.
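For example, a tmpfs line in /etc/fstab gives the web server RAM-backed scratch space whose contents simply vanish on every power cut (the mount point and size here are just illustrative):

```
# /etc/fstab - RAM-backed /tmp for web-server temp files;
# nothing ever touches the disk, so abrupt power-off is harmless
tmpfs  /tmp  tmpfs  defaults,noatime,size=64m  0  0
```

The same can be done ad hoc with `mount -t tmpfs -o size=64m tmpfs /tmp`.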