If I run this command in Ubuntu
sudo cat /proc/sys/kernel/random/entropy_avail
it returns a number that indicates how much "entropy" is available to the kernel, but that's about all I know. What unit is this entropy measured in? What is it used for? I've been told it's "bad" if that number is "low". How low is "low" and what "bad" things will happen if it is? What's a good range for it to be at? How is it determined?
Your system gathers some "real" random numbers by keeping an eye on various events: network activity, a hardware random number generator (if available; VIA processors, for example, often have a "real" random number generator built in), and so on. It feeds those into the kernel entropy pool, which is used by /dev/random. Applications that need very strong security tend to use /dev/random as their entropy source — in other words, as their source of randomness.
If /dev/random runs out of available entropy, it's unable to serve out more randomness, and the application waiting for that randomness stalls until more random bits become available. One example I've seen during my career: the Cyrus IMAP daemon wanted to use /dev/random for its randomness, and its POP sessions generated the random strings for APOP connections from /dev/random. In a busy environment there were more login attempts than events feeding /dev/random, so everything stalled. In that case I installed rng-tools and activated its rngd, which shoveled semi-random numbers from /dev/urandom into /dev/random whenever /dev/random ran out of "real" entropy.
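If you want to watch the kernel's entropy estimate yourself while reproducing a situation like the one above, a minimal sketch in Python (Linux-specific; the /proc path is the same one from the question, and the helper names are my own):

```python
import os

# The same file the question asks about; Linux-only.
ENTROPY_PATH = "/proc/sys/kernel/random/entropy_avail"

def parse_entropy(text: str) -> int:
    """Parse the file's contents (a single decimal number) into a bit count."""
    return int(text.strip())

def read_entropy_avail(path: str = ENTROPY_PATH) -> int:
    """Return the kernel's current entropy estimate, in bits."""
    with open(path) as f:
        return parse_entropy(f.read())

# Only attempt the read where the /proc file actually exists:
if os.path.exists(ENTROPY_PATH):
    print(read_entropy_avail(), "bits of entropy available")
```

Running this in a loop while something drains /dev/random lets you see the number drop and then slowly recover.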
If you want a simpler overview of the underlying issue: Some applications (such as encryption) need random numbers. You can generate random numbers using an algorithm - but although these seem random in one sense they are totally predictable in another. For instance if I give you the digits 58209749445923078164062862089986280348253421170679, they look pretty random. But if you realise they are actually digits of PI, then you would know the next one is going to be 8.
For some applications this is OK, but others (especially security-related ones) need genuinely unpredictable randomness — which can't be generated by an algorithm (i.e. a program), since a program is by definition predictable. This is a problem, because your computer essentially is a program, so how can it possibly get genuine random numbers? The answer is by measuring genuinely random events from the outside world — for example the gaps between your keypresses — and using these to inject genuine randomness into the otherwise predictable random number generator. The 'entropy pool' can be thought of as the store of this randomness, which gets built up by the keystrokes (or whatever is being measured) and drained by the generation of random numbers.
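The claim that algorithmic randomness is predictable can be demonstrated directly: seed the same PRNG twice with the same value and you get the same "random" stream. A small sketch using Python's standard random module (any seeded PRNG behaves this way):

```python
import random

# Two PRNGs seeded with the same value produce identical output streams:
# the numbers only *look* random; they are fully determined by the seed.
a = random.Random(42)
b = random.Random(42)

stream_a = [a.randint(0, 9) for _ in range(10)]
stream_b = [b.randint(0, 9) for _ in range(10)]

assert stream_a == stream_b  # completely predictable given the seed
```

This is exactly why the seed itself must come from genuine entropy: anyone who knows (or can guess) the seed knows the entire output.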
Entropy is a technical term for "randomness". Computers don't really generate entropy; they gather it by observing things like variations in hard drive rotation speed (a physical phenomenon that is very hard to predict, due to friction and so on). When a computer wants to generate pseudo-random data, it seeds a mathematical formula with true entropy that it found by measuring mouse clicks, hard drive spin variations, etc. Roughly speaking, entropy_avail is the number of bits currently available to be read from /dev/random.

It takes time for the computer to gather entropy from its environment, unless it has dedicated hardware such as a noisy diode.
If you have 4096 bits of entropy available and you cat /dev/random, you can expect to read 512 bytes (4096 bits) before the file blocks while it waits for more entropy. For example, if you cat /dev/random, your entropy will shrink to zero: at first you'll get 512 bytes of random garbage, but then it will stop and you'll see more random data trickle through little by little.

This is not how people should operate /dev/random, though. Normally developers read a small amount of data, say 128 bits, and use it to seed some kind of PRNG algorithm. It's polite not to read any more entropy from /dev/random than you need, since it takes so long to build up and is considered valuable. If you drain it by carelessly catting the file as above, you'll cause other applications that need to read from /dev/random to block.

On one system at work we noticed that a lot of crypto functions were stalling out. We discovered that a cron job, run every few seconds, was calling a Python script that re-initialized random.random() on each run. To fix this we rewrote the Python script to run as a daemon that initialized only once, and the cron job fetched its data via XMLRPC, so it no longer read from /dev/random on every startup.

You can read more at: http://linux.die.net/man/4/random
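The "seed once, then use a PRNG" pattern described above might look like the following sketch. Here os.urandom is used as a stand-in for opening /dev/random directly (on Linux it draws from the same kernel pool, without blocking); that substitution is my assumption, not part of the original anecdote:

```python
import os
import random

# Read a small amount of kernel entropy once: 16 bytes = 128 bits.
# (os.urandom stands in for reading /dev/random directly.)
seed = int.from_bytes(os.urandom(16), "big")

# Seed a userspace PRNG once and reuse it for all subsequent random
# numbers, instead of draining the kernel pool on every call.
rng = random.Random(seed)

tokens = [rng.getrandbits(32) for _ in range(5)]
print(tokens)
```

A long-running daemon would do the seeding once at startup, which is exactly what fixed the cron-job problem described above.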