Approximately how many bits of entropy are contained in each byte read from /dev/random and /dev/urandom? Data from /dev/random comes from the kernel random-number generator, while /dev/urandom uses a CPRNG, so although the number of bytes read from each may be identical, the amount of actual entropy read from each would not be.
Are there any best practices and/or studies estimating how much entropy is generally contained in each byte of data read from these sources?
For example, suppose I needed 80 bits of entropy for a secure token and the token was being generated by reading data from /dev/urandom. If I knew that each byte of data read from /dev/urandom contained approximately 4 bits of entropy, I could read 20 bytes of data and use a hash function to generate the token.
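A minimal sketch of the approach described above, assuming the question's hypothetical estimate of roughly 4 bits of entropy per byte; the figure of 4 bits and the choice of SHA-256 are illustrative only, not a recommendation:

```python
import hashlib

# Hypothetical premise from the question: assume each byte read from
# /dev/urandom carries roughly 4 bits of entropy (illustrative only).
ENTROPY_BITS_NEEDED = 80
ASSUMED_BITS_PER_BYTE = 4
bytes_to_read = ENTROPY_BITS_NEEDED // ASSUMED_BITS_PER_BYTE  # 20 bytes

with open("/dev/urandom", "rb") as f:
    raw = f.read(bytes_to_read)

# Hash the raw bytes down to a fixed-size token
# (SHA-256 chosen arbitrarily; 20 hex characters = 80 bits).
token = hashlib.sha256(raw).hexdigest()[:20]
print(token)
```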
So long as the CPRNG has ever been properly seeded, /dev/random and /dev/urandom have no detectable differences in behavior. It's like the difference between water and holy water -- there's a difference in the way you make them, but no test can tell one from the other. There is no point in reading more bytes of data from a single CPRNG and hashing them: that's precisely what the CPRNG already does internally.
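To illustrate the point, a minimal sketch: if 80 bits are needed (the amount from the question), reading 10 bytes directly from /dev/urandom is sufficient, with no extra read-more-and-hash step:

```python
# /dev/urandom output already comes from a seeded CPRNG, so reading exactly
# the number of bytes needed is enough; no additional hashing is required.
with open("/dev/urandom", "rb") as f:
    token = f.read(10)  # 10 bytes = 80 bits

print(token.hex())
```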
The man page (man 4 random) contains quite a lot of useful information about /dev/random and /dev/urandom.

If I understand it correctly, the idea is that as long as the overall entropy in your system is high enough, it does not matter: taking more data from /dev/urandom than there was entropy should not be a problem, since exploiting the fact that the "actual entropy" in, for example, the 2000 bits you use is only 512 bits would require breaking the CPRNG or guessing the 512 bits, either of which is still far beyond what brute force can achieve.

If you actually want to measure the effect of reading from either of these two devices on the entropy pool, you can read /proc/sys/kernel/random/entropy_avail, which contains the current number of bits of entropy available.
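For example, a small sketch (assuming a Linux system where this proc file exists; a plain cat of the same file works equally well) that samples the available entropy before and after a read from /dev/urandom, with the read size chosen arbitrarily:

```python
def entropy_avail():
    # Current kernel estimate of the entropy (in bits) in the pool.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        return int(f.read().strip())

print("before:", entropy_avail())

# Read some bytes from /dev/urandom (amount chosen arbitrarily for illustration).
with open("/dev/urandom", "rb") as f:
    f.read(64)

print("after:", entropy_avail())
```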