
When I run:

sudo watch tail /proc/sys/kernel/random/entropy_avail && dd if=/dev/random of=/dev/zero

It shows that available entropy is "consumed".

What I can't understand is:

Why does generating random bits consume entropy? Why does generating random bits cause that entropy to become useless?

hunter
Carol
  • Short answer: That's just how /dev/random works in Linux. If you read from it then it subtracts from an entropy counter how many bits you requested. It doesn't need to work that way. It's disagreed upon whether this should be the behavior of /dev/random. As long as at least 128 bits of entropy have been in the system since boot time (or since the last compromise of kernel memory) then both devices produce unpredictable output. – Future Security Sep 28 '18 at 21:01
  • The getrandom system call with flags == 0 is preferable, in my opinion. It only blocks before there is enough entropy to securely seed a CSPRNG. After that it ignores the entropy counter and produces RNG output without blocking. The urandom, random, and getrandom interfaces all use the same source of randomness. The OS continues to incorporate extra entropy into that randomness pool, in case entropy was over-estimated or kernel memory was compromised. – Future Security Sep 28 '18 at 21:18
  • @FutureSecurity If /dev/random didn't decrement the entropy counter as it's read, wouldn't that then be an infinite source of entropy? And this question moot? – Paul Uszak Sep 29 '18 at 21:55
  • 1
    Entropy isn't used up. – Future Security Sep 29 '18 at 22:18
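
The getrandom behavior described in the comments can be tried directly; here is a minimal sketch, assuming Linux and Python 3.6+ (where os.getrandom wraps the system call):

    import os

    # flags=0: blocks only until the kernel pool has been seeded once,
    # then keeps producing output without ever blocking again.
    key = os.getrandom(32, 0)   # 256 bits of key material
    print(key.hex())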

2 Answers


Why does generating random bits consume entropy?

For the same reason that to generate true random bits using coin throws, one needs to throw a coin for each bit generated. Any attempt to use fewer throws is doomed (against a computationally unbounded adversary, or if the technique relies on a few simple bit combinations).

Why does generating a random bit cause that entropy to become useless?

Revealing a generated random bit is what makes it useless towards generating further secure random bits.

There's a solution: generate a sufficient number of random bits (say, 256 or more) and use these, exclusively, to seed a Cryptographically Secure Pseudo-Random Number Generator, which will supply an endless stream of bits that have all the testable properties of truly random bits. That's part of the strategy of Unix's /dev/urandom.
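
To make that strategy concrete, here is a toy sketch (Python; the ToyCSPRNG name and hash-counter construction are illustrative only, not the kernel's actual generator): seed once with 256 bits of true randomness, then expand deterministically forever.

    import hashlib, os

    class ToyCSPRNG:
        # Toy hash-counter generator: seeded once, never reseeded.
        def __init__(self, seed: bytes):
            self.seed = seed        # 32 secret bytes, never revealed
            self.counter = 0

        def read(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                out += hashlib.sha256(self.seed + self.counter.to_bytes(8, "big")).digest()
                self.counter += 1
            return out[:n]

    rng = ToyCSPRNG(os.urandom(32))   # one-time 256-bit seed
    print(rng.read(16).hex())         # stream continues without "consuming" more entropy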

fgrieu
  • If one has a CPRNG seeded with e.g. 256 bits, and its state is never exposed, by what practical means could an adversary given an unbounded number of bits from it deduce anything about any future bits? The biggest reasons I see for needing an inflow of entropy would be to ensure that the speed with which entropy enters a system exceeds the speed of side-channel state leakage. – supercat Sep 28 '18 at 15:33
  • To be technical, doesn't /dev/random also use this strategy? – Captain Man Sep 28 '18 at 17:20
  • @supercat Cryptographic security is an assumption. Consider state recovery for a traditional PRNG such as Mersenne-Twister. – conchild Sep 28 '18 at 17:52
  • @conchild: If a good CSPRNG with e.g. 1024 bits of state is known to have been loaded with an initial state with 256 random bits and 768 bits of the mathematical constant phi, and the first 10,000 or so bits of output are discarded, what practical benefit could an attacker gain from knowing 768 out of 1024 bits of initial state? If the CSPRNG only kept 256 bits of state, that could be a weakness, but after 10,000 rounds a CSPRNG that's any good should pretty effectively dilute the known aspects of state. So what practical attacks would there be? – supercat Sep 28 '18 at 18:06
  • 1
    @supercat The difference is that with a CSPRNG, you have to assume that it is good mathematically. There's no mathematical proof that can show that they are indistinguishable from a RNG, just that we haven't found a means yet. The goal of an entropy pool is to strive to produce bits that are considered physically impossible to predict. One depends on assumptions of mathematics, the other depends on assumptions of physics. – Cort Ammon Sep 28 '18 at 21:39
  • @CortAmmon: Certainly mixing in additional entropy is a good idea, for a variety of reasons. From a practical perspective, though, I would think that the amount of entropy required would depend upon how often use of the pool passes between entities that don't trust each other, rather than upon the amount of data each consumer pulls from it. – supercat Sep 28 '18 at 22:06
  • @fgrieu, in https://www.blackhat.com/us-15/briefings.html#understanding-and-managing-entropy-usage it is highlighted that the OpenSSL PRNG sets its seed only once (during initialization) and does not reseed afterwards. I cannot decide whether that is secure or not. – Carol Sep 29 '18 at 16:25
  • @fgrieu, so I understand that if a CSPRNG takes its entropy from /dev/urandom it does not need to reseed, because 1) it is a CSPRNG and 2) the seed was not exposed to other processes in the system, since /dev/urandom ensures exclusive access, yes? – Carol Sep 29 '18 at 16:25

As random numbers are generated, the Linux RNG wants to reseed itself, and this reseeding consumes entropy. Reseeding is designed to provide both prediction resistance and backtracking resistance. Many things generate random numbers and can therefore consume entropy on a system; just starting a process uses random numbers (think ASLR), and as these things occur, entropy is consumed.

There was a presentation called Understanding and Managing Entropy Usage at Black Hat a few years back. I think you might find it interesting.

You also might want to look into how an RNG like Fortuna is implemented. Fortuna uses multiple entropy pools for reseeding.
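
As a rough sketch of Fortuna's multi-pool idea (simplified; names and sizes are illustrative, see the Ferguson–Schneier design for the real thing): entropy events are spread round-robin over 32 pools, and reseed number k drains pool i only when 2^i divides k, so higher pools accumulate entropy for exponentially longer between uses.

    import hashlib

    NUM_POOLS = 32
    pools = [hashlib.sha256() for _ in range(NUM_POOLS)]
    event_index = 0
    reseed_count = 0

    def add_entropy(event: bytes):
        # Entropy events are distributed over the pools round-robin.
        global event_index
        pools[event_index % NUM_POOLS].update(event)
        event_index += 1

    def reseed_material() -> bytes:
        # Reseed k drains pool i only when 2**i divides k, so pool i is
        # emptied every 2**i reseeds and has time to gather more entropy.
        global reseed_count
        reseed_count += 1
        material = b""
        for i in range(NUM_POOLS):
            if reseed_count % (2 ** i) == 0:
                material += pools[i].digest()
                pools[i] = hashlib.sha256()   # start the pool afresh
        return hashlib.sha256(material).digest()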

Swashbuckler