spesmilo/electrum: Electrum Bitcoin Wallet (https://electrum.org)

Master key should be seeded with blocking entropy sources #507

Closed (clinta closed this issue 5 years ago)

clinta commented 10 years ago

Master key generation is done with ecdsa.util.randrange, which gets its entropy from os.urandom, a non-blocking source of entropy. This means that if a system is low on entropy, urandom will continue to emit low-quality pseudorandom data. A function should be used which gets entropy from /dev/random instead, which will block and refuse to output bits if entropy is low.

I think this could be a significant issue if someone is generating wallets offline from a live-boot environment, since such environments have far fewer entropy sources available.

An even better solution would be to collect entropy from the user, as TrueCrypt does, by asking the user to move the mouse around.
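For concreteness, here is a minimal sketch of what the proposal would look like, assuming the python-ecdsa API (randrange accepts an entropy callable) and the standard Linux /dev/random device; this is illustrative, not Electrum's actual code:

```python
# Hypothetical sketch: pass a blocking entropy source to ecdsa.util.randrange
# instead of the default os.urandom. Reading /dev/random may stall until the
# kernel's entropy estimate is high enough (Linux-specific behaviour).
from ecdsa import SECP256k1
from ecdsa.util import randrange

def blocking_entropy(numbytes):
    """Return numbytes of randomness from /dev/random, blocking if needed."""
    with open("/dev/random", "rb") as f:
        return f.read(numbytes)

# Candidate master secret exponent in [1, order).
secret = randrange(SECP256k1.order, entropy=blocking_entropy)
```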

dabura667 commented 10 years ago

As someone who uses the offline function on a LIVE USB, I would agree with this.

I hope the mouse-moving function is added at a later time to increase entropy.

EagleTM commented 9 years ago

It's definitely an important enhancement. At least on Linux (live environments, for example) you could / should run "haveged" as an entropy daemon. Won't help us on Mac / Win, though.

ecdsa commented 9 years ago

this requires GUI interaction; most of the time the GUI will not be needed at all, because the entropy pool is enough.

ecdsa commented 9 years ago

interesting read http://www.2uo.de/myths-about-urandom/

clinta commented 9 years ago

The end of that is particularly important for offline wallet generation:

On Linux, unlike FreeBSD, /dev/urandom never blocks. Remember that the whole security rested on some starting randomness, a seed?

Linux's /dev/urandom happily gives you not-so-random numbers before the kernel even had the chance to gather entropy. When is that? At system start, booting the computer.

On Linux it isn't too bad, because Linux distributions save some random numbers when booting up the system (but after they have gathered some entropy, since the startup script doesn't run immediately after switching on the machine) into a seed file that is read next time the machine is booting. So you carry over the randomness from the last running of the machine.

[...]

And it doesn't help you the very first time a machine is running, but the Linux distributions usually do the same saving into a seed file when running the installer.

The author is correct that in most cases urandom is fine. However, for live-booting a non-persistent system to generate an offline wallet, a blocking entropy source is important. It's even more important when generating offline, because network timing is one of the sources used for entropy, and it's not available.

ecdsa commented 9 years ago

@clinta how does the truecrypt entropy collection differ from using /dev/random ?

ecdsa commented 9 years ago
<ThomasV> gmaxwell: your opinion on https://github.com/spesmilo/electrum/issues/507 ?
<gmaxwell> ThomasV: that complaint is somewhat misunderstanding /dev/urandom (and CSPRNGs) in general. There isn't such a thing as "low on entropy" for such constructs; so long as it has ever had sufficient entropy gathered (e.g. 128+ bits) the output will forever be unpredictable -- barring an improbable break of the interior cryptographic function (and in that case you're likely screwed regardless).
<gmaxwell> What actually _is_ an interesting concern is when the rng has never been initialized at all; linux has a newish syscall that has flags for precisely that case.
<gmaxwell> most of the code out there for "move the mouse around" and such is really horrifying. (e.g. some bitcoin key generator thing simply polled the mouse position a couple times in a tight loop and then combined that with the time.....)
<ThomasV> oh I thought the "mouse moving" was only going to act on /dev/random's entropy estimate
<gmaxwell> Basically, the urandom behavior is really what virtually everything wants. Except for this corner case around initial startup. Really it should be changed to block in that case, but it cant because userspace starts reading it super early in boot and would get stuck.
<gmaxwell> ThomasV: nah, that's not reliable at all, sadly. No reason to believe the mouse activity will be credited against it. Linux went through a cycle of removing randomness credits from drivers for a number of years, until it got to a point where basically only the timer interrupt added "randomness".
<gmaxwell> Seems to have gotten somewhat better recently.
<ThomasV> I see
<ThomasV> "please generate timer interrupts to increase your entropy" :)
<ThomasV> gmaxwell: did you know the page I linked at the bottom? is it correct?
<gmaxwell> looking at it now, haven't seen it before.   Yes, it's correct (it simplifies the design of the linux randomness infrastructure, but it points out the simplification)
<gmaxwell> It's also correct about other people's opinions on the subject.
<gmaxwell> Realistically, for our usage in generating 'long term' keys perhaps the cost of /dev/random makes sense: just because we shouldn't be wasting our time arguing with panicking frightened users, and there is little risk of the user bypassing the randomness when it does actually block. (I qualify long term keys because all other places where our program uses randomness should _not_ use /dev/random, because the blocking will be problematic for sure and may lead to crazy bypassing.)
<gmaxwell> Another point that page doesn't point out is that if you do have an application for an information-theoretic RNG source, linux /dev/random is very likely non-suitable. Even if there is adequate entropy in it, the output may still be structured enough to make it distinguishable from random to a computationally unbounded attacker.
<gmaxwell> (That's not our application set in any case; but it's probably an argument that /dev/random basically shouldn't exist. The only applications it might be better for, it's still not suitable for.)
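To illustrate the getrandom() distinction gmaxwell mentions, here is a hedged sketch using Python's os.getrandom wrapper (available on Python 3.6+ with Linux 3.17+); the helper name and threshold are made up for illustration:

```python
import os

def pool_initialized():
    """Probe whether the kernel CSPRNG has ever been seeded."""
    try:
        # GRND_NONBLOCK makes the call fail instead of waiting if the pool
        # has never been initialized (the early-boot corner case above).
        os.getrandom(1, os.GRND_NONBLOCK)
        return True
    except BlockingIOError:
        return False

if pool_initialized():
    seed = os.getrandom(32)  # 256 bits; after initialization this never blocks
else:
    print("Kernel CSPRNG not yet seeded; wait or gather entropy before generating keys.")
```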
fluffypony commented 9 years ago

@clinta in addition to all the above, there is no practical case where waiting for /dev/random is magically less problematic than using /dev/urandom. This /dev/urandom argument seems to come up over and over again with any cryptography software, but the problem is that the "solution" that is pitched is often "use /dev/random", which is bad and they should feel bad.

The correct "solution", as pointed out by Thomas Ptacek a few years ago, is to seed /dev/urandom from /dev/random on startup if you are terribly concerned about /dev/urandom (you can do this by reading from /dev/random and writing it to /dev/urandom). This is mostly what Thomas correctly calls a "rubber chicken security measure", and doesn't do anything except make us "feel better".
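For reference, a minimal sketch of that seeding step (an assumption about how one might script it, not something Electrum itself does). Note that a plain write mixes the data into the pool but does not raise the kernel's entropy estimate; that would require the privileged RNDADDENTROPY ioctl:

```python
# "Rubber chicken" measure: copy some bytes from /dev/random into /dev/urandom.
with open("/dev/random", "rb") as src, open("/dev/urandom", "wb") as dst:
    dst.write(src.read(64))  # ~512 bits, far more than a CSPRNG seed needs
```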

So there are two points of practicality that must be considered when having this conversation:

  1. The actual amount of entropy required to seed a PRNG is stupidly small (200 bits). Normal, installed systems are not vulnerable to a lack of entropy except at first install, as they save /dev/random's state between boots (and rekey when possible to prevent a loss of state compromising you very far). Also, the generation of sufficient entropy so as to make /dev/urandom "safe enough" happens within seconds under most conditions. Thus the only three really vulnerable systems are first-boot VMs (the lack of a physical mouse / keyboard can slow down the initial creation of entropy), embedded systems, and, as you point out, a live CD environment such as Tails. Embedded systems (routers, for example) are an unlikely place to be running Electrum, so there's no point in trying to fix that. VMs are a possible environment where Electrum would be run, but as it's a more "permanent" environment it's unlikely that someone is firing up Electrum within seconds after boot. Thus we're left with live boot environments, so...
  2. The only real live boot environment worth talking about in this context is Tails. Since Electrum has a bunch of prerequisites or has to be downloaded, it's very unlikely that a first-boot Ubuntu liveCD/liveUSB would have insufficient time to generate entropy. On the other hand, Electrum is part of Tails, so it's reasonable to assume there is some measure of risk there. Here's the thing: Tails knows that people will run cryptographically-sensitive applications, and they deal with the entropy problem accordingly. The way they handle this is by running haveged, which uses the HAVEGE algorithm (well, a more modern implementation, but the same basic idea). Haveged relies on processor tick timing (processor "flutter"), L1 cache miss events, and all sorts of other hardware-level events that are basically unpredictable due to the imperfect nature of hardware manufacturing and the variable nature of attached hardware devices that can trigger all sorts of state changes.

In fact, if you're on a VM or even an embedded device then installing haveged is a pretty-good-idea (with some caveats on VMs, but we're talking generally). Of course, a dedicated hardware PRNG will always be preferable, but practically speaking haveged runs against the AIS-31 test suite, which is what the German Federal Office for Information Security uses to test hardware RNGs, TRNGs, etc.

Of course, haveged is not perfect, but for the only possible edge case where Electrum might be started up in a freshly booted environment with "low" entropy (ie. on Tails) haveged makes it a non-issue.

mperklin commented 9 years ago

@fluffypony,

Your post was both well-written and well-informed.

It's always better to collect entropy from a number of sources and combine them with a cryptographic digest; however, as you point out, /dev/urandom should suffice in almost all cases. In fact, the CryptoCurrency Security Standard (CCSS) requires a combination of multiple inputs in order to reach CCSS level 3.

You also point out very accurately that the use cases of Electrum help narrow down the situations where it's desirable to augment the entropy collection, and that the most common use case (Tails) makes use of haveged, which is an excellent method of augmenting entropy on such systems.

(haveged kicks butt!)

While your post covered the vast majority of cases, I think the initial comments by @clinta, @EagleTM, and others apply to the remaining 1% of cases that your post did not cover. It's theoretically possible that the PRNG in use by the operating system has unknown back doors which make /dev/urandom's output conform to some known values. It's also theoretically possible that someone's /dev/urandom is somehow misbehaving because of a technical issue on the operating system which could have a similar effect.

Put simply, whether it's maliciously or benignly, it's possible /dev/urandom might have an issue in a very small set of situations.

Fixing the OS and/or its PRNG is beyond the scope of Electrum, however GUARANTEEING good entropy for good key generation is within scope.

Adding the ability to collect user-generated entropy either by mouse/keyboard input or by pasting pre-generated input from a TRNG, set of dice, or cards would make for a great addition to Electrum.

A simple textbox can be added when creating a new seed to collect user-generated entropy. The user can then move their mouse/keyboard and watch the textbox fill in front of their eyes, OR untick a box that allows them to paste their own entropy into the textbox themselves (after a sufficiently clear warning message to ensure the user knows what they're doing).

Regardless of how that textbox was filled, it can then be combined with the existing entropy using a digest and Electrum can carry on its merry way.
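A rough sketch of that combining step, assuming SHA-256 as the digest; the function name and sizes are illustrative, not an actual Electrum interface:

```python
import hashlib
import os

def combined_seed(user_entropy, nbytes=32):
    """Mix user-provided entropy (mouse/keyboard/dice/TRNG paste) with
    os.urandom via a digest, so the result is never weaker than either
    source alone."""
    h = hashlib.sha256()
    h.update(os.urandom(nbytes))              # existing OS entropy
    h.update(user_entropy.encode("utf-8"))    # user-generated input
    return h.digest()[:nbytes]
```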

fluffypony commented 9 years ago

@mperklin I don't disagree with what you've said, and I do think that a nicely hidden textbox (hidden until revealed by clicking on some button or ticking some box that newbies are unlikely to tick) for those that want to manually provide entropy is not a bad idea. The caution here is that we don't want people to end up using it like a brainwallet. I'm not sure how to prevent that, except maybe to make it numeric only and have a minimum length that equates to ~200-bits?

Also I suppose if we can reliably get a report (under Linux, really, as FreeBSD / OS X don't have this problem) as to the amount of available entropy then we could "suggest" using the textbox. Presumably this would then appear during cold-boot / first-boot situations etc.

Edit: forgot to add that I (personally) hate the way GPG comes to a complete halt during key generation and then suggests the user "use the disks" or "move the mouse" to add entropy. Meanwhile, back at the ranch, this has caused endless issues for people ssh'd into VMs that may have little in the way of disk activity, and no virtualised mouse. I 100% support us figuring out a way to ensure secure key generation, but not at the expense of crippling the user experience that much. We should endeavour to find a reasonable middle-ground if at all possible.

clinta commented 9 years ago

Haveged is great, but Tails is not the only live environment which may have Electrum installed. Creating a custom live environment is a trivial task today, and there's no indication that doing so with Electrum is unsafe. If haveged is essential to running Electrum securely in a live-boot environment, then perhaps haveged should be made a dependency in the official packages.

The behavior of GPG, while annoying, is safe. If estimated entropy is low, the program should not be allowed to continue generating keys. This is the behavior I'd expect out of Electrum, default to safe behavior even if it's annoying. Then try to fix the annoyance by adding some entropy collection into the gui if you like.

fluffypony commented 9 years ago

@clinta we can't have it as a dependency considering that we're talking about a VERY small percentage of runs on a single operating system. I also think that creating a live environment is out of scope, unless a contributor wants to create AND maintain that indefinitely. It's also not acceptable to so fundamentally break the UX that users are frustrated and discouraged from using Electrum, especially since more than 99% of the time this is a non-issue.

I do think that we can attempt to detect whether there is sufficient entropy and act accordingly (by suggesting, but not forcing, the user to manually provide numeric entropy), but beyond that I don't see a reason to overcomplicate things for contributors and users. There are too many corner cases (e.g. a hardware RNG seeding /dev/urandom directly) to rely on /proc/sys/kernel/random/entropy_avail being 100% correct and taking some forced action.
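A sketch of the "suggest, don't force" check being discussed, reading the kernel's estimate from procfs; the 256-bit threshold is an assumption, and as noted in the thread the number can be misleading:

```python
def entropy_estimate():
    """Return the kernel's current entropy estimate in bits, or None if unavailable."""
    try:
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            return int(f.read().strip())
    except (IOError, OSError, ValueError):
        return None  # not Linux, or procfs unavailable

est = entropy_estimate()
if est is not None and est < 256:
    print("Entropy estimate looks low; consider providing extra entropy manually.")
```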

clinta commented 9 years ago

Do you think the corner cases where entropy_avail is not accurate are more common than cases where it is accurately reporting low entropy?

In the case of frustrating users, as you said, it would be very rare that they would be frustrated, because it would only happen if available entropy is reported as low. In that situation, are the consequences of frustrating a user worse than the consequences of generating an insecure key?

gmaxwell commented 9 years ago

On many Linux systems the entropy reported is almost always zero, because ~only the timer interrupt adds to the counter and the maximum is only about 4kbits. For example, I just checked on the laptop I'm writing this message on and see that entropy_avail returned 304. Some of the latest kernels are somewhat better.

The invocation of "low entropy", as if there were a resource being consumed, reflects a technically incorrect understanding of the system. Once initially seeded, all of the output is equally strong, up to a computational assumption (which, if violated, likely undermines all security in any case). In that sense, the estimate is never accurate, except perhaps in the first few moments after booting.

clinta commented 9 years ago

Another aspect of this is that this would never be nearly as annoying as GPG. GPG needs 4096 bits; Electrum needs only 160. If you drain entropy_avail by catting /dev/random and then watch it, it builds back up to 160 fairly quickly, within 2-3 seconds on my system.

The correct solution would be for Linux to work like FreeBSD and only block at boot; after boot /dev/random would be the same as /dev/urandom. But without that option, I still think that for something as important as a bitcoin master key, erring on the side of safety is best.

ghost commented 8 years ago

“The correct solution would be for Linux to work like FreeBSD and only block on boot _then after boot /dev/random is the same as /dev/urandom_…”

(emphasis mine)

Which practically means you’re proposing to use what /dev/urandom provides.

Remember: Both /dev/random and /dev/urandom collect most of their entropy at boot. Then, after boot, whenever the entropy pool is empty (or too low) all attempts to read from /dev/random will block until additional environmental noise (aka "re-seeding data") is gathered from the system. In contrast, /dev/urandom relies on applying the CSPRNG (cryptographically secure PRNG) to the data it already has to produce "more randomness". By doing so it is able to remain non-blocking, as it doesn't wait until it gets more re-seeding data from the system.

FreeBSD and OS X removed the distinction between /dev/urandom and /dev/random, meaning that both devices behave identically. FreeBSD's kernel crypto RNG doesn't block regardless of whether you use /dev/random or urandom (unless it hasn't been seeded, in which case both devices block alike). This is comparable to /dev/urandom functionality on Linux systems.

…but without that option I still think that for something as important as a bitcoin master key, erring to the side of safety is best.

Fun fact: the CSPRNG on non-FreeBSD (and non-OS X) systems is the same for both /dev/random as well as /dev/urandom. In fact, both only differ in functionality when it comes to seeding and re-seeding. Therefore, there is no reason to even think about using /dev/random.

And no, I do not expect you to believe me just because I say so… instead, simply acknowledge some of the things cryptography experts say about "/dev/random" and "/dev/urandom". (I'll spare you the rest of a truckload of links along the lines of this, this, this, this, this and this, and simply repeat a quote from Bernstein which can be found at his website):

Is there any serious argument that adding new entropy all the time is a good thing? The Linux /dev/urandom manual page claims that without new entropy the user is "theoretically vulnerable to a cryptographic attack", but (as I've mentioned in various venues) this is a ludicrous argument—how can anyone simultaneously believe that

  • we can't figure out how to deterministically expand one 256-bit secret into an endless stream of unpredictable keys (this is what we need from urandom), but
  • we can figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.)?

Disagree nevertheless? Here are some tips for the road…

Sometimes, I really have to wonder why tin-foil-hatted users don't simply check the source code of the stuff they rely on instead of building on non-verified assumptions and trusting that those assumptions are safe enough to build their security upon. Hint: Stop trusting and start knowing!

If you're really paranoid enough to think you can't trust /dev/urandom, then you should not be trusting /dev/random either. Instead, you should remind yourself of the fact that the Linux kernel supports several hardware random number generators. When used, their raw output can be obtained via /dev/hwrng. Need entropy? There's nothing better than using a device dedicated to providing just that. Even I own such a device. I can plug my portable entropy thingy into a USB port quickly and easily… but I rarely find myself in a scenario where I need to do so (as I rarely do my crypto stuff on AIX systems).

And for those less paranoid but still worried about entropy, there are software-based solutions like "timer_entropyd", "randomsound", and the already mentioned "haveged". That is, assuming you trust those implementations and the CSPRNGs they use. (Yep, more source code for you to check and verify.)

Hope that helps…

clinta commented 8 years ago

Both /dev/random and /dev/urandom collect most of their entropy at boot.

This is the opposite of what is claimed both in the Linux source comments and in the FreeBSD man page, which indicate that at boot the random devices have the least entropy available.

FreeBSD’s kernel crypto RNG doesn’t block regardless of whether you use /dev/random or urandom (unless it hasn’t been seeded – in which case both devices block alike). This is comparable to /dev/urandom functionality on other Linux systems,

I can't find anything indicating that Linux blocks when the system hasn't been seeded. I certainly could be wrong, but I don't see any confirmation of that in any documentation or in the source code. And that is the concern we are talking about. Live CDs are by their nature unseeded. They do not have a seed from a previous boot in /var/run/random-seed, and if they did, that seed would be the same for everyone using that particular live distribution.

both only differ in functionality when it comes to seeding and re-seeding. Therefore, there is no reason to even think about using /dev/random.

Seeding is exactly the concern we are discussing in this issue.

Here are some quotes from your truckload of links:

the only instant where /dev/urandom might imply a security issue due to low entropy is during the first moments of a fresh, automated OS install

On Linux, if your software runs immediately at boot, and/or the OS has just been installed, your code might be in a race with the RNG. That’s bad, because if you win the race, there could be a window of time where you get predictable outputs from urandom. This is a bug in Linux, and you need to know about it if you’re building platform-level code for a Linux embedded device. This is indeed a problem with urandom (and not /dev/random)

It's disappointingly common for vendors to deploy devices where the randomness pool has never been initialized; BSD /dev/urandom catches this configuration bug by blocking, but Linux /dev/urandom (unlike Linux /dev/random) spews predictable data, causing (e.g.) the widespread RSA security failures documented on http://factorable.net.

I can't help but think you simply saw an opportunity to be condescending while not actually reading what the concerns voiced in this issue are. I agree that on a system that has been running for a time, has been installed to disk, and has had the opportunity to be adequately seeded with random data, there is no security difference between urandom and random. But this is not the case with live CDs. And generating master keys on an offline system via a live CD is an expected use.

ghost commented 8 years ago

This is opposite of what is claimed both in the linux source comments and the FreeBSD man page which indicate that at boot the random devices have the least available entropy.

The fact that they try to collect most of their entropy at boot time does not contradict the fact that boot is exactly the time when the system provides little entropy. More on that is mentioned later in this post.

Seeding is exactly the concern we are discussing in this issue.

… which is why 99% of my "comment" talks about just that (and even closes with some tips which might help someone like you).

I can't help but think you simply saw an opportunity to try to be condescending while not actually reading what the concerns voiced in this issue actually are. I agree that on a system that has been running for a time, and has been installed to disk and has had opportunity to get adequately seeded with random data that there is no security difference between urandom and random. But this is not the case with Live CDs. And generating master keys on an offline system via a live cd is an expected use.

Well, I am sorry that you opted to downgrade my contribution, which was actually meant to give you a helpful heads-up.

To keep it short and crispy, I'll only mention that I am amazed that you seem to think that about 1 hour and 40 minutes suffice to:

  1. read my post,
  2. check and verify the links I provided,
  3. read and understand part of the Linux kernel sourcecode,
  4. do a complete security assessment of those Linux kernel randomness functionalities,
  5. write a reply, and
  6. still be convinced you actually understand what you're talking about.

Even more amazing is the fact you try to point me to snippets from the links I shared because you seem to think it may put some kind of emphasis on your argument(s). It doesn’t, because…

So I'm the maintainer for Linux's /dev/random driver. I've only had a chance to look at the paper very quickly, and I will look at it more closely when I have more time, but what the authors of this paper seem to be worried about is not even close to the top of my list in terms of things I'm worried about.

First of all, the paper is incorrect in some minor details; the most significant error is its (untrue) claim that we stop gathering entropy when the entropy estimate for a given entropy pool is "full". Before July 2012, we went into a trickle mode where we only took in 1 in 4096 values. Since then, the main way that we gather entropy, which is via add_interrupt_randomness(), has no such limit. This means that we will continue to collect entropy even if the input pool is apparently "full".

This is critical, because secondly their hypothetical attacks presume certain input distributions which have an incorrect entropy estimate --- that is, either zero actual entropy but a high entropy estimate, or a high entropy, but a low entropy estimate. There has been no attempt by the paper's authors to determine whether the entropy gathered by Linux meets either of their hypothetical models, and in fact in the "Linux Pseudorandom Number Generator Revisited"[1], the analysis showed that our entropy estimator was actually pretty good, given the real-life inputs that we are able to obtain from an actual running Linux system.

[1] http://eprint.iacr.org/2012/251.pdf

The main thing which I am much more worried about is that on various embedded systems, which do not have a fine-grained clock, and which is reading from flash which has a much more deterministic timing for their operations, is that when userspace tries to generate long-term public keys immediately after the machine is taken out of the box and plugged in, that there isn't a sufficient amount of entropy, and since most userspace applications use /dev/urandom since they don't want to block, that they end up with keys that aren't very random. We had some really serious problems with this, which was written up in the "Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices"[2] paper, and the changes made in July 2012 were specifically designed to address these worries.

[2] https://www.factorable.net/paper.html

However, it may be that on certain systems, in particular ARM and MIPS based systems, where a long-term public key is generated very shortly after the first power-on, that there's enough randomness that the techniques used in [2] would not find any problems, but that might be not enough randomness to prevent our friends in Fort Meade from being able to brute force guess the possible public-private key pairs.

Speaking more generally, I'm a bit dubious about academic analysis which are primarily worried about recovering from the exposure of the state of the random pool. In practice, if the bad guy can grab the state of random pool, they probably have enough privileged access that they can do much more entertaining things, such as grabbing the user's passphrase or just grabbing their long-term private key. Trying to preserve the amount of entropy in the pool, and making sure that we can extract as much uncertainty from the system as possible, are much higher priority things to worry about.

That's not to say that I might not make changes to /dev/random in reaction to academic analysis; I've made changes in reaction to [2], and I have changes queued for the next major kernel release up to make some changes to address concerns raised in [1]. However, protection against artificially constructed attacks is not the only thing which I am worried about. Things like making sure we have adequate entropy collection on all platforms, especially embedded ones, and adding some conservatism just in case SHA isn't a perfect random function are some of the other things which I am trying to balance as we make changes to /dev/random.

Source: https://news.ycombinator.com/item?id=6550256

Assuming you actually read what people point you to, you'll have noticed that the issues mentioned by the factorable.net paper (the ones you just tried to boomerang into my face) were taken care of years ago… back in July 2012. Likewise, other issues have since been solved too. Actually checking the source code and its changes over the past few years would have made this clear to you already.

Friendly tip: some research of your own would have helped you avoid the "relying on outdated information" pitfall you've managed to maneuver yourself into. And you can trust in the fact that such research definitely takes more than the 1.5 hours that passed between my post and your reply.

Even though I doubt you'll take any of my words as an attempt to help with a friendly pat on the back while also trying to keep a focus on the ticket/issue here, I'm still tempted to recommend (again) that you learn about cryptography, entropy, randomness, and current Linux kernel (as well as kernel driver) internals. In case of doubt, you could start by re-reading my post from scratch without prejudice, then check the linked pages, and as a cherry on top read the rest of that ycombinator thread I just mentioned – especially the latter might (or might not) be a real eye-opener for you and the issues you think you're seeing.

Oh, and from one friendly person to another: maybe fine-tune your communication skills just a little bit. See, at first glimpse I almost had the impression your post is some kind of personal attack – because I was trying to help, and not showing an attitude of patronizing superiority as you concluded. For peace’s sake, I’ll assume your post’s wording was not meant to “get personal”, but merely a poor choice of wording.

As for the cryptography-related things we've been handling here: I honestly wish you all the best with any learning endeavour you may (or may not) be willing to undertake for your own security's sake. Should questions arise, this crypto-related Q&A website might come in handy, as well as IACR's eprint archive. Cheers… I'm out.

clinta commented 8 years ago

which is why 99% of my “comment” talks about just that

Your comment points out that FreeBSD's and Linux's drivers operate differently in an unseeded state, yet nowhere do you address why this is not a concern. This is what gives the impression that you had not actually read the concerns raised in this issue.

I did read your post and the links you provided. I've also spent a reasonable amount of time researching this before I ever opened this issue. Every bit of information I could find included the caveat that freshly installed systems will have low entropy and that /dev/urandom will output poor-quality pseudorandom values on such a system.

Ts'o certainly has authority on this matter, and his comments on Hacker News are reassuring; I had not read them before. But finding those changes is hardly obvious, even now that I know what I'm looking for. The commit messages and source comments for the changes in July 2012 do not make it clear that these changes eliminate the risks of key generation on a fresh install.

I don't claim to be able to read and understand the Linux kernel source or do a security audit on the code. I'm also not in the business of developing the random driver. I only wish to consume it securely, and in doing so the primary source of information is the man page, which still includes the concerns I brought up when opening this issue. If consuming the random devices in Linux requires being able to do a security audit of C code, then there is no hope of creating secure software. If you know that the man pages are inaccurate, then perhaps it would be a more efficient use of your time to contribute to the Linux documentation.

I don't claim to be any authority on this matter, but many smart developers still encourage using /dev/random for persistent key generation. Examples include dm-crypt and, famously, GnuPG, which often leaves people frustrated as key generation stalls waiting for entropy. I think following the example of these widely respected projects is the prudent thing to do. And because Electrum only needs to generate a 256-bit number, rather than a 4096-bit prime, it is not likely to frustrate people the way GnuPG does, so making this change comes at almost no cost.

ghost commented 8 years ago

… the primary source of information is the man page, which still includes the conserns I brought up when starting this issue.

Please don't get me wrong, but that is what I (among other things) have been trying to point you to… the man page "concern" has been addressed several times during the past few years. The fact is that the potential attack described there is a purely "academic" one… meaning it's not practical (aka "cannot successfully be put to work in RealLife™").

Actually, the manpage says nothing else:

As a result, in this case the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist.

Translating this into more human language for better understanding: "Unless you are working for a three-letter governmental agency and know something the rest of the world doesn't, this isn't feasible."

Note that – while I am writing this – the best attack on SHA-1 (currently used internally as part of /dev/(u)random) is a “freestart collision” attack where researchers managed to break the full inner layer of SHA-1. Using this method, the cost of an SHA-1 collision attack can currently be estimated between $75,000 and $120,000 using computing power from Amazon’s EC2 cloud over a period of a few months.

Also, note that freestart collisions do not directly lead to actual collisions for SHA-1 and – besides that – /dev/(u)random does more than simply applying a SHA-1 hash. Theoretically, SHA-1 may be weak from an academical perspective. Yet, practically, no one has been able to pin-point and successfully exploit any weakness in the real world. (Academics would rejoice if they could – as publishing such findings would be their ticket to cryptographic fame. A company would rejoice if it could – as marketing such findings would put a truckload of cash in their pockets. And governmental agencies would rejoice if they could – but “never confirm, nor deny” things to “safeguard the nation/kingdom/whatever”.)

But let’s get back to the core of things…

To provide you with one last link which might reassure you regarding your concerns, I'd like to point you to an article explaining "the Linux random number generator". Most interesting for you will probably be the "Theoretic versus Computational Security", "/dev/random versus /dev/urandom", and "Cryptographically Secure Pseudorandom Number Generators" sections. The (rather long) article includes in-depth information and graphics that show the actual "flow" of the CSPRNG:

https://pthree.org/wp-content/uploads/2014/07/entropy.png

A scenario assuming your concerns were valid/correct:

Now, I sure hope not to step on your toes by writing all this… but if you’re worried about security, you should also be worried about HTTPS and SSL as electrum uses that too. Reason:

OpenSSL uses /dev/urandom when available. OpenSSH will use OpenSSL's random number source, if compiled with OpenSSL support, which uses /dev/urandom. OpenVPN will also use OpenSSL's random number source, unless compiled with PolarSSL. Guess what? /dev/urandom.

If you're not sure why I am quoting this specifically: let's say Electrum were modified to incorporate your suggestion, and after booting your system, launching your "modded-for-safety" Electrum, generating some key(s) using /dev/random, creating a transfer, signing it, and then wanting to broadcast that transfer – you'd notice your "concern" would still be there. It would merely shift towards the possibility of a MITM attack during the broadcast, as things would be as "vulnerable" (as your concern implies) due to the usage of /dev/urandom for initiating secure connections and transfers.

So – assuming you were correct – your "better keys" would be rendered "worthless" in this scenario, as your broadcasts would still be vulnerable based on your assumptions and concerns (just like every other HTTPS and SSL connection, e.g. when using your web browser). And I guess we can agree at least on one minor thing: we can't expect the Electrum devs to rewrite all available and potentially used SSL libraries out there to make them use /dev/random all the time, too. (Which may explain to you why my first post included that closing section going "Disagree nevertheless? Here are some tips for the road…")

clinta commented 8 years ago

I believe you are still misunderstanding my concerns. My concerns are not about possible depletion of the entropy pool during normal use. I am not worried about a theoretical attack against the PRNG which might be mitigated by constantly feeding in new entropy. I am not at all worried about /dev/urandom once it has been properly seeded.

The whole point of this issue is to address situations where a key is generated before the PRNG is properly seeded. In that event, urandom will output poor quality numbers, while random will block.

Here are the relevant sections of the man page, which does not present this issue as purely academic.

As a general rule, /dev/urandom should be used for everything except long-lived GPG/SSL/SSH keys.

A bitcoin master key is certainly long lived and fits into this recommendation.

If a seed file is saved across reboots as recommended below (all major Linux distributions have done this since 2000 at least), the output is cryptographically secure against attackers without local root access as soon as it is reloaded in the boot sequence, and perfectly adequate for network encryption session keys.

What you suggest about the larger dangers for transactions and https seems to miss the fact that long lived keys have different requirements than transient session keys.

ghost commented 8 years ago

…poor quality numbers,

That does not mean what you think it means. You’re obviously unaware of it, but there is a difference between the terminology within the realms of cryptography and your everyday language interpretation.

To me, this represents yet another clear indication of a knowledge-gap on your side and to be frank: I’ve now reached a point where I not only wrote a small book (:link: :link: :link:), but also decided to give up on explaining things to you. In the unlikely case you wonder about the reason: any effort to help you bridge your lack of knowledge (which would lift your unfounded and non-proven concerns) simply doesn’t make sense when you refuse to listen and/or learn.

So, all that’s left for me to do is to wish you good luck with your “issue”, your unfounded “concerns” and your related “suggestion”. Maybe ask yourself why no one has addressed your “issue” since you posted it 20 Dec 2013. Though, something tells me you won’t like (nor accept) your own answer to that either.

Honestly, if I were Thomas (aka @ecdsa ), I would have closed this as a _“non-issue”_ years ago. ∎


oittaa commented 6 years ago

Python 3.6 uses the modern getrandom() interface, which works a bit like /dev/urandom in FreeBSD.

https://docs.python.org/3.6/library/os.html#os.urandom

Changed in version 3.6.0: On Linux, getrandom() is now used in blocking mode to increase the security.
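A small demonstration of that behaviour, assuming Python 3.6+ on Linux 3.17+: os.urandom() (and therefore the secrets module) blocks until the kernel CSPRNG has been initialized once, and never blocks again afterwards:

```python
import os
import secrets

entropy = os.urandom(32)          # waits for pool initialization if necessary
token = secrets.token_bytes(32)   # same guarantee via the secrets module
print(entropy.hex(), token.hex())
```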

rstmsn commented 5 years ago

Is there any reason to assume that an Electrum seed (generated on an air-gapped macOS system, installed on and booted from a USB device) might be vulnerable in light of the above?

ldz1 commented 5 years ago

Another similar article, written by Python core developer Victor Stinner:

http://vstinner.github.io/pep-524-os-urandom-blocking.html

"Python 3.6 changes

The os.urandom() function now blocks in Python 3.6 on Linux 3.17 and newer until the system urandom entropy pool is initialized to increase the security."

Is there a reason to keep this issue open?

ecdsa commented 5 years ago

@ldz1 thanks for the link. this can indeed be closed