dhardy closed this issue 6 years ago
What is the difference between `FastRng` and `SmallRng`? Does small mean 64 bits of state?
I completely agree to remove `random()`, and to replace `weak_rng()` with a wrapper. I would not add any more wrappers than `ThreadRng`, `StdRng` and `SmallRng` (somewhat as discussed in https://github.com/dhardy/rand/issues/58).
Then we have a clear story about when to use which:

- `StdRng`: good statistical quality, unpredictable. In any situation where there could be an adversary, use this one.
- `SmallRng`: good statistical quality, will usually use (much) less memory and be faster than `StdRng`. But it can be predicted with a little effort.
- `ThreadRng`: a 'managed' `StdRng`, especially useful when you have no place to hold the state of the RNG.

When we start to add any more wrappers to differentiate between RNGs, where do we stop? Should we have wrappers to differentiate between the memory-efficient, SIMD-capable ChaCha and the fast, array-based HC-128 (both stream-cipher based), and the block-cipher-based AES-CTR DRBG that may have native hardware support? I think we should have just one wrapper for CSPRNGs: `StdRng`. With the same thinking we should have just one wrapper for 'normal' RNGs.
> What is the difference between `FastRng` and `SmallRng`? Does small mean 64 bits of state?
`SmallRng` is a name meant to differentiate it from cryptographic RNGs. It sounds better than 'simple' or 'weak'. And `FastRng` may sound like a promise to be faster than `StdRng`, which I don't think we can always keep. In any case, 64 bits of state is just not enough for many uses.
I agree that it is nice to only have two options, possibly suggesting other, more specialized crates in the documentation.
However, I'm not a fan of the name `SmallRng`. This name only makes sense to me if you know the implementation detail that `StdRng` has a large state, and usually you care more about the speed than the state size. I think `SmallRng` makes more sense for generators with 64 bits of state, which are useful to initialize generators with a large state.
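As an aside, that seeding use is easy to sketch. SplitMix64 below is an illustrative stand-in (it is the 64-bit-state generator Vigna recommends for seeding xorshift1024*-style generators); none of this is rand's API:

```rust
// SplitMix64: a generator with just 64 bits of state, used here to
// expand one seed word into the larger state array of another
// generator. Constants are the standard SplitMix64 ones.
fn splitmix64(state: &mut u64) -> u64 {
    *state = state.wrapping_add(0x9E3779B97F4A7C15);
    let mut z = *state;
    z = (z ^ (z >> 30)).wrapping_mul(0xBF58476D1CE4E5B9);
    z = (z ^ (z >> 27)).wrapping_mul(0x94D049BB133111EB);
    z ^ (z >> 31)
}

// Expand a single u64 seed into e.g. the 1024-bit state of a
// xorshift1024*-style generator.
fn seed_large_state(seed: u64) -> [u64; 16] {
    let mut s = seed;
    let mut state = [0u64; 16];
    for word in state.iter_mut() {
        *word = splitmix64(&mut s);
    }
    state
}
```

Because SplitMix64's output step is a bijection of its state, successive words are guaranteed distinct, which avoids the all-zero (or low-entropy) states that plain zero-padding of a small seed would produce.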
> And `FastRng` may sound like a promise to be faster than `StdRng`, which I don't think we can always keep.
Why not? When would `StdRng` be faster?
> What is the difference between `FastRng` and `SmallRng`? Does small mean 64 bits of state?
Yet to be defined and not really on topic here (it's partly a compromise since you two can't seem to agree on which PRNG to make default). But I agree with @pitdicker that it doesn't make much sense having two.
According to the benchmarks HC-128 is ~~not much slower~~ faster than Xorshift in some cases anyway, though it takes up much more memory. I think `SmallRng` is a reasonable name based on this:
```
test gen_bytes_hc128    ... bench:   441,867 ns/iter (+/- 15,706) = 2317 MB/s
test gen_bytes_xorshift ... bench: 1,138,435 ns/iter (+/- 65,539) =  899 MB/s
test gen_u32_hc128      ... bench:     2,943 ns/iter (+/- 252)    = 1359 MB/s
test gen_u32_xorshift   ... bench:     4,383 ns/iter (+/- 142)    =  912 MB/s
test gen_u64_hc128      ... bench:     4,309 ns/iter (+/- 255)    = 1856 MB/s
test gen_u64_xorshift   ... bench:     2,879 ns/iter (+/- 129)    = 2778 MB/s
test init_hc128         ... bench:     4,518 ns/iter (+/- 308)
test init_xorshift      ... bench:        23 ns/iter (+/- 0)

Size of Hc128Rng:   4176
Size of XorShiftRng:  16
```
You can also see the benchmarks I did on my ARM v7 phone where Xorshift is definitely faster, if you want more context. Any way you look at it though, the biggest reason not to use a cryptographic generator would seem to be the size of the PRNG; this is more significant if multiple PRNGs are used.
I think it is going to be difficult for you @vks and me to find agreement, because we look at it in a different way.
I think we can basically agree that all the known RNG algorithms are looking for the best balance between generating numbers as fast as possible, with simple code, and having the numbers look as 'random' as possible. The quest for performance is a very practical concern, and is what mostly interests @vks. The quest for quality is what has driven many of the 'big names' and developments behind RNGs, like George Marsaglia and Pierre L'Ecuyer, and is what I prefer.
As I have said before, I have nothing (well, almost nothing) against Xoroshiro128+. It is great for many situations, like generating many floating-point values. Both Xoroshiro128+ and PCG definitely have a place.
On one hand, I care quite a bit about performance (@dhardy can confirm the effort I have spent optimizing stuff in `rand` :smile:). But the raw performance of the RNGs does not seem to matter all that much in the end... I have collected a few benchmarks:
Raw performance generating `u64`:
```
test gen_u64_xoroshiro_128_plus ... bench: 1,079 ns/iter (+/- 3)  = 7414 MB/s
test gen_u64_xoroshiro_mt       ... bench: 1,236 ns/iter (+/- 24) = 6472 MB/s
test gen_u64_pcg_xsh_64_lcg     ... bench: 3,578 ns/iter (+/- 30) = 2235 MB/s
test gen_u64_pcg_xsl_128_mcg    ... bench: 1,467 ns/iter (+/- 18) = 5453 MB/s
```
Adding a tiny bit of use: converting to an `f64`:
```
test gen_f64_xoroshiro_128_plus ... bench: 1,316 ns/iter (+/- 3)  = 6079 MB/s
test gen_f64_xoroshiro_mt       ... bench: 2,659 ns/iter (+/- 18) = 3008 MB/s
test gen_f64_pcg_xsh_64_lcg     ... bench: 3,852 ns/iter (+/- 2)  = 2076 MB/s
test gen_f64_pcg_xsl_128_mcg    ... bench: 1,788 ns/iter (+/- 1)  = 4474 MB/s
```
Generating an `f64` range:
```
test range_0_10_xoroshiro_128_plus ... bench: 2,442 ns/iter (+/- 15) = 3276 MB/s
test range_0_10_xoroshiro_mt       ... bench: 2,891 ns/iter (+/- 9)  = 2767 MB/s
test range_0_10_pcg_xsh_64_lcg     ... bench: 3,897 ns/iter (+/- 52) = 2052 MB/s
test range_0_10_pcg_xsl_128_mcg    ... bench: 2,857 ns/iter (+/- 26) = 2800 MB/s
```
Generating numbers in a distribution (`Normal`, using the Ziggurat algorithm):
```
test gen_normal_xoroshiro_128_plus ... bench: 5,224 ns/iter (+/- 16) = 1531 MB/s
test gen_normal_xoroshiro_mt       ... bench: 5,595 ns/iter (+/- 25) = 1429 MB/s
test gen_normal_pcg_xsh_64_lcg     ... bench: 7,344 ns/iter (+/- 32) = 1089 MB/s
test gen_normal_pcg_xsl_128_mcg    ... bench: 6,746 ns/iter (+/- 52) = 1185 MB/s
```
Generating integers in a range:
```
test gen_u64_xoroshiro_128_plus ... bench: 2,486 ns/iter (+/- 25) = 3218 MB/s
test gen_u64_xoroshiro_mt       ... bench: 3,086 ns/iter (+/- 25) = 2592 MB/s
test gen_u64_pcg_xsh_64_lcg     ... bench: 4,170 ns/iter (+/- 7)  = 1918 MB/s
test gen_u64_pcg_xsl_128_mcg    ... bench: 2,671 ns/iter (+/- 28) = 2995 MB/s
```
So converting the output of the RNG into a number you can actually use in your code often already halves the performance, and then you are not even using it meaningfully yet! Still, in these benchmarks Xoroshiro128+ is often 20% faster than PCG-XSL-128, and that is not a number to ignore. But that number only keeps getting lower when you consider the real-world surrounding code.
I think a must-read when talking about RNG quality is the paper ["Good random number generators are (not so) easy to find"](http://random.mat.sbg.ac.at/results/peter/A19final.pdf) by P. Hellekalek. The best way to make sure an RNG is suitable for many kinds of different uses is by measuring how well it performs on the RNG test suites designed for this (TestU01 and PractRand). And to be honest I don't know how all the tests they perform map to which real-world situations.
To be honest again, there is one thing that I have against Xoroshiro, and that is its design history. Sebastiano Vigna did valuable work by improving existing Xorshift variants. His method was to run the TestU01 benchmarks countless times to figure out the best constants to use in Xorshift* and Xorshift+. But this is basically optimising an RNG to pass a specific test suite. When another one came along, PractRand, it failed instantly. And when others began testing and evaluating Xorshift+ and Xoroshiro+, they turned out to pass tests less often. This indicates there are situations for which Xoroshiro128+ is not the best choice.
PCG was also designed in combination with the TestU01 test suite, but with a much stronger theoretical basis. It really strives to remove any kind of pattern. And, as is proper in testing, the author kept weakening the RNG to figure out how much headroom there is before the tests actually fail. This explains why, when the other test suite (PractRand) came along, there were no problems.
One argument to prefer better quality over performance is one I have seen in several other discussions in the Rust forums: one aspect is very easy to test and measure, the other very difficult. It is very easy to measure whether the PRNG you use is a performance bottleneck. It is very hard to test and measure whether the random numbers that get generated are as 'random' as you expect them to be. That is why I would like to see both, but make the 'safest choice' the default.
Edit: one more thing I would like to mention when it comes to quality: I have spent effort to make the float conversion code, `bool` generation, and integer range generation avoid the weaker bits of Xoroshiro128+. There were real algorithms that needed changing. But I wanted `rand` to be safe to use, as much as possible, with RNGs like Xoroshiro128+.
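To make the float-conversion point concrete, here is an illustrative sketch of mapping a raw `u64` to an `f64` in [0, 1) using only the high 53 bits; this is a reconstruction of the idea (keep the high bits, discard the weak low bits), not rand's actual code:

```rust
/// Map a raw u64 to an f64 in [0, 1) using only the HIGH 53 bits,
/// so the statistically weaker low bits of generators like
/// Xoroshiro128+ never influence the result.
fn u64_to_f64(x: u64) -> f64 {
    // f64 has 53 bits of precision (52 stored + 1 implicit),
    // so discard the low 11 bits and scale by 2^-53.
    (x >> 11) as f64 * (1.0 / (1u64 << 53) as f64)
}
```

Feeding the raw generator output through `u64_to_f64` yields evenly spaced values in [0, 1) that are independent of the quality of the low bits.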
I think you're getting off the original topic there @pitdicker. Yes, I can confirm you have done a lot of optimisation work! I didn't want to discuss which algorithm we pick here because we already have at least two threads about that — perhaps you'd like to copy your post elsewhere?
@vks are you happy to use the name `SmallRng` now? Really, small size and single-value generation are the defining characteristics of these "non-cryptographic PRNGs", and I prefer categorising by positive characteristics, not negative ones (you don't select an algorithm for an application based on which properties it doesn't have).
> Yet to be defined and not really on topic here (it's partly a compromise since you two can't seem to agree on which PRNG to make default).
I agree it is better to have just one. I don't feel strongly about the default, PCG is close enough to my personal optimum, and it is very easy for me to use another RNG.
> @vks are you happy to use the name `SmallRng` now?
Yes, I'm convinced now. It is a good name to distinguish such generators from faster ones with larger state (e.g. xorshift1024 is faster than xorshift128).
I added a tracker to the top. Anyone want to implement the first two items?
I'm interested. I'll try to get a PR done by Monday.
Should I remove the functions or just deprecate them?
Another problem is that `weak_rng()` is not equivalent to `XorShiftRng::new()`, because it uses `thread_rng()` to initialize the generator.
Just deprecate if possible; you can see several examples of deprecation.
Good point; I advised using `thread_rng()` primarily because we were using `OsRng` at the time, which can fail; we now have the same fall-back logic for `new()` though, so `XorShiftRng::new()` is as reliable — but it will be slower if initialising many instances, due to extra system calls. I'm not sure; perhaps we should just recommend using `new()` for odd use-cases but using `thread_rng` if already in use or if many instances are desired.
I replaced every `weak_rng()` with `SmallRng::from_rng(&mut thread_rng()).unwrap()`. This is a bit verbose, but it makes it more clear that the `thread_rng` can/should be reused.
Implemented by #296.
Hello there! I landed here after noticing that `random()` was deprecated. Is there a discussion about this where I can read up on the arguments in favor of removing `random()`?
If there is no more discussion about `random()`'s removal than what happened in this thread, I'd like to contest that decision! `rand::random()` was awesome! Sure, it's just a small convenience function and `thread_rng().gen()` is not much longer, but it makes a heck of a difference!
For `gen()` you still have to `use rand::Rng;`, so `rand::random()` is much shorter. And `random()` is awesome for several reasons: among them, that `thread_rng().gen()` looks like maybe the programmer wanted to specifically use the thread rng. I'd be genuinely sad if `rand::random()` were to be removed forever...
I think the main arguments for removing it were that `thread_rng().gen()` makes it obvious which components of Rand are used, and hence makes alternatives more obvious. `random()` is cool but prone to be used in sub-optimal ways — although that doesn't always matter.

Any further thoughts on this? It's not impossible to reverse the decision now — in fact, if we do, it's much better to do so before the 0.5 release.
@LukasKalbertodt Good that you tracked it down to this issue!
We mostly discussed `weak_rng()` here; `random()` did not receive much love. I guess we just considered it redundant and a bit of a strange thingy in the API.
Personally I would much rather see users of Rand use `thread_rng()`, as it makes it a bit more explicit how the random numbers seemingly come out of thin air, and that there are trade-offs involved that you may want to know about. On the other hand, providing a scripting-language-like high-level function can be nice. Anyway, I don't care strongly, t.b.h.
For a nice bit of history: `rand::random()` was added in October 2012 with this PR https://github.com/rust-lang/rust/pull/3642, based on the comment "A single `rand::random() -> uint` function would be nice too."
Thanks for reopening this issue! :)
> `random()` is cool but prone to be used in sub-optimal ways
I have to admit that I don't know a huge lot about `rand` and which RNG is good for what. Right now I can only think of one reason why `random()` might be suboptimal: the programmer doesn't need high-quality randomness and thus could use a faster RNG. Are there more situations in which `random()` is clearly suboptimal?

And in the "slow" situation, nothing really bad happens: if the speed is actually important, the programmer will notice and will presumably read the documentation of `random()` afterwards. As long as the docs clearly state this downside and explain everything, I don't really see a problem.
> Personally I would much rather see users of Rand use `thread_rng()`, as it makes it a bit more explicit how the random numbers seemingly come out of thin air.
I don't really think we gain a lot from writing `thread_rng().gen()` explicitly. The only thing made explicit here is that we are using a "thread rng". Programmers not familiar with the topic would probably not gain any important knowledge from this name alone.
> I guess we just considered it redundant and a bit of a strange thingy in the API.
But there are plenty of other functions in `rand` which are also "redundant" and more high-level than the building blocks (`Rand` and `Rng`).
And by the way: I wouldn't defend a `rand::random() -> uint` like back then. But the fact that `random()` is generic and Rust has this awesome type inference makes `random()` a really awesome thing in my eyes. In what other programming language can you write

```rust
if random() {
    println!("Heads!");
}
```

... and it just works?
That's a good argument, and I don't feel strongly about this — also, I don't believe we ever had a strong reason for removing it (as @pitdicker says, it didn't receive much love from us).
Does someone want to open a PR? It's probably as simple as removing the deprecation and updating documentation, but also see:

- `random`
Another problem with `random` is that it is slightly less efficient than reusing the RNG, making it kind of an antipattern.
@vks That can be kind of fixed transparently with a `lazy_static`. Trade-offs may not be worth it though...
This needs to stay, the rand lib is already confusing enough to use 😄
Fwiw, as a drive-by reader of this issue, `thread_rng().gen()` makes it much clearer what is happening under the hood than `random()`. I'd go for call-site clarity over a little bit of convenience any day, so I'd vote for removing `random()`.
> Right now I can only think of one reason why random() might be suboptimal: the programmer doesn't need high quality randomness and thus could use a faster RNG. Are there more situations in which random() is clearly suboptimal?

> That can be kind of fixed transparently with a lazy_static. Trade-offs may not be worth it though..
The idea is that `random` is/was nothing more than `thread_rng().gen()`. An easy optimization when you use multiple values is to do this instead:

```rust
use rand::{thread_rng, Rng};

let mut rng = thread_rng();
for _ in 0..10 {
    // type annotation added: without using the value, inference has
    // nothing to go on
    let value: u32 = rng.gen();
    /* do something... */
}
```

This way the RNG does not have to be looked up in thread-local storage on every iteration.
Maybe one argument against `random()`: how often do you really want a value that spans the entire range of an integer type? Or that is exactly between `0.0` and `1.0` for floats?

The `Rng` trait has, in my opinion, three main methods:

- `gen` to return a random value that somehow fits with the type;
- `gen_range` to sample from a specific range of values;
- `sample` for all situations where you need something slightly more complex.

I expect `gen_range` to be used at least as much as `gen`.
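As an aside on why `gen_range` is worth reaching for: unbiased range reduction is easy to get wrong. Below is a minimal rejection-sampling sketch over a hypothetical `next_u32` source; rand's real implementation is more optimised, this only illustrates the principle:

```rust
// Sample uniformly in [0, n) from a full-range u32 source.
// Naive `next_u32() % n` is biased whenever n does not divide 2^32.
// Note: with a pathological source that only returns values >= zone,
// this loops forever; a real RNG makes that astronomically unlikely.
fn sample_below(mut next_u32: impl FnMut() -> u32, n: u32) -> u32 {
    assert!(n > 0);
    // `zone` is the largest multiple of n representable in u32;
    // accepting only v < zone makes every residue class equally likely.
    let zone = u32::MAX - (u32::MAX % n);
    loop {
        let v = next_u32();
        if v < zone {
            return v % n;
        }
    }
}
```

Getting details like the rejection zone right (and fast) is exactly the kind of functionality that is hard to test and quite prone to bugs when rewritten by hand.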
A stand-alone `random()` function is convenient. But it already hides that it uses `ThreadRng`, which is useful in many situations but is also the most complex piece in Rand: it uses a cryptographic RNG, can do system calls, can even fall back to entropy collection by itself for seeding, and uses 4 KiB of thread-local storage. Could it, on top of that, also make the methods of the `Rng` trait a little less discoverable? They are there to prevent rewriting functionality (which is hard to test, and happens to be quite prone to bugs).
> This needs to stay, the rand lib is already confusing enough to use :smile:
I hope not too much :disappointed:. When I look again at the documentation, we probably should improve at least the first page to give more help for anyone who doesn't know any of the concepts. It now dives into technical parts pretty quickly...
@SirVer as a 'drive by reader', `thread_rng().gen()` looks like syntax noise with very little meaning; it's cryptic and strange-looking. Why so many abbreviations? Why is it chained like that? What do threads have to do with anything? If you're familiar with the API I'm sure it makes plenty of sense, but not everyone is familiar with every API; sometimes when we read code we just want to glance at something to know roughly what's going on... `thread_rng().gen()` is obscure af.
The main thing I'm looking for when I'm in need of randomness is:
This should be as straightforward as possible.
I would argue you create random numbers in two ways:

- quickly, without ceremony: `random()` is really nice for this.
- by taking an `Rng` argument and punting the issue to the user of my API.

I'm completely lost when it comes to the deprecated `Rand` trait. I've no idea how to make my structs usable with an rng. `Rand` was pretty straightforward, but now I just don't know.
Sorry, that was sort of off-topic. Perhaps, following @banister, some bikeshedding: perhaps it's better to use the word `random` in place of the `gen` abbreviation. E.g. `thread_rng().random()` kind of looks ok.
I think it's nice to have this convenience function: it is easier to take in `random()` than `thread_rng().gen()` when reading, and it's better to keep non-performance-critical code more readable. So I'd vote in favor of keeping it :)
@pitdicker yeah :( unfortunately the documentation for `rand` is some of the worst in the Rust ecosystem when you factor in how central this crate is, and how quickly a newbie will reach for it when they want to write some demo Rust programs. Which is incidentally exactly why the `random` function should absolutely stay in the crate.
Yes, I'm perfectly fine with `rand::random()` being the obvious way to make a random number, just as long as the docs say that it is simply sugar for `thread_rng().gen()` and point users to things like `thread_rng().gen_range(0, 10)`.
> This needs to stay, the rand lib is already confusing enough to use :smile:
The problem is that most of the documentation is written by people who already have some understanding of random number generators and Rust, without much feedback from those with less background. If you have some feedback on how to make the documentation clearer / more to-the-point and less confusing, then please open an issue and let us know.
> A trait I can implement for my own types in order to create random instances of my type.
This is a different topic, but hopefully this will get you started. We seem to have lost the example implementing this for your own type unfortunately. The reason this isn't such a simple concept is — because it's not. A little background in probability theory will tell you that "in order to create random instances of my type" is a woefully inadequate specification — e.g. if you were to pick a "random year" you would have to at least pick upper and lower bounds, and might consider some years more "interesting" than others :laughing:
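To make the "random year" point concrete, here is a hypothetical weighted pick (not a rand API; names invented for illustration): given per-year weights and a value `u` drawn uniformly below the total weight, walk the cumulative sum:

```rust
// Pick a year according to integer weights. `u` must be drawn
// uniformly from [0, total_weight); a year with weight w is then
// returned with probability w / total_weight.
fn weighted_pick(years: &[(i32, u32)], u: u32) -> i32 {
    let mut acc = 0u32;
    for &(year, weight) in years {
        acc += weight;
        if u < acc {
            return year;
        }
    }
    panic!("u must be below the total weight");
}
```

Even this toy shows the specification burden: you had to choose the candidate years *and* their weights before "a random year" meant anything.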
I'm taking another try at the documentation. Sorry @dhardy, as you already put a lot of effort into it. After every previous improvement there always seems to be room for some more...
I posted a link to this thread on Reddit to get a few more people involved... I didn't quite expect this number of people ;-)
EDIT: This benchmark is probably pretty flawed and its results pretty useless. You should probably just skip the rest of my comment ;-)
To get some hard data, I wrote a small benchmark:

- `random_*` just calls `rand::random()`.
- `local_thread_rng_*` stores a `thread_rng()` as a local variable and then calls `gen()`.
- `local_std_rng_*` stores a `StdRng::from_rng(EntropyRng::new()).unwrap()` as a local variable and then calls `gen()`.
`StdRng` doesn't reseed itself automatically, so it is not equivalent to the other two versions. However, even the docs mention that "reseeding is done as an extra precaution against entropy leaks and is in theory unnecessary", so it doesn't seem like one would lose any real quality of the randomness.

The results (generating `u8`, `u32`, `f64` and `[u64; 8]`):
```
test local_std_rng_u8       ... bench:  2 ns/iter (+/- 0)
test local_std_rng_u32      ... bench:  2 ns/iter (+/- 0)
test local_std_rng_f64      ... bench:  4 ns/iter (+/- 0)
test local_std_rng_u64x8    ... bench: 32 ns/iter (+/- 2)
test local_thread_rng_u8    ... bench:  3 ns/iter (+/- 0)
test local_thread_rng_u32   ... bench:  3 ns/iter (+/- 0)
test local_thread_rng_f64   ... bench:  5 ns/iter (+/- 0)
test local_thread_rng_u64x8 ... bench: 36 ns/iter (+/- 2)
test random_u8              ... bench:  7 ns/iter (+/- 0)
test random_u32             ... bench:  7 ns/iter (+/- 0)
test random_f64             ... bench:  8 ns/iter (+/- 0)
test random_u64x8           ... bench: 35 ns/iter (+/- 6)
```
Honestly, I'm quite surprised how freaking fast the PRNG is. So fast, in fact, that the little extra work done by `random()` does make a noticeable difference (at least for types that are very easy to generate).
While my benchmark couldn't show what I initially wanted to show with it, I still stick to what I said before. Sure, `random()` is not perfect, and if you want random numbers "in production", you probably don't want to use `random()` directly. But again: it's fine for a big number of use cases.
And yes, the documentation of `random()` should explain its disadvantages and point to other ways of generating random numbers. But maybe `random()` is even good in this way: maybe with `random()` we can "catch" all the people who have never worked with `rand` and explain important concepts there.
A few things about the benchmark:

- the generator produces a whole block of `u32` results at a time, then serves requests from that buffer, so most calls do almost nothing
- getting a `u8` has the same cost as getting a whole `u32`
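That buffering can be sketched with a toy model (invented for illustration; HC-128's real block generation is entirely different): the core fills a whole block of `u32` words at once, and each call just reads the next word, so a `u8` request consumes exactly one buffered `u32`:

```rust
// Toy block-buffered generator. A cheap LCG step stands in for the
// real (much more expensive) core; the point is the buffering.
struct Buffered {
    state: u32,
    block: [u32; 16],
    index: usize,
}

impl Buffered {
    fn new(seed: u32) -> Buffered {
        // Start with an "empty" buffer so the first call refills it.
        Buffered { state: seed, block: [0; 16], index: 16 }
    }

    fn refill(&mut self) {
        for w in self.block.iter_mut() {
            self.state = self.state.wrapping_mul(747796405).wrapping_add(2891336453);
            *w = self.state;
        }
        self.index = 0;
    }

    fn next_u32(&mut self) -> u32 {
        if self.index == self.block.len() {
            self.refill(); // the only expensive call, once per 16 outputs
        }
        let v = self.block[self.index];
        self.index += 1;
        v
    }

    // A u8 consumes one buffered word, hence costs the same as a u32.
    fn next_u8(&mut self) -> u8 {
        self.next_u32() as u8
    }
}
```

This is why, in the numbers above, the `u8` and `u32` benchmarks land on essentially identical timings.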
fs::read_to_string will be stable as of Rust 1.26, so there’s definitely a precedent of having these sorts of functions in std.
My main use of the `rand` crate is creating mocks in tests. I love `rand::random` for this, and I'd be unhappy to see it disappear. I think it's very ergonomic, useful, and important for people who need some small amount of casual randomness and don't need to worry about benchmarking their rng usage.
I'm also concerned that it seems like `Rand` has been removed. My ideal use for random is something like this:

```rust
#[derive(Rand)]
struct MyStruct {
    ...
}

let thing: MyStruct = rand::random();
```
This is the kind of excellent API that makes me very happy to program in Rust. I do not think it is worthwhile to make me learn about all the performance trade-offs between different sources of randomness, or to consider a question of distribution, before I can create a random thing. Most of the time, I just don't care. If it matters to me, I will be self-motivated to learn about this and replace my code with the optimal choice for my use case.
@dhardy yeah, I understand my "generate a random instance of my type" is woefully underspecified, part of the problem being that I don't know any better. The times I wanted it, I really just wanted to forward the rng/distribution to my struct's members, e.g. `struct Foo { i: i32, f: f32 }`: a random instance of my type is a `Foo` instance with a random `i32` and `f32`.
Trying to stay a bit more on the topic of this issue:
Looking at C#'s Random API with a Rust twist seems quite pleasant: the function `random() -> impl Rng` returns an `Rng` seeded by the system secure rng; the returned rng does not have thread-local state. The other constructor, to provide a seed, can also work, e.g. `random_seeded(seed: &[u8]) -> impl Rng` (type of the seed argument TBD).
What I'm trying to say is, highly technical terminology and extreme flexibility are fine as long as there's some straight forward convenience methods that map the casual user's expectations.
Part of this suggestion is to drop `ThreadRng` as the default rng, instead favouring a simple rng initialized with a seed from the system's secure entropy. The specific type of rng returned is not that important, as long as it no longer depends on thread-local state.
meta: Is this too far off-topic? I'm not sure. I'm suggesting to change `random<T>() -> T` (return random instances of `T`) to `random() -> impl Rng` (return an rng). This isn't quite what this issue is about, though...
I made a PR to undeprecate `random()`, and took a stab at making the documentation a bit more friendly (at least the landing page of the API reference).
Thanks @pitdicker
@CasualX @withoutboats The discussion about deprecating `Rand` and `rand_derive` is here. We tried to get feedback; most of it ends up being about something else.
> random() -> impl Rng
Why? To me `random` sounds more like "make a random value" than "get a random value generator".
We could rename `thread_rng` → `local_rng` or `thread_local_rng` or something, but I don't see much benefit, though I realise people may find it confusing that the default randomness source is called `thread_rng`. Or we could rename to `strong_rng` or `crypto_rng` or just `rng`... but the former names are just going to make people wonder where to get randomness when they don't need cryptographic strength, and `rng` is likely to be used as a field or local variable name.
We already have another type of generator if you don't want one tied to thread-local storage: `SmallRng::from_entropy()` (similar to the old `weak_rng()`). This is fine, but there's not normally much reason to use it, hence why `thread_rng` is the default choice.
@dhardy I was looking at the C# random API, where `Random` is the class name used to generate random values, analogous to a `random()` function to construct the rng. I know it seems odd, but the `random` identifier is just too good not to use.
The approach C#'s random API takes also solves the performance issue of wrapping `thread_rng().gen()` in a function, as there would be no more wrapping, and it's clear to the user what to do next (either they read the docs after discovering `random()`, or the IDE autocompletes the methods on the returned rng).
In short, `random()` returning an `Rng` solves several issues: the API is discoverable (search 'random'), performant (no repeated calls to get the rng), and usable: `random().|` (that `|` is a cursor) will have the IDE autocomplete the methods that you want. Further, it follows an established API, giving some confidence it can work.
@CasualX yes... personally I still hate the idea of using `random` almost as a verb (to get the thing which makes random values), but your argument has a certain sense to it.
Another thing which crossed my mind is that some applications may not want to use `thread_rng` — it uses a lot of memory (relevant e.g. for embedded) and is slow to initialise (insignificant in large apps, but maybe an issue in some cases). Right now avoiding this is hard if dependency libs use it. We could however add a feature flag to the Rand crate to use different back-ends instead; e.g. ChaCha uses much less memory and is much faster to initialise, but has slower generation, while `SmallRng` (and several other RNGs) are small and fast, but not secure. With this in mind we could instead have two functions:

- `strong_rng()`: normally `StdRng` (`Hc128Rng`) but optionally `ChaChaRng` instead
- `weak_rng()`: normally also `StdRng` (the same instance) but optionally `SmallRng` instead

These would replace `thread_rng` but have the same implementation (thread-local, auto-reseeding, ...). Is it a problem that `thread_rng` is already widely used?

@dhardy I think if there are very specific performance requirements, the RNG should be explicitly chosen by the user. Why add so much magic? If we want to have the thread-local auto-reseeding for more RNGs, why not provide it in a generic way?
Libraries should probably never assume any RNG and just consume a `&mut Rng` given by the user (i.e. the application).
> Why add so much magic?
Because there are roughly two main cases, and this caters for them.
> Libraries should probably never assume any RNG and just consume a &mut Rng given by the user
Many uses of an RNG are not to "create a random value" (like `Distribution::sample`) but some internal use; e.g. did you know that `HashMap::new()` uses an RNG? The only practical API, in my opinion, is a global function, and since we don't really want to use an unsafe `extern "C"` function declaration, allowing users to choose an implementation is non-trivial. Also, app developers may sometimes make poor choices (e.g. not realising that some part of their API requires an unpredictable random number).
@LukasKalbertodt If you want "quick and dirty but can't cover all the cases" look no further.
@dhardy you can now feel free to abstract and specify to your heart's content ;P
@Lokathor thanks, but I think there's still benefit in making this lib easy to use!
@dhardy Why not have an opaque wrapper type then? This is nicer than `strong_rng() -> impl Rng`, which can't be embedded in structs.
Yes, you're right. But this issue has run its course and I don't think there's any hurry to start tweaking `thread_rng` (there's no current breaking change, so let's wait until the next release).
I voted not, but I think it should be removed. The main problem would be backward compatibility. So maybe after introducing "editions"?
There's not a strong compatibility issue since we're pre-1.0. Anything below 1.0 can have any change with only a minor version bump.
Decision:

- keep the `random()` function
- replace the `weak_rng()` function with a `SmallRng` wrapper
- choose an algorithm for `SmallRng` (but off topic here, thus not a blocker for this issue)

Both the `weak_rng` and the `random` function are little helpers, but don't really do much. To be clear, in the next release users may simply:

- replace `random()` with `thread_rng().gen()`
- replace `weak_rng()` with `XorShiftRng::new()` (or a different PRNG)

The first is pure convenience (but also a minor anti-pattern, since caching the result of `thread_rng` can improve performance).

The second has a small advantage: users may specify simply that they want a fast, weak PRNG, and don't mind if the implementation changes. `XorShiftRng` is reproducible, meaning that library updates won't change its output; `weak_rng` is not reproducible, meaning the library may replace the internals with better generators in the future.

Note that currently `weak_rng()` returns `XorShiftRng`, which makes changing the former a breaking change; we should introduce a wrapper around the generator used (i.e. `fn weak_rng() -> WeakRng`) or drop the function and only use a wrapper type (`WeakRng::new()`).

Note also that `thread_rng()` is staying because it is widely used and has additional features.

We already wish to change the `weak_rng` algorithm and have two candidates (see https://github.com/dhardy/rand/issues/52 and https://github.com/dhardy/rand/issues/60), so it's possible we could have two variants, e.g. `FastRng` and `SmallRng`. This issue is not about which algorithm we choose.

Open questions:

- keep the `random()` convenience function?
- deprecate `weak_rng()`, rename it, only keep wrapper types, or remove it completely?