utterances-bot opened 4 years ago
Thanks for the article, Marc!
I was a bit surprised to hear about the "standard method". As I understand it, this means that most values in [0, 1) can never be sampled at all. That is, all of the values in [0.5, 1) can be sampled, but only half of those in [0.25, 0.5), a quarter of those in ... and so on. Is that right?
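To pin down what I mean, here's a minimal sketch of that "standard method" as I understand it, the usual 53-bit scaling scheme (`standard_u01` is just my name for it, not from the post):

```python
import random

def standard_u01():
    # "Standard method" (a sketch): scale a 53-bit integer by 2**-53,
    # so every sample is an equidistant multiple of 2**-53.
    # In [0.5, 1) doubles are spaced 2**-53 apart, so all are reachable;
    # in [0.25, 0.5) the spacing is 2**-54, so only every other one; etc.
    return random.getrandbits(53) * 2.0**-53
```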
Is there a name for the "other" type of sampling, the one I was assuming was standard, that allows every value to be sampled, even though it means that possible samples are not equidistant?
It is not entirely obvious to me how this introduces bias, because unlike in your linked article about mapping N samples into M bins where N % M != 0, it seems the samples would land "evenly" here (once you accept non-equidistance).
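For concreteness, here's a sketch of the scheme I had in mind (names and construction are my assumption, not from the article): pick the binade geometrically with a fair coin, then fill the mantissa uniformly, so every representable double in (0, 1) is reachable, each with probability matching the width it covers.

```python
import random

def dense_u01():
    # Choose the binade [2**-(k+1), 2**-k) with probability 2**-(k+1):
    # flip fair coins until heads; k = number of tails seen.
    # (The cap keeps the scale factor above the subnormal floor.)
    k = 0
    while random.getrandbits(1) == 0 and k < 1074:
        k += 1
    # Uniform 52-bit mantissa within that binade.
    m = random.getrandbits(52)
    return (1.0 + m * 2.0**-52) * 2.0**-(k + 1)
```

Samples are no longer equidistant, but the overall distribution on (0, 1) is still uniform.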
Seems like I can't delete my comment, but in any case you've already answered my question years ago. Not the first time that's happened.
LOL. The trick with "dense" style samplings, other than being significantly more expensive (relatively speaking), is that they're fragile. Naively remapping to [-1,1) by 2u-1 doesn't work out, and you need a specialized method to hit that range (set the sign bit with p=0.5), but then we'd double the number of samples. That post is mostly intended to be conceptual, for carrying through like in the log-of-a-uniform post.
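(A sketch of that sign-bit trick, in case it's unclear; the helper name is made up: take any sample in [0,1) and copy a fair random bit into the IEEE-754 sign bit, mirroring the sample set into (-1,1) rather than stretching it the way 2u-1 would.)

```python
import random
import struct

def set_sign_randomly(u):
    # Copy a fair random bit into the IEEE-754 sign bit: u -> +/-u,
    # each with p = 0.5. The sample set of [0,1) is mirrored, not
    # rescaled like with 2u-1, so the number of samples doubles
    # (and you pick up both +0.0 and -0.0).
    bits = struct.unpack('<Q', struct.pack('<d', u))[0]
    bits |= random.getrandbits(1) << 63
    return struct.unpack('<d', struct.pack('<Q', bits))[0]
```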
Basic uniform random floating-point values
Micropost on uniform float generation on $\left[0,1\right)$, $\left(0,1\right]$ and $\left[-1,1\right)$.
https://marc-b-reynolds.github.io/math/2020/06/16/UniformFloat.html