dbolser opened this issue 2 days ago
I think you could probably create a much faster version for your custom use case. Our shuffle assumes the items themselves have to be moved around in memory, and that data movement is actually the most expensive step.
For example, if you know the positions of the (few) true values, you can run a bijective function on just those positions and write the results into a zeroed array. You can call the bijective function we provide for power-of-two sizes; if a result falls outside your array length, feed it back in until it lands inside your range (cycle-walking). It will never collide with any other value, and the whole thing is still a valid bijection.
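Here is a rough, self-contained sketch of that idea in Rust. The Feistel rounds, the fixed keys, and the function names below are placeholders I'm making up for illustration, not the bijective function this library actually ships; the point is just the pattern: a bijection on the next power of two, cycle-walked back into `[0, n)`, applied only to the true positions.

```rust
/// Placeholder Feistel-style bijection on [0, 2^bits); `bits` must be even.
fn feistel_permute(x: u64, bits: u32, keys: &[u64]) -> u64 {
    let half = bits / 2;
    let mask = (1u64 << half) - 1;
    let (mut l, mut r) = (x >> half, x & mask);
    for &k in keys {
        // Cheap mixing round; any pseudo-random round function would do.
        let f = r.wrapping_mul(0x9E37_79B9_7F4A_7C15).wrapping_add(k) & mask;
        let new_r = l ^ f;
        l = r;
        r = new_r;
    }
    (l << half) | r
}

/// Restrict the power-of-two bijection to [0, n) by cycle-walking:
/// re-apply it until the result lands inside the range.
fn permute_in_range(x: u64, n: u64, bits: u32, keys: &[u64]) -> u64 {
    let mut y = feistel_permute(x, bits, keys);
    while y >= n {
        y = feistel_permute(y, bits, keys);
    }
    y
}

fn main() {
    let n: u64 = 100_000_000;                    // boolean vector length
    let mut bits = 64 - (n - 1).leading_zeros(); // bits of the next power of two >= n
    if bits % 2 == 1 {
        bits += 1; // a balanced Feistel network wants an even bit count
    }
    let keys = [0x1234_5678u64, 0x9ABC_DEF0, 0x0F1E_2D3C]; // per-shuffle "seed"

    // Positions of the few true values in the original vector (made up here).
    let true_positions = [3u64, 17, 9_999_999, 42_000_000];

    // Write only the permuted true positions into a zeroed output vector.
    let mut out = vec![false; n as usize];
    for &p in &true_positions {
        let q = permute_in_range(p, n, bits, &keys) as usize;
        out[q] = true;
        println!("{} -> {}", p, q);
    }
}
```

Each new shuffle then only needs a fresh set of round keys and work proportional to the number of true values, instead of touching all 100 million entries.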
Hi, if I want to shuffle a vector of length 100 million (say 1 billion times), is this a suitable approach? The reason I thought Fisher-Yates was a good choice is that I only have a few True values in an otherwise False boolean vector (anywhere from 0.001% to about 1% True). Using this, I can just shuffle the N True indexes in a single pass with Fisher-Yates...
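For context, a rough sketch of what I mean by shuffling only the True indexes: run just the first N steps of Fisher-Yates over a "virtual" 0..n array, keeping displaced entries in a hash map so the full 100-million-element array never has to be materialised. The names and the tiny stand-in RNG below are only for illustration.

```rust
use std::collections::HashMap;

/// Tiny stand-in RNG (SplitMix64); any decent RNG could be substituted.
struct SplitMix64(u64);

impl SplitMix64 {
    fn next_u64(&mut self) -> u64 {
        self.0 = self.0.wrapping_add(0x9E37_79B9_7F4A_7C15);
        let mut z = self.0;
        z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
        z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
        z ^ (z >> 31)
    }

    /// Draw in [lo, hi); plain modulo, so slightly biased, but fine for a sketch.
    fn range(&mut self, lo: u64, hi: u64) -> u64 {
        lo + self.next_u64() % (hi - lo)
    }
}

/// First `k` steps of Fisher-Yates over a virtual array 0..n, tracking only the
/// displaced entries in a HashMap. Returns k positions drawn uniformly without
/// replacement, i.e. the shuffled locations of the k True values.
fn sample_true_positions(n: u64, k: u64, rng: &mut SplitMix64) -> Vec<u64> {
    let mut displaced: HashMap<u64, u64> = HashMap::new();
    let mut picks = Vec::with_capacity(k as usize);
    for i in 0..k {
        let j = rng.range(i, n);
        // Value currently sitting in slot j (identity unless a previous swap moved it).
        let picked = *displaced.get(&j).unwrap_or(&j);
        // Fisher-Yates swap: slot j now holds whatever was in slot i.
        let at_i = *displaced.get(&i).unwrap_or(&i);
        displaced.insert(j, at_i);
        picks.push(picked);
    }
    picks
}

fn main() {
    let n = 100_000_000u64; // vector length
    let k = 1_000u64;       // number of True values (~0.001%)
    let mut rng = SplitMix64(42);
    let positions = sample_true_positions(n, k, &mut rng);
    println!("first few shuffled True positions: {:?}", &positions[..5]);
}
```

That's time and memory proportional to the number of True values per shuffle, which is why it looked attractive for the sparse case.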
Occasionally I have a vector with, say, 50% True, in which case a 'full shuffle' makes sense... but the vectors are always big 😅.
Huge thanks again,