Closed — subzey closed this issue 4 years ago
Hm, there is definitely a market for this task. But there are also a few possible questions which could reduce market share:
++prevId + random()
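If I read the suggestion right, it might look something like this (a hypothetical sketch only; `prevId`, the step range, and the base-36 output are all assumptions, not anything from Nano ID):

```javascript
// Hypothetical sketch of the "++prevId + random()" idea: advance the
// previous numeric ID by 1 plus a random step, so IDs stay strictly
// increasing but the exact next value is harder to guess.
let prevId = 0

function nextSeqId () {
  // The step range 0..0xfffe is an arbitrary assumption for illustration.
  const step = Math.floor(Math.random() * 0xffff)
  prevId += 1 + step
  return prevId.toString(36)
}
```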
Yup, predictability is the largest showstopper. The readme advertises unpredictability, and sequential numbers are definitely a bug ...for the main entry. But, say, it could probably be okay for non-secure use.
Or maybe this feature is something this project was not designed for and should not do. If you decide this would be scope creep and close this issue, I'm totally okay with that!
Adding a random value to the previous value in a finite field is more or less just generating a new random value with extra steps (if I got your idea right). Much like the Vigenère cipher, but with zero advantage.
Or did you mean adding a value in a range that is much smaller than the entire "keyspace" set size?
The uniqueness is kept just the same way: with some very high probability (but not with certainty) the initial state of each client differs. Sure, later sequential regions may overlap, but the same can happen with fully random values as well.
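The point about adding a random value in a finite field can be demonstrated with a tiny simulation (illustrative code; `N` and the trial count are arbitrary): no matter what the previous value is, the sum modulo N is uniformly distributed, so the "previous value" contributes nothing.

```javascript
// Adding a uniform random value modulo N yields a uniform result,
// regardless of the fixed previous value.
const N = 8
const prev = 5 // any fixed previous value
const counts = new Array(N).fill(0)

for (let i = 0; i < 80000; i++) {
  const r = Math.floor(Math.random() * N) // uniform in [0, N)
  counts[(prev + r) % N]++
}
// Each of the 8 buckets ends up close to 10000 hits,
// exactly as it would for a fresh random value.
```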
I think the best option is to have a separate nanoid-seq project, as we do for non-offensive ID generation and dictionaries. We will promote nanoid-seq in the main docs.
The current implementation is that each newly generated id is completely random. Is this an implied requirement, or is it something that can be dropped?
These are unique ids.
These are also unique IDs. They're not random, but still unique. Sequentially incrementing IDs with a random initial state like that can be used as an X-Idempotency-Key, as a distributed DB key, or... maybe in most cases where Nano ID is used. The code for generating IDs this way can be small and fast, as it doesn't need to deal with the entropy pool too much. See the example