Closed nasirkhan closed 6 months ago
I think there are two things this library does that cause this. There may be more.
1) The block list. As entries are added to the block list, an ID that was valid at one point may contain a substring that was only blocked later, so the same numbers can end up encoding to a different ID over time.
2) To support a minimum ID length, I believe a separator character allows junk to be appended to the end of the ID, which is then ignored when decoding.
Also, for the purpose of the library, the primary key(s) would be the result of decoding the ID you get from the user anyway. So the URL would not be `?id=25`, but `?id=kO` (or `?id=kOKT9MVfb048GOitqFhH` with a minimum length of 20), with 25 being the PK value in the database.
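Point 2 above can be illustrated with a toy codec. This is NOT sqids' actual algorithm; the alphabet, separator character, and padding scheme below are assumptions made purely to show how min-length padding lets several distinct IDs decode back to the same number.

```python
# Toy codec -- NOT sqids' real algorithm -- illustrating how a separator
# plus junk padding (used to reach a minimum ID length) makes multiple
# IDs decode to the same number.
ALPHABET = "abcdefghij"  # digits 0-9 mapped to letters (toy assumption)
SEP = "z"                # separator: everything after it is ignored on decode

def encode(n, min_length=0):
    s = ""
    while True:
        s = ALPHABET[n % 10] + s
        n //= 10
        if n == 0:
            break
    if len(s) < min_length:
        # pad with the separator plus junk that decode() will ignore
        s += SEP + ALPHABET[0] * (min_length - len(s) - 1)
    return s

def decode(id_):
    core = id_.split(SEP, 1)[0]  # drop any padding after the separator
    n = 0
    for ch in core:
        n = n * 10 + ALPHABET.index(ch)
    return n

print(decode("cf"))      # 25
print(decode("cfzaaa"))  # 25 as well: the padded variant is a "duplicate"
```

Any characters after the separator are discarded, so `"cf"`, `"cfzaaa"`, and `"cfzxyz"`-style variants all decode to the same value; that is one way non-canonical duplicates arise.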
@miquelfire, thanks for taking the time to comment on this issue! I think you have not read my question, though. I did not ask why duplicates occur; I wanted to know whether there are any plans to fix this.
@vinkla,
I mentioned in the first comment that I want to use this with numeric primary keys (1, 2, 3, ...). I used the alphabet mentioned in the README/docs and found the first duplicate after 100,000. Can you recommend an alphabet that can generate 1M unique IDs?
Sounds like you had the same issue as here
> 🚧 Because of the algorithm's design, multiple IDs can decode back into the same sequence of numbers. If it's important to your design that IDs are canonical, you have to manually re-encode decoded numbers and check that the generated ID matches.
The above text is mentioned in the README file. Can you please elaborate on it a little more?
I was planning to use this with primary keys, but if multiple encoded IDs decode to the same key, it might not be useful for that. Do you have plans to fix this issue soon?
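For what it's worth, the check the README describes can be sketched in a few lines: decode the incoming ID, re-encode the result, and accept the ID only if re-encoding reproduces it exactly. The `encode`/`decode` pair below is a trivial stand-in codec (a decimal-to-letter mapping, purely an assumption for illustration), not the sqids API; the same pattern applies to any codec.

```python
# Sketch of the canonical-ID check: decode, re-encode, and accept the ID
# only if re-encoding reproduces it exactly. `encode`/`decode` are a toy
# stand-in codec, not sqids itself.
ALPHABET = "abcdefghij"  # digits 0-9 mapped to letters (toy assumption)

def encode(n):
    s = ""
    while True:
        s = ALPHABET[n % 10] + s
        n //= 10
        if n == 0:
            return s

def decode(id_):
    n = 0
    for ch in id_:
        n = n * 10 + ALPHABET.index(ch)
    return n

def lookup_canonical(id_):
    """Return the decoded primary key, or None if the ID is not canonical."""
    pk = decode(id_)
    if encode(pk) != id_:
        return None  # non-canonical duplicate: reject, 404, or redirect
    return pk
```

With this guard, a request using a non-canonical variant of an ID (e.g. one with a leading padding character) is rejected, so only one ID per primary key is ever honored.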
The previous version of this library (https://github.com/vinkla/hashids) did not mention this issue. Is this new to sqids?