davidlehn opened this issue 3 years ago
These graphs make it look like the algorithm is linear or super-linear and not quadratic (which it is at much larger values). You might try a graph that shows time per element, with one view zoomed in to 16KB and a larger view out to 16MB, to show how the larger sizes go quadratic if there are a lot of carries.
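For reference, the per-element numbers such a graph needs could be gathered with a small harness along these lines (a sketch only, assuming the `encode` export shown in the base58-universal README and a Node.js runtime; sizes and iteration counts would need tuning):

```ts
// Sketch: measure encode time and time per input byte at a few sizes.
// Assumes the `encode` export documented in base58-universal's README.
import {encode} from 'base58-universal';
import {performance} from 'node:perf_hooks';

for(const size of [1024, 2048, 4096, 8192, 16384]) {
  const input = new Uint8Array(size);
  for(let i = 0; i < size; ++i) {
    input[i] = i & 0xff;
  }
  const runs = 20;
  const start = performance.now();
  for(let r = 0; r < runs; ++r) {
    encode(input);
  }
  const ms = (performance.now() - start) / runs;
  // if total time is quadratic, the per-byte figure grows roughly linearly
  console.log(`${size} bytes: ${ms.toFixed(2)} ms total, ` +
    `${(1000 * ms / size).toFixed(2)} µs/byte`);
}
```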
There is some confusion about the above graph. Some more details:
The encode and decode time per element increases as the number of input elements grows. Looking at the output from a benchmark, total time appears to grow mostly linearly, or somewhat super-linearly, up to 16k elements, so the quadratic behavior is easy to miss at these sizes.
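For context, here is a minimal sketch of the generic multiply-and-add conversion strategy this kind of base-n codec typically uses (illustrative only, not the library's exact code; alphabet mapping and leading-zero handling are omitted):

```ts
// Minimal sketch of generic multiply-and-add base conversion
// (illustrative only; alphabet mapping and leading zeros omitted).
function toBase58Digits(bytes: Uint8Array): number[] {
  // the value built so far, as little-endian base-58 digits
  const digits: number[] = [0];
  for(const byte of bytes) {
    // multiply the whole accumulated value by 256 and add the new byte;
    // this pass touches every digit produced so far
    let carry = byte;
    for(let j = 0; j < digits.length; ++j) {
      carry += digits[j] * 256;
      digits[j] = carry % 58;
      carry = Math.floor(carry / 58);
    }
    while(carry > 0) {
      digits.push(carry % 58);
      carry = Math.floor(carry / 58);
    }
  }
  return digits;
}
```

Each input byte triggers a pass over every base-58 digit produced so far (about 1.37 digits per input byte), so the total work grows on the order of n²/2 even though each individual pass is cheap.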
Possible optimizations should be explored for use cases with larger input sizes. This library uses a generic base-n decoder, so perhaps there are base58-specific optimizations. It might also be good to document this issue in the README so there are no performance surprises compared to the simpler base16 and base64 codecs.
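One direction that could be explored (purely a sketch, not a measured improvement, and not the library's code) is to build the value with native BigInt arithmetic and peel off base-58 digits by divmod; it is still quadratic in the worst case, but the per-digit work happens in native code. The standard Bitcoin base58 alphabet is assumed here:

```ts
// Sketch: base58 encode via native BigInt divmod (assumes the standard
// Bitcoin base58 alphabet; not base58-universal's implementation).
const ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';

function encodeViaBigInt(bytes: Uint8Array): string {
  // pack the bytes into one big integer
  let hex = '';
  for(const b of bytes) {
    hex += b.toString(16).padStart(2, '0');
  }
  let n = hex.length > 0 ? BigInt('0x' + hex) : 0n;

  // peel off base-58 digits, least significant first; each divmod is
  // O(size of n), so the loop is still ~quadratic overall, just with
  // native-code constants instead of per-byte JS loops
  const out: string[] = [];
  while(n > 0n) {
    out.push(ALPHABET[Number(n % 58n)]);
    n /= 58n;
  }

  // leading zero bytes encode as leading '1' characters in base58
  for(const b of bytes) {
    if(b !== 0) {
      break;
    }
    out.push('1');
  }
  return out.reverse().join('');
}
```

Whether this, word-at-a-time chunking of the input (a constant-factor improvement), or a divide-and-conquer conversion (which changes the asymptotics) actually helps would need to be checked against a benchmark like the one above.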
Some references (thanks to @msporny):

- https://stackoverflow.com/questions/28418332/computational-complexity-of-base-conversion
- https://cs.stackexchange.com/questions/21736/time-complexity-of-base-conversion
Benchmark and output:

- https://github.com/digitalbazaar/base58-universal/pull/5
- t1-16k.csv