Closed razonyang closed 1 week ago
The number of generated tags seems unrealistic, and the existing test covers the underlying problem.
> the existing test covers the underlying problem.
It doesn't seem to cover the case of comparing different numbers of taxonomy terms.
If I understand the benchmark above correctly, it seems to get slower as the number of terms grows, so maybe that should also be considered an improvable case.
Please close it if I'm wrong, but I have seen some users generate a lot of terms on their sites.
Thanks for this, but the original benchmarks are good enough. The problem with the original GetTerms code was obvious, and the benchmarks confirmed the theory.
As for running Go benchmarks, I use this tool: https://github.com/bep/gobench

It lacks some documentation, but it is very useful when comparing branches etc., as it supports benchstat and pprof under the hood.
The tag terms on PR #12611 seem too small ([a,b,c,d,e,f]), so this PR creates a large number of tags to cover more situations. However, I have only modified the test cases, since I don't have the knowledge to improve the implementation itself. The benchmark results on my laptop are as follows.
Please close this if I have misunderstood.
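For what it's worth, the "vary the number of terms" idea could be sketched as a table-driven Go benchmark using sub-benchmarks, which plays well with benchstat comparisons. Note that `genTerms` and the `sort.Strings` workload below are hypothetical stand-ins, not Hugo's actual GetTerms code:

```go
package main

import (
	"fmt"
	"sort"
	"testing"
)

// genTerms builds n synthetic taxonomy terms ("tag-0" … "tag-<n-1>").
// Hypothetical helper for illustration only.
func genTerms(n int) []string {
	terms := make([]string, n)
	for i := range terms {
		terms[i] = fmt.Sprintf("tag-%d", i)
	}
	return terms
}

// BenchmarkTerms runs one sub-benchmark per term count, so results for
// small and large taxonomies show up as separate rows in benchstat.
// sort.Strings stands in for the real work being measured.
func BenchmarkTerms(b *testing.B) {
	for _, n := range []int{6, 100, 10000} {
		b.Run(fmt.Sprintf("terms=%d", n), func(b *testing.B) {
			terms := genTerms(n)
			b.ResetTimer()
			for i := 0; i < b.N; i++ {
				work := append([]string(nil), terms...)
				sort.Strings(work)
			}
		})
	}
}
```

Run with e.g. `go test -bench=Terms -count=6` on each branch, then feed the two outputs to benchstat to compare scaling per term count.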