Open longlong354 opened 3 months ago
This appears to have been a deliberate change back in 2012. Maybe @kluever remembers the rationale.
Thanks for the reply. My fault, I didn't express the main point clearly enough. The main recommendation is to use optimalNumOfHashFunctions(double p) instead of optimalNumOfHashFunctions(long n, long m).
Hey @longlong354,
So you just want (double p), rather than (long n, long m), as the argument for the Bloom filter?
The situation now is roughly that we start from $n$, the expected number of entries, and $p$, the desired false positive probability, and we derive $m$, the optimal number of bits, as
$$ m = \lfloor {-n\ \ln\ p } / { (\ln\ 2)^2} \rfloor $$
Then we further derive $k$, the optimal number of hash functions, as
$$ k = (m / n)\ \ln\ 2 \approx {-n\ \ln\ p } / { \ln\ 2} $$
rounded to an integer. You are proposing instead
$$ k = { -\ln\ p } / { \ln\ 2 } $$
removing the factor of $n$. That's a different number. Are you saying that it's more accurate? Could you explain why?
The derivation of the formula
$$ k = (m / n)\ \ln\ 2 \approx {-n\ \ln\ p } / { \ln\ 2} $$
is inaccurate; it should be
$$ k = (m / n)\ \ln\ 2 \approx {-\ln\ p } / { \ln\ 2} $$
Please note that the 'n' in the numerator cancels out with the 'n' in $m$ (since $m = \lfloor {-n\ \ln\ p } / { (\ln\ 2)^2} \rfloor$).
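Writing out the substitution makes the cancellation explicit (the $\approx$ only accounts for the floor in the definition of $m$):

$$ k = \frac{m}{n}\,\ln 2 \approx \frac{-n\,\ln p}{n\,(\ln 2)^2}\,\ln 2 = \frac{-\ln p}{\ln 2} $$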
PS: see https://en.wikipedia.org/wiki/Bloom_filter#Optimal_number_of_hash_functions; it can also be verified with https://krisives.github.io/bloom-calculator/ that n does not affect k.
> Hey @longlong354,
> So you just want (double p), rather than (long n, long m), as the argument for the Bloom filter?

yep~
Hi, I would like to take on this issue. I propose optimizing the optimalNumOfBits and optimalNumOfHashFunctions methods as follows:

1. Pre-calculate log(2) and log(2)^2 as constants, to avoid recalculating these in each method call, which will slightly improve performance.
2. Update optimalNumOfBits to handle a false positive rate of 0 by setting it to Double.MIN_VALUE, and to use the pre-calculated values for efficiency.
3. Update optimalNumOfHashFunctions to calculate the number of hash functions directly from the false positive rate p, as this is the only factor affecting the result, rather than from the number of elements n and the number of bits m.

These changes will clarify the logic and improve performance. Please let me know if I can proceed with these modifications!
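As a quick, throwaway illustration (using plain java.lang.Math rather than Guava internals, and only the formulas quoted earlier in the thread; the class name is just for this example), here is a check that the k derived from n and m agrees with the k derived from p alone:

```java
public class OptimalKCheck {
  public static void main(String[] args) {
    double p = 0.03; // desired false positive probability
    for (long n : new long[] {1_000L, 1_000_000L, 1_000_000_000L}) {
      // m = -n * ln(p) / (ln 2)^2, as in the m-formula quoted above
      long m = (long) (-n * Math.log(p) / (Math.log(2) * Math.log(2)));
      // k from n and m: k = (m / n) * ln 2, rounded, at least 1
      int kFromNAndM = Math.max(1, (int) Math.round((double) m / n * Math.log(2)));
      // k from p alone, as proposed: k = -ln(p) / ln 2, rounded, at least 1
      int kFromP = Math.max(1, (int) Math.round(-Math.log(p) / Math.log(2)));
      System.out.printf("n=%d  m=%d  k(n,m)=%d  k(p)=%d%n", n, m, kFromNAndM, kFromP);
    }
  }
}
```

For p = 0.03 both columns print 5 for every n, which is the behavior the proposal relies on.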
Sorry, I lost track of this issue.
@longlong354 is right that the algebra in my earlier comment was incorrect, and it does seem as if rewriting the code as suggested would make sense.
I still don't know why the code was rewritten in 2012 to remove the pre-calculation of $\ln\ 2$ and $\ln^2\ 2$. I expect that the JIT compiler is able to inline these constants, but we could reasonably do it explicitly and save it the work.
I think we would accept a PR along the lines of the original comment. If @longlong354 wants to send that PR I think that would make the most sense, and otherwise @Romain-E. The only non-obvious thing I see is how to adjust the corresponding tests in BloomFilterTest. I think the second test method is no longer relevant, but I'm not sure about the first.
I concur that the second test (testOptimalNumOfHashFunctionsRounding) no longer applies with the new approach and should probably be removed. The first test (testOptimalHashes) could be adjusted to work with the updated method by testing different values of p rather than of n and m. Alternatively, we could add a method that derives p from n and m and keep the test in a slightly modified form (see the sketch below).
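A minimal sketch of that alternative, with a hypothetical helper name (not an existing Guava method): inverting $m = \lfloor {-n\ \ln\ p } / { (\ln\ 2)^2} \rfloor$ gives $p \approx e^{-(m/n)\,(\ln 2)^2}$, so existing (n, m) test cases could be converted to p values like this:

```java
// Hypothetical test helper (name and placement are illustrative only):
// recover the false positive probability p implied by a given (n, m) pair,
// by inverting m = -n * ln(p) / (ln 2)^2.
static double impliedFalsePositiveProbability(long n, long m) {
  double logTwo = Math.log(2);
  return Math.exp(-((double) m / n) * logTwo * logTwo);
}
```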
Let me know if this direction works for you, and I’d be happy to proceed with the PR.
Thank you all for the discussion. Please proceed, @Romain-E; thanks in advance.
Hi! I hope this message finds you well! I wanted to check in regarding PR #7346. I would be grateful if you could take a look whenever time allows, as I am eager to understand any feedback you might have to improve this optimization further.
Thanks again for your time and consideration, and please let me know if there's anything specific I should adjust.
Best regards
Looks perfect. I benefited a great deal from the changes around max(). I had almost forgotten about Java's static import feature, thanks a lot~
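For anyone unfamiliar with the feature mentioned here, a static import along these lines lets code call max(1, ...) without the Math. prefix:

```java
// Standard Java static import: allows writing max(1, k) instead of Math.max(1, k).
import static java.lang.Math.max;
```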
Hello! Do I need to change anything after this commit, or is it good? I've never worked on this project before.
API(s)
com.google.common.hash.BloomFilter::optimalNumOfBits(long n, double p) and com.google.common.hash.BloomFilter::optimalNumOfHashFunctions(long n, long m)
How do you want it to be improved?
1. Use statically calculated values of log(2) and squared log(2).
2. Calculate optimalNumOfBits using those static values.
3. Calculate optimalNumOfHashFunctions directly from the false positive rate (p), using LOG_TWO (all three points are sketched below).
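A minimal sketch of the three points above, assuming illustrative constant names (LOG_TWO, SQUARED_LOG_TWO) rather than the exact code in the eventual PR:

```java
// Illustrative sketch only; exact names and rounding may differ from the actual change.
private static final double LOG_TWO = Math.log(2);
private static final double SQUARED_LOG_TWO = LOG_TWO * LOG_TWO;

static long optimalNumOfBits(long n, double p) {
  if (p == 0) {
    p = Double.MIN_VALUE; // avoid log(0) when a zero false positive rate is requested
  }
  // m = -n * ln(p) / (ln 2)^2, using the pre-calculated constant
  return (long) (-n * Math.log(p) / SQUARED_LOG_TWO);
}

static int optimalNumOfHashFunctions(double p) {
  // k = -ln(p) / ln(2); n and m cancel out, so only p matters
  return Math.max(1, (int) Math.round(-Math.log(p) / LOG_TWO));
}
```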
Why do we need it to be improved?
Example
Current Behavior
The current source of com.google.common.hash.BloomFilter::optimalNumOfBits(long n, double p) and com.google.common.hash.BloomFilter::optimalNumOfHashFunctions(long n, long m).
Desired Behavior
As given in "How do you want it to be improved" above.
Concrete Use Cases
as "How do you want it to be improved"
Checklist
[X] I agree to follow the code of conduct.
[X] I have read and understood the contribution guidelines.
[X] I have read and understood Guava's philosophy, and I strongly believe that this proposal aligns with it.