intuit / fuzzy-matcher

A Java library to determine the probability of objects being similar.
Apache License 2.0

Combine Tokenizers for better results #54

Closed · dogeweb closed this issue 2 years ago

dogeweb commented 2 years ago

Hi, I had problems using a single tokenizer for matching names. The wordSoundexEncodeTokenizer was matching two different names as equal: when I matched "Caputo", the MatchService returned "Caputo" and "Chabot" with the same score. The wordTokenizer was skipping "Nikolau" when the correct match was "Nikolaou". The triGramTokenizer was skipping "Leao" even though there was a direct match with "Rafael Leao".
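
For reference, the "Caputo"/"Chabot" collision is inherent to Soundex: both surnames reduce to the same phonetic code. A quick check with Apache Commons Codec (which I believe backs the library's Soundex tokenizer; any Soundex implementation behaves the same way):

    import org.apache.commons.codec.language.Soundex;

    public class SoundexCollision {
        public static void main(String[] args) {
            Soundex soundex = new Soundex();
            // Soundex keeps the leading letter, drops vowels and 'h', and maps
            // p/b -> 1 and t -> 3, so both names collapse to the code "C130".
            System.out.println(soundex.encode("Caputo")); // C130
            System.out.println(soundex.encode("Chabot")); // C130
        }
    }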

I found a temporary solution by concatenating the Tokenizers with a custom method:

    // Builds a tokenizer that applies every given tokenizer to the element
    // and merges all the resulting tokens into a single stream.
    @SafeVarargs
    public static <T> Function<Element<T>, Stream<Token<T>>> concatTokenizers(Function<Element<T>, Stream<Token<T>>>... funct) {
        return element -> Arrays.stream(funct).flatMap(fun -> fun.apply(element));
    }

and using it like this:

                .setTokenizerFunction(concatTokenizers(
                        TokenizerFunction.wordTokenizer(),
                        TokenizerFunction.wordSoundexEncodeTokenizer(),
                        TokenizerFunction.triGramTokenizer()
                ))
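
For completeness, here is a minimal, self-contained sketch of how this wires into a match run. It follows the Document/Element builder API shown in the project README; the CombinedTokenizerExample class, the sample names, and the nameDocument helper are mine, so treat it as an illustration rather than library documentation:

    import com.intuit.fuzzymatcher.component.MatchService;
    import com.intuit.fuzzymatcher.domain.Document;
    import com.intuit.fuzzymatcher.domain.Element;
    import com.intuit.fuzzymatcher.domain.ElementType;
    import com.intuit.fuzzymatcher.domain.Match;
    import com.intuit.fuzzymatcher.domain.Token;
    import com.intuit.fuzzymatcher.function.TokenizerFunction;

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;
    import java.util.stream.Stream;

    public class CombinedTokenizerExample {

        public static void main(String[] args) {
            List<Document> documents = Arrays.asList(
                    nameDocument("1", "Rafael Leao"),
                    nameDocument("2", "Leao"),
                    nameDocument("3", "Nikolaou"),
                    nameDocument("4", "Nikolau"));

            // applyMatchByDocId returns the matches keyed by document id.
            Map<String, List<Match<Document>>> result =
                    new MatchService().applyMatchByDocId(documents);
            result.forEach((id, matches) -> matches.forEach(System.out::println));
        }

        // Illustrative helper (not library API): builds a one-element NAME
        // document whose tokens come from all three tokenizers combined.
        private static Document nameDocument(String id, String name) {
            return new Document.Builder(id)
                    .addElement(new Element.Builder<String>()
                            .setType(ElementType.NAME)
                            .setValue(name)
                            .setTokenizerFunction(concatTokenizers(
                                    TokenizerFunction.wordTokenizer(),
                                    TokenizerFunction.wordSoundexEncodeTokenizer(),
                                    TokenizerFunction.triGramTokenizer()))
                            .createElement())
                    .createDocument();
        }

        // The helper from the top of this issue.
        @SafeVarargs
        private static <T> Function<Element<T>, Stream<Token<T>>> concatTokenizers(
                Function<Element<T>, Stream<Token<T>>>... funct) {
            return element -> Arrays.stream(funct).flatMap(fun -> fun.apply(element));
        }
    }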

I'm not sure if this is the correct approach, but I hope the function is helpful to others with the same problem. If the solution is correct, I would like to have it added to the library.

The results after using the function were as expected and all items matched correctly. As a further development I would suggest, if possible, giving each tokenizer a weight, or listing the tokenizers in order and falling back to the next one only when the previous gives no results, to prioritize exact matches over like-sounding candidates that may end up with the same score (see the sketch below).
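
To make the ordered-fallback idea concrete, a rough sketch: run the match with the most exact tokenizer first, and only rerun with a fuzzier tokenizer when nothing matched. Only MatchService, applyMatchByDocId, and the TokenizerFunction factories are real library API here; buildNameDocuments is a hypothetical helper that would rebuild the document list with the given tokenizer:

    // Coarse, whole-run version of the fallback idea. A finer-grained version
    // would rerun only the document ids missing from the first result map,
    // and a weight per tokenizer could discount the fuzzier passes.
    MatchService matchService = new MatchService();

    Map<String, List<Match<Document>>> result = matchService.applyMatchByDocId(
            buildNameDocuments(TokenizerFunction.wordTokenizer())); // hypothetical helper

    if (result.isEmpty()) {
        // No exact word matches at all: retry with the phonetic tokenizer.
        result = matchService.applyMatchByDocId(
                buildNameDocuments(TokenizerFunction.wordSoundexEncodeTokenizer()));
    }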

manishobhatia commented 2 years ago

Hi, thanks for taking an interest in this project. I like the idea of concatTokenizers, and I think it can be a useful addition.

I do want to make sure I understand the problem a little better. Here is my take on the issue you laid out:

What I understand is that in your case you probably have all these variations of data in the same element, so changing Tokenizers is not an option, but by adding all types of tokens you have a better chance of matching. Is that correct?

I do see concatTokenizers as a useful addition for such scenarios. Just a caution on performance: as your data size grows, the additional tokens will slow down the match.

In any case, I am open to adding this to the library. If you would like to open a Pull Request with some unit tests, I can have this out in our next release.

dogeweb commented 2 years ago

Thanks for the reply.

What I understand is that in your case you probably have all these variations of data in the same element, so changing Tokenizers is not an option, but by adding all types of tokens you have a better chance of matching. Is that correct?

Yes, I have a single data set where I have to match a mix of exact matches, typos, and other variations like missing spaces, so a single tokenizer was skipping some of them. Using a combination of two or three tokenizers covered all the cases in a single run.

Just a caution on performance: as your data size grows, the additional tokens will slow down the match.

I understand. I have a small set of ~200 documents matched against ~500, so I didn't consider performance. I will keep that in mind.

If you would like to open a Pull Request with some unit tests, I can have this out in our next release.

Sure, I hope to do it fairly soon.