If I'm going to have to re-implement `fuzzywuzzy`'s `WRatio` as part of this, I might as well do it in a way that exposes the internal weighting values inside `WRatio`, so that if someone wanted to, they could adjust those values. Have to think about whether there's a way to do that ~cleanly in a plugin system, though...
The scorer should make its own internal decision about multiprocessing or not, but `suggest` should have an option to coerce single-process or a specific `nproc` if the user prefers, or if the multi-detector is making a bad determination.
The scorer should have a fully inspectable/introspectable API, with a defined interface for how `suggest` will call it. It should take the most general, information-rich inputs possible, which is probably the search term and the inventory itself!
Provide the `index` and `score` flags, too? The threshold? I could see a scorer knowing enough about its own properties to be able to make a quick first pass and discard sufficiently poor matches. Might as well provide all the information possible. Have to set this up so that it's easy to add more information if something new comes up.
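One way such a defined interface could look; a minimal sketch assuming a `typing.Protocol`-style contract, with all names and the `thresh` default purely hypothetical:

```python
from typing import Protocol

from sphobjinv import Inventory


class Scorer(Protocol):
    """Hypothetical contract for how suggest would invoke a scorer plugin."""

    def __call__(
        self,
        search: str,      # the suggest search term
        inv: Inventory,   # the full inventory, not just the object names
        *,
        thresh: int = 0,  # illustrative: minimum score worth returning
    ) -> list[tuple[str, float]]:
        """Return (rst_object, score) pairs, best matches first."""
        ...
```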
A `SuggestPayload` object with the `Inventory` and the suggest parameters (index, score, thresh... others?) passed to the scorer, along with an `extra_kwargs` dict of additional arguments that can adjust the scorer behavior. Best practice... recommend that any scorer, builtin or plugged, define `Enum`s for use as the keys in `extra_kwargs`?
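A minimal sketch of what that payload might look like, assuming a dataclass; the field names and defaults are placeholders, and the example `Enum` members mirror `WRatio`'s internal `try_partial`/`unbase_scale`/`partial_scale` weighting values from `fuzzywuzzy`:

```python
import enum
from dataclasses import dataclass, field
from typing import Any

from sphobjinv import Inventory


@dataclass
class SuggestPayload:
    """Hypothetical bundle of everything suggest knows, handed to the scorer."""

    search: str          # the search term
    inv: Inventory       # the full Inventory object
    index: bool = False  # suggest's index flag
    score: bool = False  # suggest's score flag
    thresh: int = 75     # suggest's match threshold (default here is illustrative)
    extra_kwargs: dict[enum.Enum, Any] = field(default_factory=dict)


class WRatioKey(enum.Enum):
    """Example of the proposed best practice: scorer-defined Enum keys for
    extra_kwargs, so plugins can't collide on bare strings."""

    TRY_PARTIAL = enum.auto()    # mirrors WRatio's internal try_partial flag
    UNBASE_SCALE = enum.auto()   # mirrors WRatio's internal unbase_scale weight
    PARTIAL_SCALE = enum.auto()  # mirrors WRatio's internal partial_scale weight
```

A scorer would then pull its tuning values out with something like `payload.extra_kwargs.get(WRatioKey.UNBASE_SCALE, 0.95)`, falling back to its built-in defaults.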
As part of the new implementation of the multiprocessing-enabled `WRatio` scorer, perhaps add an option to 'devalue' object match scores if there's no substring match? E.g., if `search not in rst`, then adjust the score as

$$ score_{adj} = 100 \cdot \left( score_0 / 100 \right)^{1 + penalty} $$

$penalty$ would likely be a decimal less than one, in most cases.
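As a quick sanity check on the shape of that curve, a hypothetical helper (the 0.5 default penalty is just illustrative):

```python
def devalue(score: float, penalty: float = 0.5) -> float:
    """Push a 0-100 score toward zero; the endpoints 0 and 100 are unchanged.

    E.g., with penalty=0.5, a raw score of 80 adjusts to
    100 * 0.8**1.5 ~= 71.6.
    """
    return 100 * (score / 100) ** (1 + penalty)


def adjusted(search: str, rst: str, score: float, penalty: float = 0.5) -> float:
    """Apply the devaluation only when there's no substring match."""
    return score if search in rst else devalue(score, penalty)
```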
Either way, keep the legacy scorer, available under the `legacy` id... and call the new one `default`, probably.
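One way the id-to-scorer mapping could work; a sketch, with both scorer callables as placeholders:

```python
from typing import Callable


def legacy_scorer(search, inv, *, thresh=0):
    """Placeholder for the current fuzzywuzzy WRatio-backed scorer."""
    raise NotImplementedError


def new_scorer(search, inv, *, thresh=0):
    """Placeholder for the re-implemented, weight-exposing WRatio scorer."""
    raise NotImplementedError


SCORERS: dict[str, Callable] = {
    "legacy": legacy_scorer,
    "default": new_scorer,
}


def get_scorer(scorer_id: str = "default") -> Callable:
    try:
        return SCORERS[scorer_id]
    except KeyError:
        raise ValueError(f"Unknown scorer id {scorer_id!r}") from None
```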
Could consider using https://pypi.org/project/editdistance/ as a new `speedup` extra, integrating it into the new `default` scorer. Or, if it's fast enough with the `editdistance` speedup, it may be possible to avoid writing the multiprocessing-accelerated scorer entirely.
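A sketch of how the optional dependency could be wired in, using `editdistance.eval()` (the package's Levenshtein-distance function); the 0-100 normalization here is illustrative, not fuzzywuzzy's exact formula:

```python
from difflib import SequenceMatcher

try:
    import editdistance

    def similarity(a: str, b: str) -> float:
        """Fast path: normalize Levenshtein distance to a 0-100 score."""
        if not a and not b:
            return 100.0
        dist = editdistance.eval(a, b)
        return 100.0 * (1.0 - dist / max(len(a), len(b)))

except ImportError:

    def similarity(a: str, b: str) -> float:
        """Fallback path when the speedup extra isn't installed."""
        return 100.0 * SequenceMatcher(None, a, b).ratio()
```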
NOTE: With the deprecation of the `python-Levenshtein` speedup for the `suggest` functionality (see #211 & #218), identifying other methods to increase performance is a priority. This multiprocessing-based approach is the best one I've thought of so far. If anyone has another suggestion, please open a new issue to discuss it.

Very early POC on the `suggest-multiproc` branch. It suggests some speedup is possible, but it's not a slam-dunk whether it's worth it, given the sub-2s processing times for most inventories. Properly exploiting `difflib.SequenceMatcher`'s caching behavior may change this, however.

For comparison, `python-Levenshtein` is a better speedup for less internal complexity, and doesn't appear to benefit at all from multiproc.
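The caching behavior in question: per the `difflib` docs, `SequenceMatcher` computes and caches detailed information about its second sequence, so the search term can be set once via `set_seq2()` and each inventory object swapped in via `set_seq1()`. A rough sketch:

```python
from difflib import SequenceMatcher


def difflib_scores(search: str, rst_objects: list[str]) -> list[tuple[str, float]]:
    """Score every object against one search term, reusing one matcher."""
    sm = SequenceMatcher()
    sm.set_seq2(search)  # seq2 analysis is computed once and cached
    results = []
    for rst in rst_objects:
        sm.set_seq1(rst)  # only seq1 changes per comparison
        results.append((rst, 100.0 * sm.ratio()))
    return sorted(results, key=lambda pair: pair[1], reverse=True)
```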
Notes:

- Use `pool.map()`, without the context manager (see the sketch after this list)
- `multiprocessing.cpu_count()` would be a reasonable default pool size
- Expose `nproc` in `Inventory.suggest()`, likely, and in the `suggest` subparser
- If `nproc == 1`, then skip `multiprocessing` entirely
- Check at `sphobjinv` import time whether `multiprocessing` is available
- Per the `difflib` docs, implementing a bespoke scoring function directly with `difflib.SequenceMatcher` may allow significant speed gains, due to the ability to cache the `suggest` search term
- However, `difflib.SequenceMatcher` does not give good suggestions for some searches (e.g., "dataobj" in the sphobjinv inventory), compared to the default, `WRatio`-based partial-matcher in `fuzzywuzzy`
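Pulling several of these notes together, a minimal sketch of the multiproc path; the chunking strategy and the per-chunk scorer are placeholders:

```python
import multiprocessing as mp
from difflib import SequenceMatcher
from functools import partial


def _score_chunk(search: str, chunk: list[str]) -> list[tuple[str, float]]:
    """Placeholder per-chunk scorer; the real (pluggable) scorer goes here."""
    sm = SequenceMatcher()
    sm.set_seq2(search)
    out = []
    for rst in chunk:
        sm.set_seq1(rst)
        out.append((rst, 100.0 * sm.ratio()))
    return out


def suggest_scores(search, rst_objects, nproc=None):
    nproc = nproc or mp.cpu_count()  # reasonable default pool size
    if nproc == 1:
        return _score_chunk(search, rst_objects)  # skip multiprocessing entirely
    chunks = [rst_objects[i::nproc] for i in range(nproc)]
    pool = mp.Pool(nproc)  # bare Pool, no context manager, per the note above
    try:
        results = pool.map(partial(_score_chunk, search), chunks)
    finally:
        pool.close()
        pool.join()
    return [pair for chunk in results for pair in chunk]
```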