Research into the current algorithm and how we can adjust it to return a response that is non-binary (match vs non-match). Customers have asked for changes so they can specify a fuzzy area in the matching logic, in which it might be a match and needs to be manually reviewed.
Acceptance Criteria
[ ] A document that describes how we propose to change the algorithm to meet the above requirements.
Details / Tasks
There are quite a few places in the current algorithm that we could adjust to return the concept of "might be a match" to the user. For instance:
If only some passes result in a match, but not all
If the cluster ratio is in a range
If the matching rule is in a range
If only some of the evaluators return a positive result, but not all
By specifying some of our thresholds as ranges
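The "only some of the evaluators" case above could be sketched as a three-state aggregation. This is a minimal illustration only; the function name and the string outcomes are hypothetical, not part of the current algorithm:

```python
def aggregate_evaluators(results: list[bool]) -> str:
    """Combine per-evaluator booleans into a three-state outcome.

    All positive -> "match"; none positive -> "non_match";
    a partial set of positives -> "possible_match", flagged for
    manual review.
    """
    if not results:
        raise ValueError("at least one evaluator result is required")
    if all(results):
        return "match"
    if any(results):
        return "possible_match"
    return "non_match"
```

The same unanimous/partial/none pattern would apply to the "only some passes" case.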
Because there are so many places in the algorithm that could be adjusted to indicate that we're not positive it's a match (but it might be), we need to make a recommendation on where the best place(s) to do so are, and make it clear through the definition of the algorithm configuration where that adjustment should be made. Alternatively, it might be possible to return some sort of confidence score to the caller indicating how likely we think a match is, and let the customer decide whether a manual review should take place. There are lots of options, so we should do some research and determine where the best place to make the algorithm changes is.
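The range-threshold / confidence-score option could look like the sketch below. The threshold values, the `classify` name, and the outcome labels are illustrative assumptions, not the real algorithm configuration:

```python
from enum import Enum

class MatchOutcome(Enum):
    MATCH = "match"
    POSSIBLE_MATCH = "possible_match"  # falls in the fuzzy band; needs manual review
    NON_MATCH = "non_match"

def classify(score: float, lower: float = 0.6, upper: float = 0.85) -> MatchOutcome:
    """Map a similarity/confidence score to a three-state outcome.

    Scores at or above `upper` are confident matches, scores below
    `lower` are confident non-matches, and anything in between lands
    in the customer-specified fuzzy band for manual review.
    """
    if score >= upper:
        return MatchOutcome.MATCH
    if score >= lower:
        return MatchOutcome.POSSIBLE_MATCH
    return MatchOutcome.NON_MATCH
```

If we instead return the raw score, the `lower`/`upper` band would simply move out of our code and into the customer's hands.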