rmartz / suggestion-api

API service for suggesting options based on a user's selections

Determine process for evaluating algorithm effectiveness #12

Open rmartz opened 4 years ago

rmartz commented 4 years ago

The current algorithm for suggestions is loosely inspired by Fourier transformation concepts, in that it uses the idea that complex waveforms can be described as a sum of simpler waveforms. The belief is that, by tracking correlations between user selections as a series of values for each selection involved, we can treat those values as probability graphs that can be stacked in a large number of ways, allowing us to meaningfully approximate the user input that created them.

The expectation is that, as a user makes selections, the corresponding probability graphs are stacked, which cancels noise and lets us use the merged probabilities to detect patterns from past sessions that correlate with options not supplied by the current user. We then in turn provide those options as predictions of what the user may be interested in.
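
As a rough sketch of the stacking idea (the correlation store, the option names, and the simple sum-then-average merge below are illustrative assumptions, not the service's actual implementation):

```python
from collections import defaultdict

# Hypothetical correlation store: for each selected option, a probability
# "graph" mapping other options to how strongly they co-occur with it.
# The option names and values are invented for illustration.
CORRELATIONS = {
    "hiking": {"camping": 0.8, "opera": 0.1, "kayaking": 0.6},
    "camping": {"hiking": 0.7, "opera": 0.05, "kayaking": 0.5},
}

def suggest(selections, top_n=3):
    """Stack the probability graphs of the user's selections and return
    the highest-scoring options the user has not already chosen."""
    stacked = defaultdict(float)
    for selection in selections:
        for option, probability in CORRELATIONS.get(selection, {}).items():
            if option not in selections:
                stacked[option] += probability  # stack the simple graphs

    # Average so the merged values stay comparable as more selections arrive.
    merged = {option: total / len(selections) for option, total in stacked.items()}
    return sorted(merged, key=merged.get, reverse=True)[:top_n]

print(suggest({"hiking", "camping"}))  # e.g. ['kayaking', 'opera']
```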

However, there are fair reasons to doubt the ability of this process to be predictive. Before we begin searching for ways to address these doubts and enhance the logic, we should determine a process for evaluating the algorithm's effectiveness, and even what it means for the algorithm to be effective.

Once we have this process in place, we should collect data to test our hypothesis that the algorithm is effective. If that hypothesis is rejected by the data, then we should explore alternative options such as N-term predicate correlations.
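
One candidate evaluation process, offered only as a sketch: replay past sessions, hold one selection out at a time, and measure how often the algorithm recovers it among its top suggestions. The `suggest_fn` callable and the session format are assumptions for illustration.

```python
def hold_one_out_hit_rate(sessions, suggest_fn, top_n=3):
    """Hide one selection from each past session and check whether the
    algorithm re-suggests it from the remaining selections."""
    hits, trials = 0, 0
    for session in sessions:
        for held_out in session:
            remaining = set(session) - {held_out}
            if not remaining:
                continue
            trials += 1
            if held_out in suggest_fn(remaining, top_n=top_n):
                hits += 1
    return hits / trials if trials else 0.0

# "Effective" could then be defined as beating a naive popularity baseline
# (always suggest the globally most common options) by some agreed margin.
```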

rmartz commented 4 years ago

Hypothesis: The algorithm is susceptible to an effect similar to Simpson's Paradox, where correlations that exist within subsets of the data may be reversed when examining all data together.
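
A toy illustration of the concern, with invented counts: within each subgroup the predicate correlates positively with the target, yet the pooled data shows the opposite direction.

```python
# Invented counts for two user subgroups. Each tuple is:
# (target chosen WITH predicate, sessions WITH predicate,
#  target chosen WITHOUT predicate, sessions WITHOUT predicate)
subgroups = {
    "subgroup_a": (8, 10, 30, 50),   # with: 80% vs without: 60%
    "subgroup_b": (10, 50, 1, 10),   # with: 20% vs without: 10%
}

for name, (w_hit, w_n, wo_hit, wo_n) in subgroups.items():
    print(name, f"with predicate {w_hit / w_n:.0%} > without {wo_hit / wo_n:.0%}")

# Pooled across subgroups, the direction reverses:
w_hit, w_n = 8 + 10, 10 + 50        # 18/60 = 30%
wo_hit, wo_n = 30 + 1, 50 + 10      # 31/60 ≈ 52%
print("pooled", f"with predicate {w_hit / w_n:.0%} < without {wo_hit / wo_n:.0%}")
```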

Proposal if hypothesis upheld: N-term predicate correlations, recording correlations from multiple predicates to a target option. Due to the exponential nature of enumerating all combinations of N terms, these may need to be selected stochastically with an evolutionary process for discovering subsets that have the most meaningful correlations.
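
A rough sketch of what the stochastic/evolutionary selection could look like, assuming a caller-supplied `score_fn` that measures how meaningful a predicate subset's correlation with a target is; the population sizes and mutation step are arbitrary.

```python
import random

def evolve_predicate_sets(predicates, score_fn, n_terms=3,
                          population=50, generations=20, survivors=10):
    """Stochastically search for N-term predicate subsets whose correlation
    is most meaningful (per score_fn), rather than enumerating every
    combination.

    predicates: list of predicate names
    score_fn:   callable mapping a frozenset of predicates to a fitness score
    """
    pool = [frozenset(random.sample(predicates, n_terms)) for _ in range(population)]
    for _ in range(generations):
        fittest = sorted(set(pool), key=score_fn, reverse=True)[:survivors]
        pool = list(fittest)
        while len(pool) < population:
            # Mutate a surviving subset: swap one predicate for an unused one.
            child = set(random.choice(fittest))
            child.remove(random.choice(sorted(child)))
            child.add(random.choice([p for p in predicates if p not in child]))
            pool.append(frozenset(child))
    return sorted(set(pool), key=score_fn, reverse=True)
```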

rmartz commented 4 years ago

Hypothesis: Due to the combined goals of suggesting high-likelihood options and using suggestions as a mechanism for collecting information, the algorithm is biased against collecting information for options it expects to be low-likelihood, and those options may become permanently handicapped or otherwise under-explored.

Proposal if hypothesis upheld: Record, on each correlation record, a count of the sessions that influenced that correlation. Introduce a third algorithm mode that prioritizes collecting knowledge about the world by suggesting targets for relevant correlations that have a disproportionately low count. A sketch of what that mode could look like follows.
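
This sketch assumes each correlation record carries a `count` of the sessions that influenced it; the record structure and the threshold for "disproportionately low" are assumptions, not a committed design.

```python
from statistics import median

def exploration_targets(correlation_records, max_targets=3):
    """Pick suggestion targets whose correlation records were influenced by
    disproportionately few sessions, so some suggestions are spent gathering
    knowledge rather than exploiting it.

    correlation_records: list of dicts such as
        {"target": "kayaking", "probability": 0.4, "count": 3}
    """
    counts = [record["count"] for record in correlation_records]
    if not counts:
        return []
    typical = median(counts)
    # "Disproportionately low" is defined here, arbitrarily, as fewer than
    # half the median count across the relevant correlations.
    starved = [r for r in correlation_records if r["count"] < typical / 2]
    starved.sort(key=lambda record: record["count"])
    return [record["target"] for record in starved[:max_targets]]
```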