@ncsilvaa thank you so much for your positive remarks, and glad to hear MABWiser is helping you in production!
You are spot on with your observation about the runtime requirement of NHood policies. Unfortunately, I don't have an immediate recommendation that can bypass this; we observe the same. There might be some implementation tricks here (caching), but I think the real issue is the quadratic computation, and in high dimensions (like the 100 features you mentioned) no amount of clever coding will get us away from that. You already mentioned dimensionality reduction, which would have been my first suggestion as well.
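To make that suggestion concrete, here is a minimal sketch, assuming scikit-learn is available and that a 100-to-10 PCA projection preserves enough of the preference signal (the arms, data, and dimensions below are made up for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from mabwiser.mab import MAB, LearningPolicy, NeighborhoodPolicy

rng = np.random.default_rng(7)
contexts = rng.normal(size=(5000, 100))             # historical 100-dim embeddings
decisions = rng.choice(["arm1", "arm2"], 5000).tolist()
rewards = rng.random(5000).tolist()

# Fit the projection once, then train the bandit in the reduced space.
pca = PCA(n_components=10).fit(contexts)
mab = MAB(arms=["arm1", "arm2"],
          learning_policy=LearningPolicy.UCB1(alpha=1),
          neighborhood_policy=NeighborhoodPolicy.KNearest(k=5))
mab.fit(decisions, rewards, pca.transform(contexts))

# At serving time, project the incoming context with the same fitted PCA.
new_context = rng.normal(size=(1, 100))
print(mab.predict_expectations(pca.transform(new_context)))
```

Each distance computation drops from 100 to 10 dimensions, although the scan over all stored contexts remains.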
The other "hack" that comes to mind is to create a hierarchy of bandits with smaller dimensions: basically a neighborhood of neighborhoods, to reduce the dimension of each distance computation. So instead of 100 dimensions at once, something like 10 times 10 across two layers, if that makes sense. A sketch of what I mean follows.
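One hedged way to read that idea in code, assuming the 100-dim embedding can be split into a coarse 10-dim routing projection and a fine 10-dim projection (both splits, plus the arms and data, are purely illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from mabwiser.mab import MAB, LearningPolicy, NeighborhoodPolicy

rng = np.random.default_rng(7)
contexts = rng.normal(size=(5000, 100))
decisions = rng.choice(["arm1", "arm2"], 5000).tolist()
rewards = rng.random(5000).tolist()

coarse, fine = contexts[:, :10], contexts[:, 10:20]   # two small layers

# Layer 1: route each context to a coarse cluster.
router = KMeans(n_clusters=10, n_init=10, random_state=7).fit(coarse)
labels = router.labels_

# Layer 2: one neighborhood bandit per cluster, on the fine projection only.
mabs = {}
for c in range(10):
    mask = labels == c
    mab = MAB(arms=["arm1", "arm2"],
              learning_policy=LearningPolicy.UCB1(alpha=1),
              neighborhood_policy=NeighborhoodPolicy.KNearest(k=5))
    mab.fit([d for d, m in zip(decisions, mask) if m],
            [r for r, m in zip(rewards, mask) if m],
            fine[mask])
    mabs[c] = mab

# Prediction only scans one cluster's (much smaller) neighborhood.
new_context = rng.normal(size=(1, 100))
cluster = router.predict(new_context[:, :10])[0]
print(mabs[cluster].predict_expectations(new_context[:, 10:20]))
```

Each layer's distance computation is now over 10 dimensions and roughly a tenth of the data.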
The principled approach would be to design an Approximate NHood policy, e.g., by taking advantage of the very successful approximate nearest neighbor libraries used in Retrieval. Some of the best performing ones are benchmarked at https://github.com/erikbern/ann-benchmarks
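To show the shape such a policy could take, here is a rough sketch using Annoy, one of the libraries covered by ann-benchmarks. The per-arm mean below stands in for whatever the learning policy would compute over the neighborhood, and everything here (arm names, k, the number of trees) is an assumption for illustration, not an existing MABWiser API:

```python
import numpy as np
from annoy import AnnoyIndex

rng = np.random.default_rng(7)
contexts = rng.normal(size=(5000, 100))
decisions = rng.choice(["arm1", "arm2"], 5000)
rewards = rng.random(5000)

# Build the index once, at fit time.
index = AnnoyIndex(100, "euclidean")
for i, vec in enumerate(contexts):
    index.add_item(i, vec)
index.build(20)  # 20 trees: more trees = better recall, slower build

def predict_expectations(context, k=50):
    # Mean observed reward per arm over the k approximate nearest neighbors.
    nbrs = index.get_nns_by_vector(context, k)
    out = {}
    for arm in ("arm1", "arm2"):
        r = rewards[nbrs][decisions[nbrs] == arm]
        out[arm] = float(r.mean()) if r.size else 0.0  # fallback for an empty neighborhood
    return out

print(predict_expectations(rng.normal(size=100)))
```

The index build is a one-time fit cost; each query is then sub-linear in the number of stored contexts, which is exactly what the exact NHood scan lacks.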
We have a very detailed write-up on how to add new bandit policies, say NeighborhoodPolicy.ApproximateNHood() in this case: https://fidelity.github.io/mabwiser/new_bandit.html. This path is well-tested by others: in the past, students have introduced new policies (e.g., the Tree-bandit came from an undergraduate student, with a bit of our help in the PR).
This would be an excellent contribution to MABWiser, but I doubt "we" will get there anytime soon (due to other priorities). From our perspective it is a top feature to add, so we'll definitely keep this in mind!
I wish I had a magic answer for you, but I hope this helps somewhat.
Let us close this for now per the comment above.
Hey everyone!
First of all, congratulations on this amazing work! The code is well-written, very clean, and super useful for both industry and academia. Well done to all of you!
I was introduced to MABWiser during my PhD and I am now using it in a production use case.
In our scenario, we are making real-time predictions according to the current context of a given user. Since we want to predict rewards for each arm, we have been using the `predict_expectations` function. The context is a 100-dimensional embedding that represents the user's preferences. However, we noticed that using a `NeighborhoodPolicy` with this contextual representation makes prediction take too long. In some cases, depending on the bandit algorithm, it takes 20 seconds to make a single prediction. The same scenario, without the neighborhood policy, predicts in 0.002 seconds.

It seems that the neighborhood policy computes the similarity of the current context to all the others in the model during prediction. We tried the LSH policy and defined some hash tables to make the process faster, but it is still not suitable. Ideally, to keep our predictions real-time, a single prediction should not take longer than 0.05 seconds.
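For reference, a minimal reproduction of the kind of setup described above (hypothetical arms and data; the exact policy we use differs, and timings will vary by machine):

```python
import time
import numpy as np
from mabwiser.mab import MAB, LearningPolicy, NeighborhoodPolicy

rng = np.random.default_rng(7)
contexts = rng.normal(size=(50000, 100))            # 100-dim user embeddings
decisions = rng.choice(["arm1", "arm2"], 50000).tolist()
rewards = rng.random(50000).tolist()

mab = MAB(arms=["arm1", "arm2"],
          learning_policy=LearningPolicy.UCB1(alpha=1),
          neighborhood_policy=NeighborhoodPolicy.KNearest(k=10))
mab.fit(decisions, rewards, contexts)

new_context = rng.normal(size=(1, 100))
start = time.perf_counter()
mab.predict_expectations(new_context)               # distances to all 50k stored contexts
print(f"{time.perf_counter() - start:.3f}s per prediction")
```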
Have you experienced this issue before? Honestly, we only noticed it when we increased the number of dimensions used to represent each context. So far, we are thinking of applying some technique (like PCA, SVD, etc.) to reduce the number of dimensions. But maybe you have other suggestions to work around this issue. What about some `lru_cache` to store similarities that were already computed before? A sketch of what we have in mind follows at the end of this post.

Thanks in advance.
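Here is the caching idea we were imagining: a sketch assuming contexts arrive as hashable tuples, with the caveat that it only pays off when the exact same embedding recurs, which may be rare for continuous vectors:

```python
from functools import lru_cache

import numpy as np

stored_contexts = np.random.default_rng(7).normal(size=(5000, 100))

@lru_cache(maxsize=10_000)
def distances_to_stored(context_key: tuple) -> np.ndarray:
    # Cache the full distance row for a previously seen context.
    ctx = np.asarray(context_key)
    return np.linalg.norm(stored_contexts - ctx, axis=1)

# A repeated query with an identical embedding hits the cache;
# a change in even one dimension is a miss.
query = tuple(np.zeros(100))
distances_to_stored(query)  # computed
distances_to_stored(query)  # served from cache
```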