fidelity / mabwiser

[IJAIT 2021] MABWiser: Contextual Multi-Armed Bandits Library
https://fidelity.github.io/mabwiser/
Apache License 2.0

Predict and Predict_expectation difference in results #38

Closed · mrStasSmirnoff closed this issue 2 years ago

mrStasSmirnoff commented 2 years ago

Hello, first of all I would like to say thank you for your work on this package, which considerably simplifies working with contextual bandits! I was looking at the package for a pet project but had a hard time understanding the returned results. In the attached screenshot you can see that the first 5 values returned by the predict method do not match the arm with the maximum expectation for the first 5 rows, which I would expect them to. Could you please explain the logic here?

[Screenshot attached: 2022-02-10 at 15:00:55]
skadio commented 2 years ago

Hi @mrStasSmirnoff

Thank you for sharing the positive feedback -- we are glad to hear the library is helpful for your use case!

The behavior changes depending on the Learning Policy (and, as in your case, its combination with the Neighborhood Policy). The behavior is not always "pick the max"; that strategy would be 100% exploitation. Some learning policies allow exploration, which is where the bandit literature comes into the picture.

In the simplest case, a random learning policy will return 5 random predictions: pure, 100% exploration.

In the case you shared, Thompson Sampling is the culprit: its exploration gives arms that are not the best a chance to be selected. As a result, you can see some arms selected that do not have the highest expectation. By the way, this seemingly random behavior is deterministic when using the same seed.
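
A minimal sketch of this behavior (the data below is made up for illustration and is not the data from your screenshot), assuming a Thompson Sampling + Radius setup via the standard MAB interface:

```python
from mabwiser.mab import MAB, LearningPolicy, NeighborhoodPolicy

# Toy data: binary rewards, since Thompson Sampling without a binarizer expects 0/1
decisions = ['arm1', 'arm1', 'arm2', 'arm2', 'arm1', 'arm2']
rewards   = [1, 1, 0, 1, 1, 0]
contexts  = [[0, 1], [1, 1], [0, 0], [1, 0], [0, 1], [1, 0]]

mab = MAB(arms=['arm1', 'arm2'],
          learning_policy=LearningPolicy.ThompsonSampling(),
          neighborhood_policy=NeighborhoodPolicy.Radius(radius=2),
          seed=123456)
mab.fit(decisions, rewards, contexts)

test_contexts = [[0, 1], [1, 0]]

# Sampled decisions: an arm that is not the current argmax may be picked (exploration)
print(mab.predict(test_contexts))

# Per-row, per-arm expectations; the predicted arm need not be the max of these
print(mab.predict_expectations(test_contexts))

# Re-creating and re-fitting the bandit with the same seed reproduces
# the exact same predictions, so the behavior is deterministic.
```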

If you change the example to use Epsilon Greedy (instead of TS) and set the epsilon parameter to zero, the results should give you the arm with the maximum expectation at all times. Hope this helps!
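
To make the check concrete, here is a small, self-contained sketch (again with made-up data, not the screenshot's values) where epsilon set to zero makes predict() agree with the argmax of predict_expectations():

```python
from mabwiser.mab import MAB, LearningPolicy, NeighborhoodPolicy

decisions = ['arm1', 'arm1', 'arm2', 'arm2', 'arm1', 'arm2']
rewards   = [10, 12, 4, 8, 11, 6]
contexts  = [[0, 1], [1, 1], [0, 0], [1, 0], [0, 1], [1, 0]]

# epsilon=0 means no exploration: always exploit the highest expectation
mab = MAB(arms=['arm1', 'arm2'],
          learning_policy=LearningPolicy.EpsilonGreedy(epsilon=0.0),
          neighborhood_policy=NeighborhoodPolicy.Radius(radius=2),
          seed=123456)
mab.fit(decisions, rewards, contexts)

test_contexts = [[0, 1], [1, 0]]
predictions  = mab.predict(test_contexts)                # one arm per row
expectations = mab.predict_expectations(test_contexts)   # one {arm: value} dict per row

# With epsilon=0 the predicted arm should be the argmax of each expectation dict
for arm, exp in zip(predictions, expectations):
    print(arm, exp, arm == max(exp, key=exp.get))
```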

mrStasSmirnoff commented 2 years ago

Hello @skadio

Thanks for your reply and the clarification of the model's behavior! Following your suggestion, I ran the experiment with Epsilon Greedy at epsilon 0 and got the arm with the maximum expectation each time. I will gladly close the issue, but before I do, could you give me some hints on where I can get a bit more background on the listed Learning Policies and Neighborhood Policies, since not all of them are intuitive to me (e.g. the Radius policy)? It would also be interesting to look at the context features after training, i.e. which features contributed the most to the predictions. I didn't find any info or methods on that in the repo...

skadio commented 2 years ago

Thanks for your reply and the clarification of the model's behavior! Following your suggestion, I ran the experiment with Epsilon Greedy at epsilon 0 and got the arm with the maximum expectation each time.

Glad to hear that it worked as expected.

Could you give me some hints on where I can get a bit more background on the listed Learning Policies and Neighborhood Policies, since not all of them are intuitive to me (e.g. the Radius policy)?

The best place to start is our paper, which covers the background and inner workings of these policies: https://www.worldscientific.com/doi/abs/10.1142/S0218213021500214

See also the references in the README.

It would also be interesting to look at the context features after training, i.e. which features contributed the most to the predictions. I didn't find any info or methods on that in the repo...

Feature selection/importance is a topic orthogonal to bandits. You can check out the Selective library as a starting point:

https://github.com/fidelity/selective

mrStasSmirnoff commented 2 years ago

Wow, big thanks for the links; I will go through them carefully! Once again, thank you for your comprehensive responses.