Closed: T-10001
Hi, these results of the novelty evaluator are normal. You can check the reference paper for more details: "Solving the apparent diversity-accuracy dilemma of recommender systems," Proceedings of the National Academy of Sciences 107.10 (2010): 4511–4515.
On Fri, Oct 13, 2017 at 5:31 PM, s3433557 notifications@github.com wrote:
I did some recommendations and found that the novelty evaluator gives quite high values compared to the other ranking metrics.
[screenshot: https://user-images.githubusercontent.com/20717627/31539755-54774b7c-b055-11e7-8d39-ecdf656a04cf.png]
Should they be this high?
Hi, there is now a new, different implementation as a pull request, but the effect is probably the same: recommenders that score poorly on the usual accuracy measures often score highly on novelty, and vice versa. Please have a look at the literature.
Your numbers seem to be OK. They are the number of bits you need to encode the information in your average result list.
There is a problem in the current evaluator: it uses the probability of an item appearing in a result list instead of the probability of it being purchased. So the current implementation is only a diversity measure, not the novelty measure given in the paper. The number range is still OK, though. Either way, this is a diversity measure and not an accuracy measure!
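The self-information idea behind these numbers can be sketched as follows. This is a minimal illustration, not LibRec's actual evaluator: the class name, the popularity map, and the toy counts are all hypothetical. An item interacted with by fewer users carries more bits (-log2 of its popularity), so rarer recommendations raise the average score:

```java
import java.util.List;
import java.util.Map;

public class NoveltySketch {

    /**
     * Mean self-information of a recommendation list, in bits.
     * popularity maps item -> number of users who interacted with it;
     * numUsers is the total user count, so popularity/numUsers is p(i).
     */
    public static double novelty(List<String> recommended,
                                 Map<String, Integer> popularity,
                                 int numUsers) {
        double bits = 0.0;
        for (String item : recommended) {
            double p = popularity.get(item) / (double) numUsers;
            // -log2(p): bits needed to encode an item of probability p
            bits += -Math.log(p) / Math.log(2);
        }
        return bits / recommended.size();
    }

    public static void main(String[] args) {
        // Toy data: item "a" is popular, "c" is rare (out of 100 users).
        Map<String, Integer> pop = Map.of("a", 50, "b", 5, "c", 1);
        // A list of popular items needs few bits on average...
        System.out.println(novelty(List.of("a", "b"), pop, 100));
        // ...while a list of rare items needs many bits.
        System.out.println(novelty(List.of("b", "c"), pop, 100));
    }
}
```

Note that the choice of probability matters, which is exactly the bug mentioned above: using the frequency of an item in recommendation lists instead of its purchase frequency turns this into a diversity measure over the recommender's own output.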
I did some recommendations (using default parameters for the algorithms and k-fold cross-validation with 10 folds) and found that the novelty evaluator gives quite high values compared to the other ranking metrics.
Should they be this high?