Closed: lgabs closed this issue 6 months ago
Hi @lgabs,
As far as I am aware, different models serve different purposes and each has its own tradeoffs. While we can actually run experiments to benchmark all models, some metrics may not be applicable to some models (e.g., ranking models).
Let me know if I'm misinterpreting this.
Thanks!
That's true, but when approaching a recommendation problem it's common to test several models against several metrics and compare the tradeoffs (including training/test times, etc.). Cornac already has some examples at the very beginning of the README, and while I thought it would be nice to see a bigger comparison, like the algorithm comparison in Microsoft Recommenders, each problem and dataset will have different models achieving the best metrics.
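To make the idea concrete, here is a minimal, framework-agnostic sketch of that kind of "several models × several metrics" comparison, including training time. The toy data, the `fit_global_mean`/`fit_item_mean` baselines, and the metric functions are hypothetical stand-ins for illustration only; they are not Cornac's actual classes (Cornac's own `Experiment` API, shown in its README, is the real way to do this).

```python
import math
import time

# Hypothetical toy (user, item, rating) triples -- stand-ins for a real dataset.
TRAIN = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 2.0)]
TEST = [(0, 2, 3.0), (1, 1, 4.0)]

def fit_global_mean(train):
    """Baseline: predict the overall mean rating for every (user, item) pair."""
    mean = sum(r for _, _, r in train) / len(train)
    return lambda u, i: mean

def fit_item_mean(train):
    """Baseline: predict each item's mean rating, falling back to the global mean."""
    sums, counts = {}, {}
    for _, i, r in train:
        sums[i] = sums.get(i, 0.0) + r
        counts[i] = counts.get(i, 0) + 1
    overall = sum(r for _, _, r in train) / len(train)
    return lambda u, i: (sums[i] / counts[i]) if i in counts else overall

def mae(predict, test):
    return sum(abs(predict(u, i) - r) for u, i, r in test) / len(test)

def rmse(predict, test):
    return math.sqrt(sum((predict(u, i) - r) ** 2 for u, i, r in test) / len(test))

def benchmark(models, metrics, train, test):
    """Return one result row per model: every metric plus training time."""
    rows = []
    for name, fit in models.items():
        start = time.perf_counter()
        predict = fit(train)                     # "train" the model
        elapsed = time.perf_counter() - start
        row = {"model": name, "train_s": elapsed}
        for mname, mfn in metrics.items():       # evaluate every metric
            row[mname] = mfn(predict, test)
        rows.append(row)
    return rows

results = benchmark(
    models={"GlobalMean": fit_global_mean, "ItemMean": fit_item_mean},
    metrics={"MAE": mae, "RMSE": rmse},
    train=TRAIN, test=TEST,
)
for row in results:
    print(row)
```

Each row in the output is one line of the comparison table; swapping in more models or metrics only grows the dictionaries passed to `benchmark`, which is essentially how Cornac's `Experiment` scales to many models.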
Yes, I think it's good to have multiple benchmarks for different types of models. It's just that we have not had enough resources to prioritize this effort 😃
Description
Are there any benchmarks covering all of the models included in Cornac? I've counted 62 models so far in the README.