massquantity / LibRecommender

Versatile End-to-End Recommender System
https://librecommender.readthedocs.io/
MIT License
359 stars 62 forks

Multi-model inference #500

Open budbuddy opened 1 month ago

budbuddy commented 1 month ago

Hello,

One of the main use cases when deploying real-world models is to train several "expert" models on different subtasks of the main inference task, then use a linear combination of them at inference time.

Example:

Main model = a * model1 + b * model2

Here the task is separated into two subtasks, and the model used to give recommendations is a linear combination of the two.

Do you have any plans to support this in the library, or should it be done case by case for each project? I'm not even sure there's a clean way to do multi-model inference, especially since the infer functions of the different model types you offer can work in very different ways. For example, I think it's very easy for Two Tower: you can pretty much just add the models together and get the expected output. But things get a bit trickier if you add a model like DIN to the mix.
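For what it's worth, here's a minimal sketch of the kind of blending I mean. It assumes each model can be wrapped in a plain scoring callable (e.g. something like a `model.predict(user, item)` call under the hood); the function names and weights are hypothetical, not part of LibRecommender's API.

```python
from typing import Callable, List, Tuple

def blended_topk(
    pairs: List[Tuple[int, int]],
    score_a: Callable[[int, int], float],  # hypothetical wrapper around model1's scoring
    score_b: Callable[[int, int], float],  # hypothetical wrapper around model2's scoring
    a: float = 0.7,
    b: float = 0.3,
    k: int = 10,
) -> List[Tuple[int, int, float]]:
    """Score each (user, item) pair with a * score_a + b * score_b
    and return the top-k pairs by blended score."""
    scored = [(u, i, a * score_a(u, i) + b * score_b(u, i)) for u, i in pairs]
    return sorted(scored, key=lambda t: t[2], reverse=True)[:k]
```

This only works cleanly when both models emit scores on comparable scales, which is exactly where models like DIN complicate things.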

massquantity commented 4 weeks ago

Hi, I wouldn't recommend doing multi-model inference with the models in this library as they are primarily designed for single tasks.

I'm planning a major upgrade of the whole library, which aims to convert all models to a PyTorch implementation, since the tf1 syntax used now is outdated. Support for multi-task models will also be included, but the new version won't be released for at least a few more months.

budbuddy commented 4 weeks ago

That's good to hear; personally, I much prefer PyTorch syntax over TensorFlow's. Thank you for the update.