NVIDIA-Merlin / Merlin

NVIDIA Merlin is an open source library providing end-to-end GPU-accelerated recommender systems, from feature engineering and preprocessing to training deep learning models and running inference in production.
Apache License 2.0

Quick-start for ranking with Merlin Models #915

Closed gabrielspmoreira closed 1 year ago

gabrielspmoreira commented 1 year ago

This PR is a port of PR #988, which originally lived in the models repo. It was moved here because the quick-start spans several Merlin libraries: NVTabular, models, and systems.

Fixes #916 , fixes #986 , fixes #918, fixes #680, fixes #681, fixes #666

Goals :soccer:

This PR introduces a quick-start example for preprocessing, training, evaluating, and deploying ranking models. It consists of a set of scripts and markdown documents. The example uses the TenRec dataset, but the scripts are generic and can be used with customers' own data, provided it has the right shape: positive and, optionally, negative user-item events with tabular features.
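As a rough illustration of the expected data shape, the sketch below builds a small event table with a binary target. The column names are hypothetical, not the TenRec schema:

```python
# Hypothetical sketch of the tabular event data the quick-start scripts
# expect: user-item interactions with features and a binary target.
# Column names here are illustrative only.
import pandas as pd

events = pd.DataFrame(
    {
        "user_id": [1, 1, 2, 2],
        "item_id": [10, 11, 10, 12],
        "user_age": [25, 25, 31, 31],    # example user feature
        "item_category": [3, 7, 3, 5],   # example item feature
        "click": [1, 0, 1, 0],           # positive / negative event label
    }
)

# Positive events are the clicked rows; the rest are negatives.
positives = events[events["click"] == 1]
```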

Implementation Details :construction:

Testing Details :mag:

Tasks

Implementation

Experimentation

Documentation

You can check the Quick-start for ranking documentation starting from this main page

github-actions[bot] commented 1 year ago

Documentation preview

https://nvidia-merlin.github.io/Merlin/review/pr-915

rnyak commented 1 year ago

@gabrielspmoreira One thing I think we can improve is the prediction step. I tested the script you shared with me for prediction, but it retrains the model. Is there a prediction script where the user can pass the saved model path and run the batch prediction automatically without training again? It would be better if we could provide an example code snippet showing how to do the prediction.

gabrielspmoreira commented 1 year ago

> @gabrielspmoreira One thing I think we can improve is the prediction step. I tested the script you shared with me for prediction, but it retrains the model. Is there a prediction script where the user can pass the saved model path and run the batch prediction automatically without training again? It would be better if we could provide an example code snippet showing how to do the prediction.

Indeed. Following your suggestion, I made it possible to save the trained model with --save_model_path and then run the script again with --load_model_path, providing --predict_data_path instead of --train_data_path. In that case the script loads the trained model, just performs the batch prediction, and saves the results to --predict_output_path.
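A minimal sketch of how the two modes described above can be resolved from the flags (this is not the actual quick-start script; the mode-resolution helper is hypothetical, only the flag names come from the comment above):

```python
# Sketch of the train-vs-predict dispatch implied by the flags above.
import argparse

def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("--train_data_path")
    parser.add_argument("--save_model_path")
    parser.add_argument("--load_model_path")
    parser.add_argument("--predict_data_path")
    parser.add_argument("--predict_output_path")
    return parser

def resolve_mode(args):
    # If a saved model is given and no training data, run batch
    # prediction only; otherwise train (optionally saving the model).
    if args.load_model_path and not args.train_data_path:
        return "predict"
    return "train"

# Prediction-only invocation: load the saved model, predict, save results.
predict_args = build_parser().parse_args(
    ["--load_model_path", "saved_model/",
     "--predict_data_path", "eval.parquet",
     "--predict_output_path", "preds.parquet"]
)
```

With `--train_data_path` present instead of `--load_model_path`, the same parser would put the script in training mode.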