NVIDIA-Merlin / Merlin

NVIDIA Merlin is an open source library providing end-to-end GPU-accelerated recommender systems, from feature engineering and preprocessing to training deep learning models and running inference in production.
Apache License 2.0

[QST] Help w/ exporting Retrieval Model. #1092

Open Tmoradi opened 10 months ago

Tmoradi commented 10 months ago

❓ Questions & Help

Details

My current setup is a Vertex AI Workbench instance with a Tesla T4 GPU, 120 GB RAM, and 32 vCPUs, using the nvcr.io/nvidia/merlin/merlin-tensorflow:nightly container image.

I am currently working on a retrieval-based model, and depending on the version of the dataset I use, I cannot export the query tower.

The different datasets I've been using are as follows:

- v3: various continuous and categorical features for both users and the items we want to recommend.
- v4: features of v3 + item embeddings generated by a sentence transformer.
- v5: features of v3 + item embeddings generated by a sentence transformer + multi-hot encoding of user history as implicit feedback.
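For context on the v5 feature, multi-hot encoding of a user's history just marks every item the user interacted with in a fixed-length 0/1 indicator vector. A minimal sketch in plain Python (the item vocabulary and histories here are made up for illustration, not the actual data):

```python
# Toy item vocabulary mapping item id -> column index (illustrative only).
item_vocab = {"item_a": 0, "item_b": 1, "item_c": 2, "item_d": 3}

def multi_hot(history, vocab):
    """Return a fixed-length 0/1 vector marking each item the user interacted with."""
    vec = [0] * len(vocab)
    for item in history:
        vec[vocab[item]] = 1
    return vec

print(multi_hot(["item_b", "item_d"], item_vocab))  # [0, 1, 0, 1]
```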

So when I train a model with either v4 or v5 and save the query encoder, I get the following message:

```python
query_tower = model.query_encoder
query_tower.save(...)
```

(screenshot of the error raised when saving the query tower, 2024-01-03)

When I load the query_tower and then try to turn the model into a top-k model, I get another error:

```python
query_tower = tf.keras.models.load_model(".../query_tower_based_on_v5", compile=False)
model = mm.TwoTowerModelV2(query_tower, candidate)
model.compile()
candidate_features = unique_rows_by_features(train, Tags.ITEM, Tags.ITEM_ID)
topk_model = model.to_top_k_encoder(candidate_features, k=10, batch_size=1)

(screenshot: error when trying to convert the model to a top-k encoder)
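For reference, the top-k conversion above conceptually scores each query embedding against the precomputed candidate embeddings and keeps the k highest-scoring candidates. A rough pure-Python sketch of that retrieval step (the embeddings and ids are toy values for illustration; this is not the Merlin API itself):

```python
import heapq

def top_k_candidates(query_emb, candidate_embs, k):
    """Score each candidate by dot product with the query and return the k best ids."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    scores = {cid: dot(query_emb, emb) for cid, emb in candidate_embs.items()}
    # heapq.nlargest returns the k candidate ids with the highest scores
    return heapq.nlargest(k, scores, key=scores.get)

# Toy candidate embeddings, illustrative only
candidates = {"i1": [1.0, 0.0], "i2": [0.5, 0.5], "i3": [0.0, 1.0]}
print(top_k_candidates([1.0, 0.2], candidates, k=2))  # ['i1', 'i2']
```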

Sorry for cutting the error message off, but it refers to the encoder for the query tower.

Apologies for being private with the data, but it's not public.

I hope I was able to convey the issue; any suggestions would be appreciated!

rnyak commented 10 months ago

@Tmoradi can you please use nvcr.io/nvidia/merlin/merlin-tensorflow:23.08 and then run

cd /models
git pull origin main
pip install .

and test again?

If you can give us a toy repro example, we can debug this better.

Tmoradi commented 10 months ago

Thanks for your reply! I'll get you the toy repro example ASAP.

Tmoradi commented 10 months ago

Hi @rnyak, sorry for the late reply. Here's what I've done since the last update.

I made a new notebook on Vertex AI that uses the container you mentioned, nvcr.io/nvidia/merlin/merlin-tensorflow:23.08.

I ran into the same issues when it came to exporting the query tower.

I wasn't sure this would work, but in the Workflow I tried adding Tags.CONTINUOUS to the embedding features to see if that did anything; it didn't affect the results.

I also created a toy dataset (a 10k sample of v5) and a notebook that walks through the workflow and the different errors I get. Link to repo

Tmoradi commented 9 months ago

@rnyak hello, hope you are doing well. I appreciate that you may be busy, but would it be possible to give an ETA for when you'll be able to help? In the meantime I'm going to try PyTorch/PyTorch Lightning instead to see if that helps.

rnyak commented 9 months ago

@Tmoradi please note that adding external embeddings to the TwoTower model has not been tested, and it is likely that you will run into issues. The team does not currently have bandwidth to work on that. You can look at the available unit tests (like this one) in case you can adapt them.