adhiiisetiawan opened this issue 9 months ago
@adhiiisetiawan
Hello, regarding testing the trained model, I suggest you refer to the `run_recbole` function. After using `torch.load()` to load the model parameters, run the test with `trainer.evaluate`.
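As a minimal sketch of that flow, assuming RecBole's `load_data_and_model` and `get_trainer` helpers are available in your version (the checkpoint path is hypothetical):

```python
# Sketch: evaluate a saved RecBole checkpoint on the test split.
# The checkpoint path passed in is hypothetical.

def evaluate_checkpoint(checkpoint_path):
    from recbole.quick_start import load_data_and_model
    from recbole.utils import get_trainer

    # Rebuilds config, model, dataset, and dataloaders from the saved .pth file
    config, model, dataset, train_data, valid_data, test_data = load_data_and_model(
        model_file=checkpoint_path
    )
    trainer = get_trainer(config["MODEL_TYPE"], config["model"])(config, model)
    # The weights are already loaded above, so skip reloading the best model
    return trainer.evaluate(test_data, load_best_model=False)

# Usage (hypothetical path):
# result = evaluate_checkpoint("saved/NeuMF-....pth")
```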
In addition, the NeuMF model, as a general recommender, is generally suitable only for ranking tasks and ranking evaluation metrics such as NDCG and MRR.
Hello @zhengbw0324, thank you for your response and explanation. So the `run_recbole` function is just for testing, right? But when I want to use the model for inference, e.g. to get a list of recommended items for each user, can I use `case_study_example.py` as a reference? I see it shows how to get the top-k items for recommendation.
@adhiiisetiawan I also ran into this problem. However, it is possible to assign a `mode` for the validation and test set separately in the config. Hence, if I set the `mode` for the test set to `full`, the case study works, and I can still use `uni100` for validation. Suppose the model was already trained with `mode` set to `uni100`. In that case, it is possible to call `create_dataset()` after loading the model with an updated config to transform the test data into the `FullSortEvalDataLoader` object that has the required attribute. However, I dislike this solution because it could have unforeseen consequences. Ideally, you would set the `mode` correctly right away.
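A minimal sketch of that workaround, assuming the RecBole checkpoint stores its config under the `"config"` key and that `eval_args["mode"]` accepts the string `"full"` (both assumptions, based on the description above):

```python
# Sketch: load a checkpoint trained with "uni100", flip the eval mode to
# "full", and rebuild the dataloaders so the test split becomes a
# FullSortEvalDataLoader (checkpoint layout assumed).

def rebuild_full_sort_test_loader(checkpoint_path):
    import torch
    from recbole.data import create_dataset, data_preparation

    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    config = checkpoint["config"]          # assumed key for the saved Config
    config["eval_args"]["mode"] = "full"   # override the sampled "uni100" mode
    dataset = create_dataset(config)
    _, _, test_data = data_preparation(config, dataset)
    return test_data
```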
Hi @lukas-wegmeth, thank you for your answer.
I am currently using RecBole for my recommendation task and have successfully trained a model using custom data from atomic files and the NCF/NeuMF algorithm. However, I am struggling to find documentation or resources on how to perform inference with my trained model. Is there any available resource on this matter? Thank you in advance.
[UPDATE]
I tried using `case_study_example.py`, modified for my model. However, when I execute `full_sort_topk` on line 27, I encounter an error: `AttributeError: 'NegSampleEvalDataLoader' object has no attribute 'uid2history_item'`. I think the error is because I use the `uni100` evaluation mode instead of `full`. This is confirmed: when I briefly retrain the model with `full` evaluation mode and ranking-based metrics, the inference runs smoothly. But in my case I want to use value-based metrics, which can't use `full` evaluation. Is it possible, or is there another approach, to use `full` evaluation mode with value-based metrics? Because prediction needs `full` evaluation mode. Or is there another approach to keep using `uni100` evaluation mode with value-based metrics? The goal is to predict/inference with my model.
Here's the error for details
And here's my config for NeuMF
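The attached config is not reproduced here. As a hedged sketch, assuming your RecBole version supports a per-phase `mode` dict under `eval_args` (as described earlier in the thread), a config that samples during validation but fully ranks during test might look like:

```yaml
# Hypothetical NeuMF config sketch: sampled evaluation for validation,
# full ranking for test, so full_sort_topk works at inference time.
model: NeuMF
eval_args:
  split: {RS: [0.8, 0.1, 0.1]}
  group_by: user
  order: RO
  mode:
    valid: uni100
    test: full
metrics: [NDCG, MRR]
topk: 10
```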