facebookresearch / dlrm

An implementation of a deep learning recommendation model (DLRM)
MIT License

How to run inference with ./dlrm_s_criteo_kaggle.sh #361

Open Joshmeaning opened 11 months ago

Joshmeaning commented 11 months ago

Hello

I recently read your DLRM paper and decided to try running the code myself. However, I'm a silly teenager who finds this field quite challenging.

I've trained the model using the command ./bench/dlrm_s_criteo_kaggle.sh --test-freq=1024 and have tweaked values in the Criteo Kaggle dataset to train it in various ways. What I'm really curious about is inference.

  1. Does running ./bench/dlrm_s_criteo_kaggle.sh --inference-only by itself perform inference? (I don't think so, based on the issues you've responded to in the past; it seems the model needs to be saved first. Is that correct?)

  2. When --data-generation=dataset is used, what does the input dataset look like during inference? (Do the original Criteo Kaggle dataset and a dataset where I've manually modified some values have the same input format?)

Thank you.

mnaumovfb commented 9 months ago
  1. If you want to see the values predicted by the trained model, you need to train and save it first, then load it back during inference. If you are only interested in system performance, so that random values are fine, you can use the --inference-only option directly.
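For reference, the train-save-then-load flow could look roughly like this. This is a sketch, assuming the bench script forwards extra options to the underlying dlrm_s_pytorch.py call (which exposes --save-model and --load-model flags); the file path ./trained_dlrm.pt is a placeholder:

```shell
# Train on the Criteo Kaggle dataset and save the trained model.
./bench/dlrm_s_criteo_kaggle.sh --save-model=./trained_dlrm.pt

# Load the saved model and run inference only; predictions now come
# from the trained weights rather than a randomly initialized model.
./bench/dlrm_s_criteo_kaggle.sh --load-model=./trained_dlrm.pt --inference-only
```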
  2. The code will simply pick the part of the dataset indicated by the --raw-data-file or --processed-data-file option; if you pass your custom dataset there, it will use it.
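Concretely, pointing a run at a custom file might look like the following sketch. The flags are the ones dlrm_s_pytorch.py accepts, but the file paths and the modified dataset are hypothetical; the raw file is assumed to follow the Kaggle layout (one record per line: label, 13 integer features, 26 categorical features):

```shell
# Run inference against a custom (e.g. hand-modified) Criteo-format file.
python dlrm_s_pytorch.py \
    --data-generation=dataset \
    --data-set=kaggle \
    --raw-data-file=./input/my_modified_train.txt \
    --processed-data-file=./input/my_modified.npz \
    --load-model=./trained_dlrm.pt \
    --inference-only
```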