torralba-lab / im2recipe

Code supporting the CVPR 2017 paper "Learning Cross-modal Embeddings for Cooking Recipes and Food Images"
MIT License

Fine-tuning the pre-trained model on a custom dataset #18

Open marielvanstav opened 6 years ago

marielvanstav commented 6 years ago

Hi there! I would like to fine-tune the pre-trained model on a custom dataset. If I understand correctly, I have to train a new word2vec model and a new skip-instructions model on the custom dataset, use them to build the custom HDF5 file, and then fine-tune the pre-trained model by training it on that HDF5 file with the parameter -finetune 1.
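For context, once the custom HDF5 file is built I run a quick sanity check on it before training. This is only an illustrative sketch, not part of the repo: it assumes the torch-hdf5 package, and the file name is a placeholder.

```lua
-- Illustrative sanity check of a freshly built HDF5 file before fine-tuning
-- (assumes the torch-hdf5 package; the file name is a placeholder).
require 'hdf5'

local f = hdf5.open('data/custom_data.h5', 'r')
local contents = f:all()   -- read every dataset in the file into a Lua table
for name, value in pairs(contents) do
  print(name, value)       -- value is a tensor, or a nested table for groups
end
f:close()
```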

Is this the correct way to do this? I’m asking because I’m not sure whether I should train the word2vec and skip-instructions models on the custom dataset only, or on a concatenation of Recipe1M and the custom dataset. Additionally, main.lua contains opts.finetune = opts.finetune ~= 0, but I have not been able to figure out how this parameter is used during training. Thanks!
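For reference, my reading of that line is the usual torch.CmdLine idiom of coercing the numeric command-line value into a boolean. The sketch below is only illustrative and not the actual main.lua (option names other than -finetune are placeholders), but it shows the pattern I assume is at play:

```lua
-- Illustrative sketch of the torch.CmdLine pattern behind "-finetune 1"
-- (not the actual im2recipe main.lua; option names other than -finetune
-- are placeholders).
require 'torch'

local cmd = torch.CmdLine()
cmd:option('-finetune', 0, '1 to fine-tune from a pre-trained snapshot, 0 to train from scratch')
cmd:option('-dataset', 'data/custom_data.h5', 'path to the HDF5 file (placeholder name)')
local opts = cmd:parse(arg)

-- "~= 0" coerces the numeric flag into a Lua boolean: -finetune 1 -> true
opts.finetune = opts.finetune ~= 0

if opts.finetune then
  print('fine-tuning from pre-trained weights on ' .. opts.dataset)
else
  print('training from scratch on ' .. opts.dataset)
end
```

What I still can't see is where that boolean actually changes the training behaviour (for example, whether it loads a snapshot, freezes layers, or lowers the learning rate), so any pointer would be appreciated.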