Closed — kassimi98 closed this issue 3 months ago
Hi, thanks for the question.
1 -- The last one. 2/3 -- Model selection and validation during training are not implemented here; I'm afraid you would have to implement your own.
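For what it's worth, a minimal sketch of what "implement your own" validation could look like, using mask IoU as a stand-in selection metric. Everything here is hypothetical and not part of Cutie: the `loader` batch format, the model's forward signature, and the checkpoint naming are assumptions you would adapt to the actual training code.

```python
import torch

def mask_iou(pred: torch.Tensor, gt: torch.Tensor, eps: float = 1e-6) -> float:
    """IoU between two binary masks; a simple stand-in validation metric."""
    pred, gt = pred.bool(), gt.bool()
    inter = (pred & gt).sum().item()
    union = (pred | gt).sum().item()
    return inter / (union + eps)

@torch.no_grad()
def validate(model: torch.nn.Module, loader, device: str = "cuda") -> float:
    """Average mask IoU over a held-out loader, for picking a checkpoint."""
    model.eval()
    scores = []
    for frames, gt_masks in loader:  # hypothetical (input, mask) batches
        logits = model(frames.to(device))          # assumed [B, C, H, W] logits
        pred = logits.argmax(dim=1) > 0            # foreground prediction
        scores.append(mask_iou(pred, gt_masks.to(device) > 0))
    model.train()
    return sum(scores) / max(len(scores), 1)

# Hooked into the training loop at the same 10k-iteration cadence as the
# saved .pth files, e.g.:
#   score = validate(model, val_loader)
#   if score > best_score:
#       best_score = score
#       torch.save(model.state_dict(), f"best_it{it}.pth")
```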
Thank you. Is there a way to evaluate only after pre-training / main-training, to see the results of the final model on test data using the same metrics as in your paper?
Sure. Just use our evaluation pipeline: https://github.com/hkchengrex/Cutie/blob/main/docs/EVALUATION.md
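For context, the paper-style VOS metrics are J (region similarity) and F (boundary accuracy); the pipeline linked above is what actually produces the reported numbers. Purely as an illustration of what J measures (not the official scorer, which also handles boundary F and per-sequence averaging), a minimal NumPy sketch:

```python
import numpy as np

def region_similarity_j(pred: np.ndarray, gt: np.ndarray) -> float:
    """Region similarity 'J': the Jaccard index (IoU) between a predicted
    binary mask and the ground truth. Returns 1.0 when both are empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # no object in either mask counts as perfect agreement
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)
```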
Hi hkchengrex, thank you for your work.
We trained Cutie during the pre-training phase for 80,000 iterations and ended up with an `exp_id_pretraining_last.pth` file. The model saves a `.pth` file every 10,000 iterations, overwriting the previous one. I have a couple of questions: