Closed — liangxiaoyun closed this issue 2 years ago
We happened to run a reproducibility test not long ago, retraining all the models with the provided configs (for the v0.2.1 release), and SDMG-R achieved at least a 0.87 F1 score in 4 trials. Have you compared your training log with ours?
I used 4 GPUs before; training with 1 GPU gives the same results as yours.
@liangxiaoyun maybe we can scale the batch size when we increase the number of GPUs. That should hopefully produce similar results when training on multiple GPUs.
... and tune the learning rate. I guess I can close this issue then.
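For reference, the scaling suggested above is often done with the linear scaling rule: when the effective batch size grows with the number of GPUs, the base learning rate is scaled by the same factor. A minimal sketch (the helper `scaled_lr` is hypothetical, not part of MMOCR; the values are illustrative, assuming a config tuned for 1 GPU):

```python
def scaled_lr(base_lr: float, base_gpus: int, num_gpus: int) -> float:
    """Linear scaling rule: scale the learning rate in proportion to the
    number of GPUs, since the effective batch size grows by the same factor."""
    return base_lr * num_gpus / base_gpus

# Config originally tuned for 1 GPU, now training on 4 GPUs.
print(scaled_lr(1e-3, base_gpus=1, num_gpus=4))  # -> 0.004
```

The same adjustment can be applied directly in the config by multiplying `optimizer.lr` when launching multi-GPU training.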
Hello, I trained the SDMG-R model directly with the config and data (wildreceipt) you provided and found the result was 4 points lower than yours. Do you know the possible reason?