Garry101CN opened 1 month ago
Hi Zhang, Thank you for sharing your nice work. We found that your provided docres.pkl achieves promising results on the Doc3D dataset. We then tried to train a new DocRes model on Doc3D with the same settings (but without parallel training) for 100,000 iterations. The results appear underfit and lag far behind your trained docres.pkl.
Could you please share the training configuration of your docres.pkl model, such as the GPU model and count you used and the total number of training iterations? Additionally, could the lack of multi-GPU parallel training be a possible reason for our model's underfitting?
Best, Gary
As mentioned in our paper, we trained our model on 8 NVIDIA A6000 GPUs with a global batch size of 80. I haven't explored the impact of different batch sizes on this specific task. Are you training solely for the dewarping task? If the dewarping task is trained directly alongside other tasks without pre-training, the performance could degrade significantly. Additionally, are you using the document foreground mask to remove environmental boundaries? This step is also crucial for improving the model's performance on the dewarping task.
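For reference, the 8-GPU, global-batch-80 setup implies a per-GPU batch of 10 under data parallelism. The snippet below is a minimal PyTorch DDP sketch of that launch configuration, not the repository's actual training script; the model and dataset are dummy placeholders.

```python
# Launch with: torchrun --nproc_per_node=8 train_sketch.py
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

# torchrun sets RANK / WORLD_SIZE / LOCAL_RANK in the environment.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# Dummy stand-ins for the real DocRes model and Doc3D dataset.
model = nn.Conv2d(3, 2, kernel_size=3, padding=1).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])
dataset = TensorDataset(torch.randn(800, 3, 256, 256),
                        torch.randn(800, 2, 256, 256))

# Global batch = per-GPU batch x world size: 10 x 8 GPUs = 80.
sampler = DistributedSampler(dataset)
loader = DataLoader(dataset, batch_size=10, sampler=sampler)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

for epoch in range(1):
    sampler.set_epoch(epoch)  # reshuffle shards each epoch
    for x, y in loader:
        x, y = x.cuda(local_rank), y.cuda(local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

dist.destroy_process_group()
```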
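For the masking step, here is a minimal sketch (file names are hypothetical) of applying a binary document foreground mask to zero out the environmental boundaries before the image is fed to the dewarping model:

```python
import cv2
import numpy as np

# Hypothetical inputs: a distorted document photo and a binary foreground
# mask (255 = document, 0 = background) from a segmentation step.
image = cv2.imread("input.png")
mask = cv2.imread("foreground_mask.png", cv2.IMREAD_GRAYSCALE)

# Zero out everything outside the document region so the dewarping model
# never sees the surrounding environment.
mask_bin = (mask > 127).astype(np.uint8)
masked = image * mask_bin[:, :, None]

cv2.imwrite("input_masked.png", masked)
```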