Closed — Devil-Ideal closed this issue 1 year ago
You can have a look at the Table. MIRNetv2 uses fewer parameters and FLOPs than MIRNet. We trained our models on 8 Tesla V100 GPUs. If you have a different GPU, you can adjust the batch size to fit your machine.
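One common way to lower the per-step batch size without changing the effective batch size the paper trained with is gradient accumulation. Below is a minimal, framework-agnostic sketch (the helper name `accumulation_steps` is illustrative, not part of the MIRNetv2 codebase) that computes how many accumulation steps would reproduce the paper's batch of 64 on hardware that fits fewer samples per step:

```python
def accumulation_steps(target_batch, per_gpu_batch, num_gpus=1):
    """Return the number of gradient-accumulation steps needed so that
    per_gpu_batch * num_gpus * steps == target_batch.
    Raises ValueError if the target batch is not evenly divisible."""
    per_step = per_gpu_batch * num_gpus
    if target_batch % per_step != 0:
        raise ValueError("target batch must be divisible by per-step batch")
    return target_batch // per_step

# Paper setting: batch 64 spread over 8 V100s -> 8 samples per GPU, no accumulation
print(accumulation_steps(64, per_gpu_batch=8, num_gpus=8))   # -> 1

# A single GPU that only fits 16 samples per step -> accumulate 4 steps
print(accumulation_steps(64, per_gpu_batch=16, num_gpus=1))  # -> 4
```

With the second configuration you would call `loss.backward()` on each of the 4 micro-batches and step the optimizer only after all 4, which matches the gradient statistics of the larger batch at the cost of more iterations per update.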
Thank you for your reply.
Hi! I'm very interested in your work, especially the lightweight version published in TPAMI. Could you share the type of GPU used, and the total GPU memory and training time required for MIRNetv2? I noticed that the batch size is 64 for MIRNetv2 but only 16 for MIRNet. Since MIRNetv2 uses a larger batch size and patch size, it should require more GPU memory.