tiangexiang / BiX-NAS

[MICCAI 2021] BiX-NAS: Searching Efficient Bi-directional Architecture for Medical Image Segmentation
https://bionets.github.io/

GPU memory cost and reported MACs. #2

Open JasonRichard opened 2 years ago

JasonRichard commented 2 years ago

Thanks for your extraordinary work!

In Phase 2, everything goes well when 15 subnetworks are sampled at iteration 5. But at iteration 4, the number of subnetworks triples to 45, since 3 networks on the Pareto front were obtained in the previous iteration. The 45 subnetworks cause an OOM error on a single NVIDIA GeForce RTX 2080 Ti (11 GB) in my case. How much GPU memory does the whole experiment pipeline require?

I was able to reproduce the MoNuSeg mIoU and DICE for the Phase 1 search. The parameters are the same, but the MACs reported in the training log seem to be 15-20x what the paper reports. Is that because MACs are calculated with a smaller input size (resolution) to align with other works?
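For context, convolution MACs scale with the output resolution, so a smaller evaluation input alone can produce a gap of roughly this size. A rough back-of-envelope check (the layer and input sizes below are hypothetical, not taken from BiX-NAS):

```python
def conv2d_macs(c_in, c_out, k, h_out, w_out):
    # One multiply-accumulate per kernel element, per input channel,
    # per output channel, per output pixel (stride 1 assumed).
    return c_in * c_out * k * k * h_out * w_out

# Hypothetical 3x3 conv with 32 -> 64 channels at two input resolutions.
full = conv2d_macs(32, 64, 3, 512, 512)   # e.g. larger training crops
small = conv2d_macs(32, 64, 3, 128, 128)  # a smaller benchmark input
print(full / small)  # 16.0: a 4x smaller side gives 16x fewer MACs
```

Halving each spatial side quarters the MACs, so a 15-20x discrepancy is consistent with evaluating at a substantially smaller resolution.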

best,

MAGNOLIAw commented 2 years ago

Hi,

Larger GPU memory is required. With an 11 GB GPU, we suggest reducing the network size or keeping only the best-accuracy model on the Pareto front.
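One way to follow the "keep only the best-accuracy model on the Pareto front" suggestion is to prune the checkpoints held in memory after each iteration. A minimal sketch (the candidate list and its fields are hypothetical, not the repo's actual data structures):

```python
def keep_best_checkpoint(pareto_front):
    """Given (accuracy, checkpoint) pairs for the current Pareto front,
    return only the most accurate one so the rest can be freed."""
    return max(pareto_front, key=lambda cand: cand[0])

# Hypothetical front: three subnetworks surviving one search iteration.
front = [(0.81, "ckpt_a"), (0.84, "ckpt_b"), (0.79, "ckpt_c")]
best = keep_best_checkpoint(front)
print(best)  # (0.84, 'ckpt_b')
```

Dropping the other checkpoints (and their optimizer states) before the next iteration keeps the number of resident models constant instead of tripling.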

I checked the MACs again on our GPU/CPU by loading our pre-trained model: the MACs and parameter counts match what we reported. However, we found that when the model is loaded on a Google Colab GPU, the MACs can differ; MACs can change across computing architectures. We will add this point to the README.

If you find that the MACs differ on your GPU, you can save the trained model and run inference on the CPU (which should be quick) to compute the MACs; they should match what we reported.

Thank you for your time.

regards, Xinyi