Closed by @Diane0323 3 years ago
Hi @Diane0323, the FLOPs are calculated with thop (https://github.com/Lyken17/pytorch-OpCounter). And yes, the public model is essentially our best model; the result reported in the letter is closer to an average over runs.
@likyoo Thank you! I use thop like this; the output for c16 is 13.78 GFLOPs. I don't know if I have used it incorrectly.

```python
import torch
from thop import profile

model = SNUNet_ECAM(in_ch=3, out_ch=2)  # SNUNet_ECAM from this repo
x = torch.randn(1, 3, 256, 256)
y = torch.randn(1, 3, 256, 256)
flops, params = profile(model.cpu(), inputs=(x, y))
print("%.2f GFLOPs" % (flops / 1e9), "%.2f M" % (params / 1e6))
```
I found that my result and the number in the paper differ by a factor of 2. Should I just multiply by 2?
Thank you very much!
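The factor-of-two question above usually comes down to units: `thop.profile` reports MACs (multiply-accumulate operations), while many papers report FLOPs and count each MAC as one multiply plus one add, i.e. FLOPs = 2 × MACs. A minimal hand count for a single convolution layer (hypothetical shapes, not SNUNet's actual layers) sketches the conversion:

```python
# Hand-count the cost of one Conv2d layer (hypothetical shapes for illustration).
in_ch, out_ch, k = 3, 8, 3   # input channels, output channels, kernel size
H = W = 32                   # output spatial size (stride=1, padding=1)

# Each output element needs in_ch * k * k multiply-accumulates.
macs = out_ch * H * W * in_ch * k * k

# Common paper convention: one MAC = one multiply + one add = 2 FLOPs.
flops = 2 * macs

print(macs, flops)
```

So if the letter counts FLOPs in this convention and thop returns MACs, multiplying the thop output by 2 would reconcile the two numbers.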
Thank you very much for your excellent work. The FLOPs I calculated are inconsistent with the results in this letter, and I want to know how you calculated them. If it is convenient, could you share the code for calculating FLOPs? I also noticed that the accuracy of the public model is higher than that reported in the letter. Is this the best model you trained, or is the difference caused by testing on different hardware? I'm looking forward to your reply.