Amshaker / unetr_plus_plus

[IEEE TMI-2024] UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation
Apache License 2.0

About FLOPs on BraTS #44

Closed qiaoqiangPro closed 1 year ago

qiaoqiangPro commented 1 year ago

Hi, I see in your paper that the FLOPs of nnFormer on BraTS are about four times larger than those of your architecture. When I measure nnFormer myself, I do not get that number. What am I doing wrong?

Thank you in advance for a speedy reply, and good luck!

Amshaker commented 1 year ago

Hi @qiaoqiangPro ,

Please note that FLOPs depend on the input size used for each dataset. For instance, nnFormer's FLOPs on Synapse (input size 128x128x74) are 213.4 G, while its FLOPs on BraTS (input size 128x128x128) are 421.5 G.

I am not sure what you did wrong, but we measure the FLOPs as follows:

import torch
from fvcore.nn import FlopCountAnalysis  # FLOP counter from fvcore

# Count trainable parameters.
n_parameters = sum(p.numel() for p in self.network.parameters() if p.requires_grad)
# Dummy input matching the BraTS crop: 4 modalities, 128x128x128 voxels.
input_res = (4, 128, 128, 128)
dummy_input = torch.ones(()).new_empty((1, *input_res),
                                       dtype=next(self.network.parameters()).dtype,
                                       device=next(self.network.parameters()).device)
# Trace the forward pass and accumulate per-operator FLOP counts.
flops = FlopCountAnalysis(self.network, dummy_input)
model_flops = flops.total()
print(f"Total trainable parameters: {round(n_parameters * 1e-6, 2)} M")
print(f"MAdds: {round(model_flops * 1e-9, 2)} G")

I hope it is clear now.

Best regards, Abdelrahman.

qiaoqiangPro commented 1 year ago

I tested on the BraTS dataset by adding your FLOPs-measurement code to the nnFormer codebase, and the number I got is very different from the one reported in the paper, so I am rather confused. When you re-measured nnFormer, did you also get such a large FLOP count?

Amshaker commented 1 year ago

We had this discussion by email.

liaochuanlin commented 11 months ago

@qiaoqiangPro Were you able to reproduce the results on the BraTS (brain) data?