Closed. abcxubu closed this issue 1 year ago.
During inference, we only use the original decoder; the other two decoders are used only during training. Therefore, the model size is computed at inference time. As for the multi-scale MC-Net+ model, we built this variant on the URPC backbone; specifically, we applied our mutual-consistency constraints to train the URPC model. Note that we also report the inference-time model size for this variant. Thanks.
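To make the relation between the training-time and reported inference-time parameter counts concrete, here is a minimal PyTorch sketch. The class and module names (`MCNetPlusSketch`, `decoder_main`, `decoder_aux1`, `decoder_aux2`) are hypothetical placeholders, not the authors' actual classes; the point is only that the auxiliary decoders exist during training but are excluded when counting the deployed model's parameters.

```python
import torch.nn as nn

# Minimal sketch (hypothetical names, not the repository's exact classes):
# one shared encoder, three independent decoders whose weights are NOT shared.
class MCNetPlusSketch(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, width=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder_main = nn.Conv3d(width, n_classes, 1)  # used at inference
        self.decoder_aux1 = nn.Conv3d(width, n_classes, 1)  # training only
        self.decoder_aux2 = nn.Conv3d(width, n_classes, 1)  # training only

    def forward(self, x, training=False):
        feat = self.encoder(x)
        if training:
            # All three outputs feed the mutual-consistency losses.
            return (self.decoder_main(feat),
                    self.decoder_aux1(feat),
                    self.decoder_aux2(feat))
        # At test time only the main decoder is run.
        return self.decoder_main(feat)


def count_params(*modules):
    return sum(p.numel() for m in modules for p in m.parameters())


model = MCNetPlusSketch()
total_params = count_params(model)                                    # training-time size
inference_params = count_params(model.encoder, model.decoder_main)    # reported size
print(f"training: {total_params}, inference: {inference_params}")
```

Under this counting convention, MC-Net+ reports the same number of parameters as its single encoder-decoder backbone, since the extra decoders are dropped at test time.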
I see. Thanks for your reply.
Thanks for sharing the code. I have a question about the parameter count of MC-Net+. The backbone of MC-Net+ is V-Net, and MC-Net+ has one encoder and three decoders (reading your code, I found that these decoders do not share weights). In Tab. 2, you report that the parameter count of both V-Net and MC-Net+ is 9.44M, while the parameter count of the multi-scale MC-Net+ is 5.88M. Why do V-Net and MC-Net+ have the same parameter count? Why does the multi-scale MC-Net+ have fewer parameters? Could you explain this? Thank you.