tinyvision / PreNAS

The official implementation of paper PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search
Apache License 2.0

Question regarding flops calculation #2

Open maryanpetruk opened 1 year ago

maryanpetruk commented 1 year ago

Dear authors, @xiuyu-sxy

I would like to verify the performance measures you report in your work, in particular the FLOPs calculation. Could you please let me know how one can reproduce the FLOPs numbers from the table in the README?

[image: FLOPs table from the README]

Thank you

BeachWang commented 1 year ago

You can use the `get_complexity` function in `Vision_TransformerSuper` to obtain the FLOPs numbers.

maryanpetruk commented 1 year ago

> You can use the `get_complexity` function in `Vision_TransformerSuper` to obtain the FLOPs numbers.

Sure, but I don't understand what `sequence_length` should be to reproduce the numbers you report; it seems inconsistent across the three supernets (tiny, small, and base). Could you be so kind as to provide the value for the `sequence_length` parameter, or the individual values if they differ between the three supernets?

BeachWang commented 1 year ago

The image size is 224x224 in ImageNet, and the patch size in ViT is 16x16, so I think the `sequence_length` is 14x14 = 196 for all three supernets.
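
The derivation above can be sketched as follows. This is a minimal illustration assuming standard non-overlapping ViT patch embedding; the `model.get_complexity(...)` call at the end is shown only as a usage hint based on the reply earlier in this thread, with `model` standing in for an instantiated `Vision_TransformerSuper`:

```python
# ViT tokenization: a 224x224 ImageNet image split into non-overlapping
# 16x16 patches yields (224 // 16) x (224 // 16) = 14 x 14 tokens.
image_size = 224
patch_size = 16
sequence_length = (image_size // patch_size) ** 2
print(sequence_length)  # 196, for all three supernets (tiny/small/base)

# Per the reply above, the FLOPs would then be queried roughly as:
# flops = model.get_complexity(sequence_length)
```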