Closed · weimisa341 closed 4 years ago
The code measuring FLOPs under each model is using 32000 samples
dummy_input = torch.rand(1, 1, 32000).cuda()
If you take a look at the reported numbers in the paper, the caption specifies: "Table 1: SI-SDRi separation performance for all models on both separation tasks (speech and non-speech) alongside their computational requirements for performing inference on CPU (I) and a backward update step on GPU (B) for one second of input audio or equivalently 8000 samples"
I will update the instructions for getting all the numbers for the computational requirements soon.
Thank you for your help!
Hello! I used your code to compute FLOPs, but I get different results — why? The parameter counts do match. I compute GFLOPs in the following way: macs, params = profile(model, inputs=(dummy_input,)); GFLOPs = macs / 10**9
For example, on CPU: for ConvTasNet your result is 5.23 GFLOPs, but my test gives 20.5; for SudoRM-RF 1.0x your result is 2.52 GFLOPs, but my test gives 9.87.
My results are almost four times yours. As far as I can tell, the GFLOPs measurement is the same across different CPUs.
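The roughly 4x gap is consistent with the input lengths mentioned above: the repo's dummy input is 32000 samples, while the paper reports cost per one second of 8 kHz audio, i.e. 8000 samples. Since the FLOPs of these convolutional models scale roughly linearly with input length, dividing the 32000-sample measurement by 4 recovers numbers close to the paper's. A minimal sketch of this normalization (the function name and the linear-scaling assumption are illustrative, not from the repo):

```python
# Normalize a FLOPs measurement to the paper's reporting unit:
# cost per 8000 samples (one second of 8 kHz audio).
# Assumes FLOPs scale linearly with the number of input samples,
# which holds approximately for fully convolutional models.

PAPER_SAMPLES = 8000  # paper reports cost for 8000 samples (1 s at 8 kHz)


def per_second_gflops(measured_gflops, n_samples, unit=PAPER_SAMPLES):
    """Rescale GFLOPs measured on an n_samples input to per-unit cost."""
    return measured_gflops * unit / n_samples


# Measurements taken with the repo's 32000-sample dummy input:
print(per_second_gflops(20.5, 32000))   # ConvTasNet  -> 5.125 (paper: 5.23)
print(per_second_gflops(9.87, 32000))   # SudoRM-RF   -> ~2.47 (paper: 2.52)
```

The small remaining differences (5.125 vs 5.23, 2.47 vs 2.52) could come from profiler versions or which operations the counter includes, but the factor of four itself is explained by the input length alone.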