Open amazingpanpanda opened 1 month ago
Your local machine is 4090?
A100
Wow. I tested the inference latency with our training script, with the backward-pass code commented out.
Oh, I see what you mean. Inference latency means the time for one forward pass. The test script runs the forward pass multiple times to get the best number.
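Since the thread hinges on how a "best forward time" is measured, here is a minimal timing sketch of that protocol (warmup, then best-of-N). This is an illustration, not code from this repo; `forward_fn` is a placeholder for the real model call.

```python
import time

def measure_latency_ms(forward_fn, n_warmup=10, n_runs=100):
    """Best (minimum) latency over n_runs forward calls, in milliseconds.

    Warmup iterations are discarded so one-time costs (allocator growth,
    cuDNN autotuning on GPU) do not inflate the number. On CUDA you would
    also call torch.cuda.synchronize() before each clock read, because
    kernel launches are asynchronous.
    """
    for _ in range(n_warmup):
        forward_fn()
    best = float("inf")
    for _ in range(n_runs):
        t0 = time.perf_counter()
        forward_fn()
        best = min(best, time.perf_counter() - t0)
    return best * 1000.0
```

Used as, e.g., `measure_latency_ms(lambda: model(dummy_input))` with a fixed dummy input. Reporting the minimum rather than the mean is what makes forwarding multiple times yield the "best" number mentioned above.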
So may I ask whether the "44ms" mentioned in the third part of Figure 1 of the paper, under Faster Speed, refers to the time required for backbone inference, rather than the time required for the model to infer one frame of data?
Actually, a single forward pass per frame is sufficient for the application. (Simply interpolating the prediction to the original resolution also performs well.) The complex testing process is just for getting a higher number.
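To make the interpolation remark concrete: a low-resolution prediction map can simply be upsampled back to the input size. Below is a pure-Python nearest-neighbor sketch for a 2-D map, as a stand-in only; in PyTorch the same thing is one `torch.nn.functional.interpolate(..., mode="bilinear")` call on the logits.

```python
def upsample_nearest(pred, out_h, out_w):
    """Nearest-neighbor upsampling of a 2-D prediction map (list of rows).

    Maps each output pixel back to its source pixel via integer scaling.
    A real pipeline would use F.interpolate on the logit tensor instead.
    """
    in_h, in_w = len(pred), len(pred[0])
    return [
        [pred[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]
```

For example, a 2x2 prediction upsampled to 4x4 just replicates each source pixel into a 2x2 block.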
May I ask how long it takes you to infer one frame of data during testing?
Do you mean the current default test script?
yes
I think it should be about the same as for you. As I usually run testing automatically after training (with 8 GPUs), I didn't measure the time and just waited for the training to end.
OK, thank you for your answer.
If you want to save more time, you can reduce the number of augmentations used for testing.
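Each extra test-time augmentation costs one more forward pass per image (two with flipping), which is where the saving comes from. A minimal sketch of such a loop, with `forward_fn` and `hflip` as hypothetical placeholders and resizing only indicated by a comment:

```python
def hflip(x):
    # placeholder horizontal flip for a 1-D prediction list
    return x[::-1]

def tta_predict(forward_fn, image, scales=(1.0,), flip=True):
    """Average predictions over test-time augmentations.

    Cost is len(scales) forward passes per image, doubled when flip=True,
    so shrinking `scales` (or disabling flip) cuts test time proportionally.
    """
    preds = []
    for s in scales:
        # a real pipeline would resize `image` by factor `s` here
        preds.append(forward_fn(image))
        if flip:
            # flip the input, predict, then flip the prediction back
            preds.append(hflip(forward_fn(hflip(image))))
    return [sum(v) / len(preds) for v in zip(*preds)]
```

With a typical six-scale-plus-flip setting that is 12 forward passes per image; `scales=(1.0,), flip=False` is the single-pass case discussed above.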
Yes, I found through testing that even with only one augmentation, the inference time can reach 200 ms to 400 ms.
Usually, that is good enough. But you know, for benchmarking we are usually willing to spend more testing time for even a 0.1% improvement.
OK