cuiziteng / Illumination-Adaptive-Transformer

🌕 [BMVC 2022] You Only Need 90K Parameters to Adapt Light: A Light Weight Transformer for Image Enhancement and Exposure Correction. SOTA for low-light enhancement, 0.004 seconds per image; try this for pre-processing.
Apache License 2.0

Questions regarding inference time measurement #60

Closed yiming0416 closed 11 months ago

yiming0416 commented 11 months ago

Hi,

Thanks for your great work! I have a question about how inference time is measured. Since you are running the model on a GPU with CUDA, I don't think time.time() is a correct approach, because CUDA kernel launches are asynchronous: the Python call returns before the GPU has finished the work. Instead, I believe the following is the correct way to measure inference time with PyTorch on a GPU:

import torch  # assumes model and low_img are already on the GPU

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
_, _, enhanced_img = model(low_img)
end.record()

# Wait for all queued GPU work to finish before reading the timer
torch.cuda.synchronize()
total_time += start.elapsed_time(end)  # elapsed_time() returns milliseconds

See this reference: https://discuss.pytorch.org/t/how-to-measure-time-in-pytorch/26964

When I measure inference time with the snippet above, it is much slower than the speed reported in the paper. I would like to get your opinion on this. Thanks.
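For a fairer comparison it also helps to add untimed warm-up runs (to absorb CUDA context setup, cuDNN autotuning, and cache effects) and to average over many iterations. Below is a minimal, hedged sketch of that protocol; `benchmark` is a hypothetical helper name, and it uses CPU wall-clock timing via `time.perf_counter` so it runs anywhere. On a GPU you would swap the timer for the `torch.cuda.Event` pairs shown above and call `torch.cuda.synchronize()` before reading the clock.

```python
import time

def benchmark(fn, *args, warmup=3, iters=10):
    """Average wall-clock seconds per call of fn(*args).

    Hypothetical helper: runs `warmup` untimed calls first,
    then times `iters` calls and returns the mean.
    On GPU, replace perf_counter with torch.cuda.Event pairs
    and synchronize before reading the timer.
    """
    for _ in range(warmup):
        fn(*args)  # untimed warm-up (caches, lazy init, autotuning)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - start) / iters

# Usage with a stand-in workload instead of a real model call:
avg = benchmark(lambda: sum(i * i for i in range(10_000)))
```

Averaging over iterations matters because a single timed call is dominated by launch overhead and scheduler noise, especially for a model as small as this one.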

cuiziteng commented 11 months ago

Thanks so much for your advice, I got it ~

Also, it's fine to use this method to measure time and make fair comparisons.