🌕 [BMVC 2022] You Only Need 90K Parameters to Adapt Light: A Lightweight Transformer for Image Enhancement and Exposure Correction. State of the art for low-light enhancement, at 0.004 seconds per image — try it for pre-processing.
Apache License 2.0 · 459 stars · 43 forks
Questions regarding inference time measurement #60
Hi,

Thanks for your great work! I have a question about how inference time is measured. Since inference here runs on the GPU with CUDA, I don't think `time.time()` is a correct approach, because CUDA calls are asynchronous. Instead, I believe the following code is the correct way to measure inference time with PyTorch on a GPU (see this reference: https://discuss.pytorch.org/t/how-to-measure-time-in-pytorch/26964):

If I measure inference time this way, it is far slower than the speed claimed in the paper. I would like to get your opinion on this. Thanks.
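The commenter's snippet itself was not preserved in this thread, but the pattern they are describing — CUDA events with explicit synchronization instead of `time.time()` — can be sketched as below. The function name, argument names, and rep counts are illustrative, not from the original issue; the sketch falls back to a wall-clock timer when no GPU is available.

```python
import time
import torch

def measure_inference_time(fn, inp, reps=100, warmup=10):
    """Time `fn(inp)` per call, in seconds (illustrative sketch).

    On GPU, CUDA kernel launches return before the kernels finish, so we
    bracket the timed loop with CUDA events and synchronize; otherwise we
    use a plain wall-clock timer.
    """
    with torch.no_grad():
        for _ in range(warmup):  # warm-up: first calls include one-time setup cost
            fn(inp)
        if torch.cuda.is_available():
            start = torch.cuda.Event(enable_timing=True)
            end = torch.cuda.Event(enable_timing=True)
            torch.cuda.synchronize()  # drain any pending work before timing
            start.record()
            for _ in range(reps):
                fn(inp)
            end.record()
            torch.cuda.synchronize()  # wait until all timed kernels have finished
            return start.elapsed_time(end) / reps / 1000.0  # elapsed_time is in ms
        t0 = time.perf_counter()
        for _ in range(reps):
            fn(inp)
        return (time.perf_counter() - t0) / reps
```

Timing with `time.time()` around an un-synchronized GPU call only measures kernel launch overhead, which is why event-based timing typically reports substantially larger numbers.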