Closed. ZHAOZHIHAO closed this issue 1 year ago.
Hi, thanks for reporting.
Could you also show how you run the non-batch inference? I did a simple test and both gave the same results, but maybe you are doing it differently. Please provide a more complete example that shows the difference.
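A simple test of this kind can be sketched as follows. This is a hedged, generic example (a stand-in `torch.nn` model, not ptlflow's actual API): run the same inputs once as a batch and once image by image, and compare the outputs. In `eval()` mode and under `no_grad()`, the two should agree up to floating-point tolerance; train-mode layers such as BatchNorm or Dropout are a common cause of mismatch.

```python
# Hypothetical stand-in model; the real optical-flow model would go here.
import torch

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())
model.eval()  # important: train-mode BatchNorm/Dropout would break the comparison

batch = torch.randn(4, 8)
with torch.no_grad():
    batched = model(batch)                                   # one batched pass
    sequential = torch.cat([model(x.unsqueeze(0)) for x in batch])  # one image at a time

# Batched and per-image inference should match to floating-point tolerance.
assert torch.allclose(batched, sequential, atol=1e-6)
```

If this assertion fails for a real model, checking that the inputs to both runs are byte-identical is a good first step (which turned out to be the issue in this thread).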
Best,
Hi, yesterday I was testing on the CPU. I'll test on the GPU and see whether the results are aligned.
While using the GPU, I found that the model consumes more than 32 GB of memory when I run a batch of images sequentially, which is unreasonable. Wrapping the inference in `with torch.no_grad():` solves this problem.
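A minimal sketch of why sequential inference can consume tens of GB without `torch.no_grad()` (generic torch model, not this repository's code): each forward pass records an autograd graph, and storing the outputs keeps every one of those graphs alive, so memory grows with the number of images. `no_grad()` disables graph recording entirely.

```python
import torch

model = torch.nn.Linear(8, 8)                 # stand-in for the optical-flow model
images = [torch.randn(1, 8) for _ in range(4)]

# Without no_grad: each stored output drags its whole computation graph along,
# so memory grows with every iteration of the loop.
with_graph = [model(x) for x in images]
assert all(out.grad_fn is not None for out in with_graph)

# With no_grad: no graph is built, so memory stays flat across iterations.
with torch.no_grad():
    outputs = [model(x) for x in images]
assert all(out.grad_fn is None for out in outputs)
```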
Best
Hi,
I think the batch version is correct after all. I had been feeding different inputs to the batch and sequential runs before.
Best
Hi,
First of all, thanks for your nice library.
I am trying to run inference on a batch, as in https://github.com/hmorimitsu/ptlflow/issues/28, but the results do not seem to match non-batch inference. My code for the batch is as follows:
Best