Closed · hua0x522 · closed 3 months ago
@ys-2020, could you please take a look at this issue when you have time? Thanks!
Hello, the .whl file on the server is still unreachable.
There's no bug here. I think your batch size is too large for your GPU VRAM. Reduce the batch size so your model can fit in memory.
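If it really is a capacity problem, one quick way to check is to measure peak allocator usage at a batch size that works and compare it against the device's total memory. Below is a minimal sketch using standard PyTorch APIs; the eval-loop placeholder stands in for whatever evaluate.py actually runs. If the peak at batch_size=6 is already near the card's capacity, batch_size=8 simply cannot fit.

```python
import torch

# Reset the allocator's high-water mark before the measured run.
torch.cuda.reset_peak_memory_stats()

# ... run one pass of evaluate.py's eval loop at batch_size=6 here ...

torch.cuda.synchronize()  # make sure all queued kernels have finished
peak = torch.cuda.max_memory_allocated() / 2**30
total = torch.cuda.get_device_properties(0).total_memory / 2**30
print(f"peak {peak:.2f} GiB of {total:.2f} GiB total")
```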
Is there an existing issue for this?
Current Behavior
I tried to run evaluate.py in the AE (artifact evaluation) of TorchSparse++, which accepts a 'batch_size' flag. However, if I set batch_size >= 8, it reports "CUDA error: an illegal memory access was encountered". With batch_size set to 1 through 6 it runs normally. The error log is:
Expected Behavior
No response
Environment
Anything else?
No response
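For anyone who wants to pin down where the access fault actually happens: running with CUDA_LAUNCH_BLOCKING=1 forces synchronous kernel launches, so PyTorch raises the error at the offending kernel rather than at a later synchronization point, making the stack trace meaningful. A hedged sketch follows; the exact `--batch_size` flag spelling is assumed from the report, and the script name is as given above.

```python
import os
import subprocess

# CUDA_LAUNCH_BLOCKING=1 makes every kernel launch synchronous, so the
# "illegal memory access" surfaces at the kernel that caused it instead
# of at a later sync point.
env = dict(os.environ, CUDA_LAUNCH_BLOCKING="1")
subprocess.run(
    ["python", "evaluate.py", "--batch_size", "8"],  # flag spelling assumed
    check=True,
    env=env,
)
```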