hua0x522 closed this issue 9 months ago
Hi @hua0x522, thank you for your interest in TorchSparse! Did you build the docker container for the artifact evaluation? It looks like you are running it in your local environment. The problem is that you have installed TorchSparse v2.1.0, while you are running the benchmark code for TorchSparse v2.0 (in the folder artifact-p1).
To run the benchmark code for v2.1.0, you should switch to the folder artifact-p2 and remove your change to torchsparse/nn/functional/conv/conv.py. Additionally, I strongly recommend that you follow the README.md in artifact-p2 and build the docker container for the benchmark evaluation.
Finally, the GPU you are using might be a bit too old (it may not support fp16 arithmetic), which means that you may not be able to reproduce the figures in our paper with this GPU.
Thank you.
Thank you for your patient explanation. Now I can correctly execute the AE code in artifact-p2. By the way, may I ask why the batch size in the evaluation of TorchSparse++ is set to 1 or 2, instead of a larger batch size like 4, 8, or 16?
Hello author, I found that in the evaluation, the MinkUNet model outputs of spconv and TorchSparse++ are different (in artifact-p2's evaluate.py, the cosine similarity between the model outputs is approximately 0.81). I made sure each backend uses the same input point clouds. Also, the cosine similarity between the ME and TorchSparse++ outputs is approximately 0.99. I am not very familiar with this field and may have made some naive mistakes. Looking forward to your reply.
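For reference, the comparison described above can be sketched as follows. This is a minimal, hypothetical illustration of measuring agreement between two backends' outputs with cosine similarity; `output_similarity` is an illustrative helper, not the actual function used in evaluate.py.

```python
import torch
import torch.nn.functional as F

def output_similarity(out_a: torch.Tensor, out_b: torch.Tensor) -> float:
    """Flatten both model outputs and return their cosine similarity.

    Identical outputs give a value near 1.0; diverging kernels
    (e.g. different numerical precision or accumulation order)
    pull the value lower.
    """
    a = out_a.reshape(-1).float()
    b = out_b.reshape(-1).float()
    return F.cosine_similarity(a, b, dim=0).item()

# Example: a tensor compared with itself is maximally similar.
x = torch.randn(1000)
print(output_similarity(x, x))  # ~1.0
```

A similarity of ~0.99 (as between ME and TorchSparse++) is typical of small floating-point differences, while ~0.81 suggests a more substantive divergence, such as mismatched weights or layouts between the backends.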
Is there an existing issue for this?
Current Behavior
I encountered problems when reproducing the AE of TorchSparse++. I downloaded the code from https://zenodo.org/records/8311889 and used the datasets provided by the authors, which had been preprocessed.
My GPU is:
GPU 0: Tesla V100-PCIE-32GB (UUID: GPU-b57016fe-8dca-4290-b860-a09e19c8fb30)
Before encountering this problem, I first got this one. I tried to fix it by simply ignoring the config passed into torchsparse/nn/functional/conv/conv.py:
Expected Behavior
No response
Environment
Anything else?
No response