Closed tanghaotommy closed 1 month ago
Hi, can you try to test the fvdb installation by running python setup.py test?
Same issue here.
I tried to compile and run the code on an A100 GPU, which works fine. The previous error I got was on a V100 GPU.
I cannot reproduce the same error - when I ran python setup.py test, it now throws a segmentation fault. I will report back once I can get the test to run. I vaguely remember that the last time I ran the test on a V100 GPU, it gave an error along the lines of "requires Ampere GPU". But the authors said in the paper that they trained the model on V100s, so I am not sure.
@tanghaotommy Thanks for your update!
For the paper model, we trained part of it on V100 GPUs. However, with the continuous development of the sparse 3D deep learning library, we decided to require Ampere or newer GPUs for the public release, which is easier to maintain.
For V100 GPUs, you may still be able to use the library, but some lines of code might need to be modified. I will update the README with this information.
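For others landing on this issue: a quick way to check whether your GPU meets the Ampere requirement is to query its compute capability (Ampere cards such as the A100 report major version 8; Volta cards such as the V100 report 7). This is only a sketch assuming PyTorch is installed; it degrades gracefully when no GPU is visible:

```python
# Sketch: check whether the current GPU is Ampere (sm_80) or newer.
# Assumes PyTorch; the threshold (major >= 8) follows NVIDIA's compute
# capability numbering (Volta/V100 = 7.0, Ampere/A100 = 8.0).

def is_ampere_or_newer(major: int) -> bool:
    """Return True for compute capability major version 8 (Ampere) or above."""
    return major >= 8

def check_gpu() -> str:
    try:
        import torch
    except ImportError:
        return "PyTorch not installed"
    if not torch.cuda.is_available():
        return "no CUDA device visible"
    major, minor = torch.cuda.get_device_capability(0)
    name = torch.cuda.get_device_name(0)
    if is_ampere_or_newer(major):
        return f"{name} (sm_{major}{minor}): supported"
    return f"{name} (sm_{major}{minor}): pre-Ampere, may need code changes"

if __name__ == "__main__":
    print(check_gpu())
```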
Dear authors,
Thanks for open-sourcing this great work!
I was able to install the dependencies, in particular fvdb, following the discussion in issue #2.
However, when I tried to run the inference for generating a chair by
I had the following error:
I looked a bit further and found that res_coarse has no voxels inside, so res_coarse.normal_features is empty. This appears to be because the VAE decoder's result is empty (output_x.jdata is tensor([], device='cuda:0', size=(0, 64))), returned from https://github.com/nv-tlabs/XCube/blob/main/xcube/models/diffusion.py#L740. Debugging further, it turns out that the 3D sparse convolution module returns all zeros (out_feature.jdata) for out_feature at Line 313 in fvdb/nn/module.py from the fvdb pull request.
I don't know how to debug further, as this call goes into the C++ module. I would really appreciate any feedback or insights on this issue, or on anything I may have missed. Thank you very much!
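For anyone hitting the same symptom, one lightweight way to narrow down where the pipeline breaks is to log the element count and absolute sum of each intermediate tensor and classify the result. This is only a debugging sketch; the tensor names and the idea of calling it on out_feature.jdata are assumptions based on the trace above, not part of the XCube or fvdb API:

```python
def classify_tensor(name: str, num_elements: int, abs_sum: float) -> str:
    """Classify a debug observation about an intermediate tensor.

    In an actual session one would pass, for example,
        classify_tensor("out_feature.jdata",
                        out_feature.jdata.numel(),
                        out_feature.jdata.abs().sum().item())
    assuming, as in the trace above, that fvdb's JaggedTensor exposes
    its underlying torch tensor via .jdata.
    """
    if num_elements == 0:
        return f"{name}: EMPTY (no voxels survived)"
    if abs_sum == 0.0:
        return f"{name}: ALL ZEROS ({num_elements} elements)"
    return f"{name}: looks OK"

# The two symptoms reported above would classify as:
print(classify_tensor("output_x.jdata", 0, 0.0))    # decoder output is empty
print(classify_tensor("out_feature.jdata", 64, 0.0))  # conv output is all zeros
```

Dropping a check like this after the VAE decoder and after the sparse convolution makes it easy to tell whether the grid is pruned to nothing (empty) or whether the convolution itself is producing zeros, which are two different failure modes.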