VAST-AI-Research / TripoSR


module 'torchmcubes_module' has no attribute 'mcubes_cuda' #3

Open Daniel9D opened 8 months ago

Daniel9D commented 8 months ago

    Traceback (most recent call last):
      File "D:\3d\TripoSR\run.py", line 154, in <module>
        meshes = model.extract_mesh(scene_codes)
      File "D:\3d\TripoSR\tsr\system.py", line 185, in extract_mesh
        v_pos, t_pos_idx = self.isosurface_helper(-(density - threshold))
      File "D:\3d\TripoSR\env\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "D:\3d\TripoSR\env\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\3d\TripoSR\tsr\models\isosurface.py", line 45, in forward
        v_pos, t_pos_idx = self.mc_func(level.detach(), 0.0)
      File "D:\3d\TripoSR\env\lib\site-packages\torchmcubes\__init__.py", line 12, in marching_cubes
        return mc.mcubes_cuda(vol, thresh)
    AttributeError: module 'torchmcubes_module' has no attribute 'mcubes_cuda'. Did you mean: 'mcubes_cpu'?

Tried to re-install and still getting the error; using Windows.

bennyguo commented 8 months ago

https://github.com/VAST-AI-Research/TripoSR/issues/1 could be related. Could you please try the solution here https://github.com/VAST-AI-Research/TripoSR/issues/1#issuecomment-1977877761 and if it doesn't work could you please provide the output of pip freeze?

chrisbward commented 8 months ago

#1 could be related. Could you please try the solution here #1 (comment) and if it doesn't work could you please provide the output of pip freeze?

Tried that, no dice.

chrisbward commented 8 months ago

could this help? https://github.com/myavartanoo/3DIAS_PyTorch/issues/2#issuecomment-1554085491

bennyguo commented 8 months ago

could this help? myavartanoo/3DIAS_PyTorch#2 (comment)

I think the current requirements.txt does install from the git repo.

runshengdu commented 8 months ago

I hope the Tripo team can fix it :(

yuhuangyue commented 8 months ago

I found a tricky workaround.

You can switch to mc.mcubes_cpu to export your mesh.

Modify the code in models/isosurface.py:

    def forward(
        self,
        level: torch.FloatTensor,
    ) -> Tuple[torch.FloatTensor, torch.LongTensor]:
        level = -level.view(self.resolution, self.resolution, self.resolution)
        v_pos, t_pos_idx = self.mc_func(level.detach().cpu(), 0.0)
        v_pos = v_pos[..., [2, 1, 0]]
        v_pos = v_pos / (self.resolution - 1.0)
        return v_pos.cuda(), t_pos_idx.cuda()
bennyguo commented 8 months ago

@Daniel9D @chrisbward @yuhuangyue Are you using a virtual environment? Which kind of virtual environment are you using (virtualenv / venv)?

bennyguo commented 8 months ago

Could you please try setting up the environment with virtualenv instead of venv? For me it worked fine with virtualenv but encountered the same problem with venv.

nanjingzhouyu commented 8 months ago

Could you please try setting up the environment with virtualenv instead of venv? For me it worked fine with virtualenv but encountered the same problem with venv.

I create the venv in PyCharm; how do I do that?

rkfg commented 8 months ago

Try to clone https://github.com/tatsy/torchmcubes locally and do pip install -v . inside (with the venv active, of course). You might see the actual error that prevents it from compiling the CUDA version. In my case, the CUDA version used by torch (12.1) differed from the one installed on my system (11.8). I use conda btw, not venv, but in this case it doesn't matter. To reveal the actual error, open setup.py in the cloned repo and change except: to this:

except RuntimeError as e:
    print(e)

Then try pip install -v . again.
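
Along the same lines, a quick sanity check I'd add (my own sketch, not part of the steps above): compare the CUDA version PyTorch was built with against the toolkit visible on the system, since a mismatch like the one described here is a common reason the CUDA build is skipped.

    # Compare PyTorch's CUDA version with the system CUDA toolkit (nvcc).
    import subprocess
    import torch

    print("PyTorch built with CUDA:", torch.version.cuda)
    try:
        out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True)
        print(out.stdout)
    except FileNotFoundError:
        print("nvcc not found on PATH -- is the CUDA toolkit installed?")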

nanjingzhouyu commented 8 months ago

Try to clone https://github.com/tatsy/torchmcubes locally and do pip install -v . inside (with the venv active, of course). You might see the actual error that prevents it from compiling the CUDA version. In my case, the CUDA version used by torch (12.1) differed from the one installed on my system (11.8). I use conda btw, not venv, but in this case it doesn't matter. To reveal the actual error, open setup.py in the cloned repo and change except: to this:

except RuntimeError as e:
    print(e)

Then try pip install -v . again.

Where is the setup.py?

nanjingzhouyu commented 8 months ago

I found a tricky workaround.

You can switch to mc.mcubes_cpu to export your mesh.

Modify the code in models/isosurface.py:

    def forward(
        self,
        level: torch.FloatTensor,
    ) -> Tuple[torch.FloatTensor, torch.LongTensor]:
        level = -level.view(self.resolution, self.resolution, self.resolution)
        v_pos, t_pos_idx = self.mc_func(level.detach().cpu(), 0.0)
        v_pos = v_pos[..., [2, 1, 0]]
        v_pos = v_pos / (self.resolution - 1.0)
        return v_pos.cuda(), t_pos_idx.cuda()

Haha, that works indeed.

rkfg commented 8 months ago

Where is the setup.py?

In torchmcubes that you cloned locally in the first step.

rkfg commented 8 months ago

I managed to make it work. Basically, you need to use the same CUDA version as PyTorch, because torchmcubes doesn't provide binaries and they have to be built during installation. Errors are silently swallowed, so it's not obvious what's wrong. Additionally, CUDA 12.1 doesn't seem to be supported (as I found during manual installation, there's a cryptic error about C++ templates so I didn't even try to dig further), and you need GCC and G++ of version 11 at most.

So, to make it all work you need:

  • CUDA 11.8 installed locally
  • pytorch 2 with CUDA 11.8 installed in venv/conda
  • GCC/G++ 11

Otherwise torchmcubes silently compiles only for CPU.
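
A quick way to check what you actually ended up with (my own suggestion; it assumes the compiled extension is importable as torchmcubes_module, which is what the traceback at the top of this issue shows):

    # Check whether the installed torchmcubes exposes the CUDA kernel or only the CPU one.
    import torchmcubes_module as mc

    print("CUDA kernel present:", hasattr(mc, "mcubes_cuda"))
    print("CPU kernel present:", hasattr(mc, "mcubes_cpu"))

If the first line prints False, the build fell back to CPU-only.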

merecesarchviz commented 8 months ago

I found a tricky workaround.

You can switch to mc.mcubes_cpu to export your mesh.

Modify the code in models/isosurface.py:

    def forward(
        self,
        level: torch.FloatTensor,
    ) -> Tuple[torch.FloatTensor, torch.LongTensor]:
        level = -level.view(self.resolution, self.resolution, self.resolution)
        v_pos, t_pos_idx = self.mc_func(level.detach().cpu(), 0.0)
        v_pos = v_pos[..., [2, 1, 0]]
        v_pos = v_pos / (self.resolution - 1.0)
        return v_pos.cuda(), t_pos_idx.cuda()

Thanks mate, this works for me as well to get past that error!

zappazack commented 8 months ago

I managed to make it work. Basically, you need to use the same CUDA version as PyTorch, because torchmcubes doesn't provide binaries and they have to be built during installation. Errors are silently swallowed, so it's not obvious what's wrong. Additionally, CUDA 12.1 doesn't seem to be supported (as I found during manual installation, there's a cryptic error about C++ templates so I didn't even try to dig further), and you need GCC and G++ of version 11 at most.

So, to make it all work you need:

  • CUDA 11.8 installed locally
  • pytorch 2 with CUDA 11.8 installed in venv/conda
  • GCC/G++ 11

Otherwise torchmcubes silently compiles only for CPU.

Can you give a detailed step-by-step tutorial with commands?

rkfg commented 8 months ago

No, it's probably quite different between distros, and setting the GCC version isn't trivial due to the way it's packaged. If you don't know how to do all that, you'd better use the hack above to run the library on CPU.

Daniel9D commented 8 months ago

@Daniel9D @chrisbward @yuhuangyue Are you using a virtual environment? Which kind of virtual environment are you using (virtualenv / venv)?

-> conda

What @rkfg described solved it for me: pytorch 2 with CUDA 11.8 both in conda and on the machine.

throttlekitty commented 8 months ago

I was just able to build torchmcubes on Windows 11 with torch 2.2.1+cu121. I have MSVC 2019 installed and used its x64 Native Tools Command Prompt. Clone torchmcubes somewhere else, then from that directory, while in your venv, do the following. If you get a message about the CUDA environment not being found, check (with set CUDA_PATH) that it is pointing to your local install.

pip uninstall torchmcubes
python setup.py install

The problem I ran into was that it kept falling back to CPU, so the build failed. My problem was that I didn't have the cuDNN binaries installed, though I thought I had done that a while back. So I simply unpacked the cuDNN files into my local CUDA install dir.
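
In case it helps anyone else, here is a small check I'd run (my own sketch, not part of the original steps): it verifies that CUDA_PATH is set and that the cuDNN headers were actually unpacked into that toolkit directory.

    # Verify CUDA_PATH and look for cuDNN headers in the local toolkit install.
    # Newer cuDNN releases ship cudnn_version.h alongside cudnn.h.
    import os

    cuda_path = os.environ.get("CUDA_PATH")
    print("CUDA_PATH:", cuda_path)
    if cuda_path:
        for header in ("cudnn.h", "cudnn_version.h"):
            print(header, "present:", os.path.isfile(os.path.join(cuda_path, "include", header)))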

rkfg commented 8 months ago

I was just able to build torchmcubes on Windows 11 with torch 2.2.1+cu121

Indeed, it works with CUDA 12.1. My problem was with GCC; it's a long-standing issue in pybind:

    /mnt/2Tb/conda/triposr/lib/python3.10/site-packages/torch/include/pybind11/detail/../cast.h: In function ‘typename pybind11::detail::type_caster<typename pybind11::detail::intrinsic_type<T>::type>::cast_op_type<T> pybind11::detail::cast_op(make_caster<T>&)’:
    /mnt/2Tb/conda/triposr/lib/python3.10/site-packages/torch/include/pybind11/detail/../cast.h:45:120: error: expected template-name before ‘<’ token
       45 |     return caster.operator typename make_caster<T>::template cast_op_type<T>();
          |                                                                          ^
    /mnt/2Tb/conda/triposr/lib/python3.10/site-packages/torch/include/pybind11/detail/../cast.h:45:120: error: expected identifier before ‘<’ token
    /mnt/2Tb/conda/triposr/lib/python3.10/site-packages/torch/include/pybind11/detail/../cast.h:45:123: error: expected primary-expression before ‘>’ token
       45 |     return caster.operator typename make_caster<T>::template cast_op_type<T>();
          |                                                                            ^
    /mnt/2Tb/conda/triposr/lib/python3.10/site-packages/torch/include/pybind11/detail/../cast.h:45:126: error: expected primary-expression before ‘)’ token
       45 |     return caster.operator typename make_caster<T>::template cast_op_type<T>();

This is what happens with GCC/G++ 12. After downgrading to v11 I can build torchmcubes with CUDA 12.1 support. Perhaps this issue is fixed in the newer CUDA 12.2 as they state in the comments, but 12.1 is the current mainstream.
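
If you are not sure which compiler a build will pick up, a quick check (my own addition, assuming gcc/g++ are on PATH) is:

    # Print the default compiler versions; GCC/G++ 12 triggers the pybind11 error above.
    import subprocess

    for compiler in ("gcc", "g++"):
        out = subprocess.run([compiler, "--version"], capture_output=True, text=True)
        print(out.stdout.splitlines()[0])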

johndpope commented 8 months ago

@rkfg I don't know whose idea it was at Ubuntu to change the GCC version just a month back, with the new Ubuntu 24.04 dropping in a few weeks.

rkfg commented 8 months ago

@johndpope yes, it's pretty painful because it's not easy to switch between GCC versions by default, and every build system has a different way to detect/override it. Worse, no one even cared enough to register these using update-alternatives; I have no idea why. So what I do is manually add the GCC versions I use like this: update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-13 800 --slave /usr/bin/g++ g++ /usr/bin/g++-13 (800 is the priority, higher = more important; gcc-13 and g++-13 obviously should match). I do that for every version I have installed, then run update-alternatives --config gcc and choose the one I need. Both gcc and g++ change to the same version, and so far I've had no issues with this setup, except that these links might get rewritten when the packages are updated.

johndpope commented 8 months ago

Yes, I use Timeshift to recover from CUDA breaking. It will restore just your system, leaving files/docs intact; it has saved me countless days (when they make this kind of change, the video driver breaks, CUDA breaks, etc.). It was only when I compared the file changes that I cottoned on to this GCC change under the hood. Just throw in a spare drive and point the backups to it. https://github.com/linuxmint/timeshift

bennyguo commented 8 months ago

Hi everyone. This problem is due to torchmcubes being compiled without CUDA support.

To fix this issue, please first make sure that the CUDA toolkit is correctly installed and that its version matches the CUDA version of your PyTorch installation.

Then re-install torchmcubes by:

pip uninstall torchmcubes
pip install git+https://github.com/tatsy/torchmcubes.git

I've updated this information in the README. Have fun playing with TripoSR!

flowtyone commented 8 months ago

@bennyguo Please look into https://github.com/VAST-AI-Research/TripoSR/pull/26; it fixes this issue

pookiefoof commented 8 months ago

@bennyguo Please look into #26; it fixes this issue

Thanks! Will check how scikit's marching cubes performs.

kuynzereb commented 8 months ago

It is also possible to use https://github.com/pmneila/PyMCubes

Simply change

        except AttributeError:
            print("torchmcubes was not compiled with CUDA support, use CPU version instead.")
            v_pos, t_pos_idx = self.mc_func(level.detach().cpu(), 0.0)

to

        except AttributeError:
            print("torchmcubes was not compiled with CUDA support, use CPU version instead.")
            import mcubes  # PyMCubes
            import numpy as np  # needed for the dtype conversions below
            v_pos, t_pos_idx = mcubes.marching_cubes(level.detach().cpu().numpy(), 0.0)
            v_pos = torch.from_numpy(v_pos.astype(np.float32))
            t_pos_idx = torch.from_numpy(t_pos_idx.astype(np.int64))

in tsr/models/isosurface.py.

pjwaixingren commented 7 months ago

It is also possible to use https://github.com/pmneila/PyMCubes

Simply change

        except AttributeError:
            print("torchmcubes was not compiled with CUDA support, use CPU version instead.")
            v_pos, t_pos_idx = self.mc_func(level.detach().cpu(), 0.0)

to

        except AttributeError:
            print("torchmcubes was not compiled with CUDA support, use CPU version instead.")
            import mcubes  # PyMCubes
            import numpy as np  # needed for the dtype conversions below
            v_pos, t_pos_idx = mcubes.marching_cubes(level.detach().cpu().numpy(), 0.0)
            v_pos = torch.from_numpy(v_pos.astype(np.float32))
            t_pos_idx = torch.from_numpy(t_pos_idx.astype(np.int64))

in tsr/models/isosurface.py.

Great, it's working! And if you change the code like this, you will also get the color attribute :) (this assumes mcubes and numpy as np are imported at the top of isosurface.py)

    def forward(
        self,
        level: torch.FloatTensor,
    ) -> Tuple[torch.FloatTensor, torch.LongTensor]:
        level = -level.view(self.resolution, self.resolution, self.resolution)
        print("torchmcubes was not compiled with CUDA support, use CPU version instead.")
        v_pos, t_pos_idx = mcubes.marching_cubes(level.detach().cpu().numpy(), 0.0)
        v_pos = torch.from_numpy(v_pos.astype(np.float32))
        t_pos_idx = torch.from_numpy(t_pos_idx.astype(np.int64))
        v_pos = v_pos / (self.resolution - 1.0)
        return v_pos.to(level.device), t_pos_idx.to(level.device)
