pengHTYX / Era3D

GNU Affero General Public License v3.0

Instant-NSR Mesh Extraction fails; I'm on Windows #27

Open tomyu168 opened 4 months ago

tomyu168 commented 4 months ago

```
bin D:\anaconda3\Lib\site-packages\bitsandbytes\libbitsandbytes_cuda121.dll
Update finite_difference_eps to 0.027204705103003882
Traceback (most recent call last):
  File "F:\Era3D\instant-nsr-pl\launch.py", line 134, in <module>
    main()
  File "F:\Era3D\instant-nsr-pl\launch.py", line 114, in main
    trainer.fit(system, datamodule=dm)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 608, in fit
    call._call_and_handle_interrupt(
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\trainer\call.py", line 38, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 650, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1103, in _run
    results = self._run_stage()
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1182, in _run_stage
    self._run_train()
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1205, in _run_train
    self.fit_loop.run()
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 267, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 213, in advance
    batch_output = self.batch_loop.run(kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\batch\training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(optimizers, kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 202, in advance
    result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 249, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 370, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1347, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\core\module.py", line 1744, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\core\optimizer.py", line 169, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\strategies\strategy.py", line 234, in optimizer_step
    return self.precision_plugin.optimizer_step(
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\plugins\precision\native_amp.py", line 75, in optimizer_step
    closure_result = closure()
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 149, in __call__
    self._result = self.closure(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 135, in closure
    step_output = self._step_fn()
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\loops\optimization\optimizer_loop.py", line 419, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1485, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\strategies\dp.py", line 134, in training_step
    return self.model(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\torch\nn\parallel\data_parallel.py", line 183, in forward
    return self.module(*inputs[0], **module_kwargs[0])
  File "D:\anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\overrides\data_parallel.py", line 77, in forward
    output = super().forward(*inputs, **kwargs)
  File "D:\anaconda3\Lib\site-packages\pytorch_lightning\overrides\base.py", line 98, in forward
    output = self._forward_module.training_step(*inputs, **kwargs)
  File "F:\Era3D\instant-nsr-pl\systems\neus_ortho.py", line 166, in training_step
    train_num_rays = int(self.train_num_rays * (self.train_num_samples / out['num_samples_full'].sum().item()))
ZeroDivisionError: division by zero
Epoch 0: : 0it [07:08, ?it/s]
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
```
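The crash comes from the last frame: `out['num_samples_full'].sum().item()` is 0 on this step, so the dynamic ray-count update divides by zero. As a minimal illustrative sketch (a hypothetical guard, not the project's official fix; the function name and defaults are mine), the update can simply be skipped when no samples were produced:

```python
def update_train_num_rays(train_num_rays: int,
                          train_num_samples: int,
                          num_samples_full_sum: float) -> int:
    """Recompute the per-step ray budget, guarding against an empty batch.

    num_samples_full_sum corresponds to out['num_samples_full'].sum().item()
    in neus_ortho.py's training_step; when it is 0 (as in the traceback
    above), dividing by it raises ZeroDivisionError.
    """
    if num_samples_full_sum <= 0:
        # No samples this step: keep the previous ray count instead of dividing.
        return train_num_rays
    return int(train_num_rays * (train_num_samples / num_samples_full_sum))
```

Note this only suppresses the symptom; if the sampler keeps returning 0 samples (e.g. a dataloader problem on Windows), training still will not make progress.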

I looked at https://github.com/xxlong0/Wonder3D/issues/47 — Wonder3D has the same kind of problem, and so does the Instant-NSR project. I'm not sure whether this project is aware that Instant-NSR has a branch with a "fix data win" (Windows data fix). Hoping this can be resolved, thanks.

![image](https://github.com/pengHTYX/Era3D/assets/35031186/326c391b-a88b-490f-ab67-cae265e0e4e8)
Jiabit55 commented 2 months ago

I have the same problem. Did you manage to solve it?

Jiabit55 commented 2 months ago

Are you also working on this project? https://github.com/pengHTYX/Era3D

tomyu168 commented 2 months ago

> Are you also working on this project? https://github.com/pengHTYX/Era3D

I'm just doing this for fun, not a professional. I've tried Wonder3D, TripoSR, Era3D, and a few others whose names I forget; currently I'm using Unique3D.

Jiabit55 commented 2 months ago

> Are you also working on this project? https://github.com/pengHTYX/Era3D
>
> I'm just doing this for fun, not a professional. I've tried Wonder3D, TripoSR, Era3D, and a few others whose names I forget; currently I'm using Unique3D.

Are you running it on Windows? I changed the code following Instant-NSR, but I still get this error.

tomyu168 commented 2 months ago

> Are you also working on this project? https://github.com/pengHTYX/Era3D
>
> I'm just doing this for fun, not a professional. I've tried Wonder3D, TripoSR, Era3D, and a few others whose names I forget; currently I'm using Unique3D.
>
> Are you running it on Windows? I changed the code following Instant-NSR, but I still get this error.

Yes.