Closed: Deng-King closed this issue 11 months ago.
Hi @Deng-King, Thank you very much for providing the corrections to the config and readme files. Much appreciated! I have updated them now.
Regarding your question, I will look into it. It sounds indeed very strange that `batch_rays_o`, `batch_rays_d`, `batch_gt_depth`, and `batch_gt_color` in `Mapper.py` are set to zero; that should not happen.
Just as a sanity check: which version of the faiss library are you using?
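One quick way to check this from Python (a minimal stdlib sketch; it only assumes faiss was installed via pip, as in env.yaml, under the name `faiss-gpu` or `faiss-cpu`):

```python
# Minimal sketch to print the installed faiss package version.
# Assumes a pip install; the package name may be faiss-gpu or faiss-cpu.
from importlib import metadata

def installed_version(pkg: str) -> str:
    """Return the installed version string, or 'not installed'."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return "not installed"

for name in ("faiss-gpu", "faiss-cpu"):
    print(f"{name}: {installed_version(name)}")
```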
The version of faiss is 1.7.2, which is the same as the version specified in the env.yaml file.
```
...
exceptiongroup 1.1.3 pypi_0 pypi
executing 2.0.0 pypi_0 pypi
faiss-gpu 1.7.2 pypi_0 pypi
fastjsonschema 2.16.2 pypi_0 pypi
ffmpeg 4.3 hf484d3e_0 pytorch
...
```
I cannot replicate this behavior. I would therefore hypothesize that your problem is related either to the environment or to the hardware you are running it on; for example, we never tried running the code under WSL 2.0. If you have access to a computer running Linux, I would try that. Best of luck with it, and let me know if you have any other questions on this.
I’ll try it on a Linux server later. It would be better if you could provide a Docker image of the project, so that I can rule out any problems stemming from the code environment. For the time being, my virtual environment seems to be the same as env.yaml, and the primary differences are the OS and hardware.
If I had to guess, I would say that your environment is not the issue (multiple people have already installed the environment and got it working on Linux without a problem). I would therefore not spend the time right now making a Docker image before you have tried the pipeline on a Linux machine. I hope that is OK with you.
Hi @eriksandstroem,
Thank you for your nice work! However, I noticed a few typos in the dataset YAML files and in the README instructions.

When I ran the command `python run.py configs/Replica/room0.yaml`, I received the error `FileNotFoundError: [Errno 2] No such file or directory: 'Datasets/Replica/room0/traj.txt'`. In the project directory the path should be `.../datasets/...`, which can be corrected by replacing all instances of `input_folder: Datasets/...` with `input_folder: datasets/...` in the YAML files under the `./configs/Replica/` directory.

Additionally, the README says to use `conda env create -f environment.yaml` and `conda activate point-slam-env` to create the virtual environment for this project. However, according to the file `env.yaml`, the correct commands should be `conda env create -f env.yaml` and `conda activate point-slam`.

I hope this helps!
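For the YAML paths, a one-liner like the following applies the replacement to all Replica configs at once (shown here on a throwaway copy; in the repo you would run the `sed` line directly on `configs/Replica/*.yaml`; GNU sed assumed):

```shell
# Demonstrate the Datasets/ -> datasets/ fix on a temporary file.
tmp=$(mktemp -d)
printf 'input_folder: Datasets/Replica/room0\n' > "$tmp/room0.yaml"

# The actual fix; in the repo: sed -i 's|Datasets/|datasets/|g' configs/Replica/*.yaml
sed -i 's|Datasets/|datasets/|g' "$tmp"/*.yaml

cat "$tmp/room0.yaml"
```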
After correcting the typos above, I re-ran the command `python run.py configs/Replica/room0.yaml` and encountered the following error.

After extensive debugging, I found that `batch_rays_o`, `batch_rays_d`, `batch_gt_depth`, and `batch_gt_color` in `Mapper.py` are set to zero, while keeping their original shapes, after being passed to `_ = self.npc.add_neural_points(batch_rays_o, batch_rays_d, batch_gt_depth, batch_gt_color, dynamic_radius=self.dynamic_r_add[j, i] if self.use_dynamic_radius else None)` (line 317 in `Mapper.py`).

For instance, I printed them to the screen by making some code modifications in `Mapper.py`:
and in `neural_point.py`:
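Roughly, the check I added amounts to a small helper like this sketch (`is_all_zero` and `describe` are names I made up for illustration, not code from the repository; shown with numpy stand-ins for the real ray batches):

```python
import numpy as np

def is_all_zero(t) -> bool:
    """True when every element is exactly zero.
    Works for numpy arrays and torch tensors alike (only needs abs/sum)."""
    return float(abs(t).sum()) == 0.0

def describe(name, t):
    # Print the shape plus a zero flag, mirroring the debug output I added.
    print(f"{name}: shape={tuple(t.shape)}, all_zero={is_all_zero(t)}")

describe("batch_rays_o", np.zeros((100, 3)))      # all_zero=True
describe("batch_gt_color", np.random.rand(100, 3))
```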
We can see that the process terminates at `self.index.train(torch.tensor(self._cloud_pos, device=self.device))` with the following screen output. I have spent 8 hours on this and have no idea why it is happening (neither do New Bing nor ChatGPT). :(
Thank you for taking the time to check this issue. I deeply appreciate any help you can provide.
(By the way, the code is running on WSL 2.0 with Ubuntu 22.04.2 LTS and an RTX 3060 Ti.)