@jorgepradoh I'm running into this issue a lot too; I'm not sure what it's caused by.
Edit 1: It looks like it initializes a fixed-size set of vectors storing the state of the trajectory, etc. Maybe specifying a longer buffer will suffice? I'll update this comment with the answer.
Edit 2: Yes, that's the solution. I'll comment back here if somewhere down the line this turns out not to be the case.
Edit 3: It is the solution. The only drawback is that during global bundle adjustment a larger buffer requires a GPU with more memory. I've been using NVIDIA A10s (24 GB) and have been comfortably processing videos with 13K frames using a buffer of 1100.
tl;dr: the solution is to specify a larger buffer, e.g. --buffer 1024.
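For anyone curious why this happens, here's a minimal sketch of the failure mode, assuming the tracker preallocates its per-keyframe state at a fixed size. The class and field names below are illustrative only, not DROID-SLAM's actual code:

```python
import torch

class KeyframeBuffer:
    """Illustrative sketch of a fixed-size, preallocated state buffer."""

    def __init__(self, buffer=512, ht=64, wd=96):
        # All per-keyframe state is allocated up front, so GPU memory
        # scales with `buffer` rather than with the length of the video.
        self.counter = 0
        self.tstamp = torch.zeros(buffer)        # frame timestamps
        self.poses = torch.zeros(buffer, 7)      # camera poses
        self.disps = torch.ones(buffer, ht, wd)  # inverse depth maps

    def append(self, tstamp, pose, disp):
        # Once more keyframes are selected than there are slots, this
        # index runs off the end of every tensor -> "index out of bounds".
        ix = self.counter
        self.tstamp[ix] = tstamp
        self.poses[ix] = pose
        self.disps[ix] = disp
        self.counter += 1
```

Note that the buffer holds keyframes rather than raw input frames, which is presumably why 13K-frame videos fit in a buffer of 1100.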
Thanks for your suggestion. My dataset has 4541 frames and I set the buffer to 1024, but the "out of bounds" error shows up again. Does this mean the buffer size should be larger than the number of frames?
Hello, I am trying to run DROID-SLAM on a custom dataset; however, after a certain number of iterations (between 2-3k) I get the following error:
I am running this through a bash script on a server with the following parser arguments:
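Roughly along these lines (the paths and values shown here are placeholders, not my exact arguments):

```bash
#!/bin/bash
# Placeholder invocation for illustration only -- these paths and values
# are not the actual arguments from the run described above.
python demo.py \
    --imagedir=/path/to/dataset/rgb \
    --calib=/path/to/calib.txt \
    --weights=droid.pth \
    --stride=1
```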
The dataset is a sequence of 688x512 RGB images, and I am using the weights file provided via the Google Drive download link in the demo section. The GPU being used is an NVIDIA Tesla V100.
Previous runs of inference on the example demos execute without problems. Does this mean the algorithm cannot fully perform SLAM on this dataset, or is it something in the code that I should modify?
The corresponding output log is attached; thanks in advance.
output_d_mapping_341.log