facebookresearch / consistent_depth

We estimate dense, flicker-free, geometrically consistent depth from monocular video, for example hand-held cell phone video.
MIT License
1.61k stars 236 forks

AssertionError on "Compute per-frame scales" step (frame_names is None) #36

Open aeverless opened 3 years ago

aeverless commented 3 years ago

The Problem

While doing a customized run with python main.py --video_file {...} --path {...} --batch_size 2 --make_video, I'm getting the following error in the terminal:

************************************
****  Compute per-frame scales  ****
************************************
Traceback (most recent call last):
  File "main.py", line 13, in <module>
    dp.process(params)
  File "/home/aeverless/Desktop/content/consistent_depth/process.py", line 117, in process
    return self.pipeline(params)
  File "/home/aeverless/Desktop/content/consistent_depth/process.py", line 62, in pipeline
    valid_frames = calibrate_scale(self.video, self.out_dir, frame_range, params)
  File "/home/aeverless/Desktop/content/consistent_depth/scale_calibration.py", line 242, in calibrate_scale
    os.path.dirname(scaled_depth_fmt), ".raw"
  File "/home/aeverless/Desktop/content/consistent_depth/scale_calibration.py", line 142, in check_frames
    assert frame_names is not None
AssertionError

Setup

I'm running Pop!_OS 20.10 with the following setup:

  • CPU: i5-8400
  • GPU: GTX 1660 Super
  • RAM: 16GB DDR4
  • CUDA: 11.2.0
  • Graphic Driver: Nvidia 460.32.03

Context

I have followed the steps in the Readme and the Google Colab notebook, successfully reproducing the result shown in the demo (although with a lowered batch size). I therefore think the issue lies in Colmap or the way I have it installed, which was by running install_colmap_ubuntu.sh.

Installing it was painful, however. When I tried to reinstall it by a different method, my system seemed to hang forever on the make -j step (the same happened while building ceres-solver) and I had to cold reboot (REISUB) the machine. At that point I scrapped that method and stuck with the script provided with this repository.

I have also run the setup/install shell scripts in flownet2, and even recloned it from the flownet2-pytorch repo afterwards, trying to fix this problem.

Right now I am at the point where Colmap is installed and seems to work fine (at least it responds to -h, and gui opens the GUI), but frame_names still seems to be None. I have reviewed some of the code to figure out where the problem lies: check_frames takes the frame names derived from the given frame_range (which is unspecified in my case), asserts that they are not None, and based on this check the pipeline creates several folders in the hierarchical directory and proceeds with the work. My current result folder looks like this: colmap_dense color_down color_down_png color_flow color_full depth_mc frames.txt R_hierarchical2_mc
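
To illustrate, the failing check boils down to something like this (my paraphrase of check_frames in scale_calibration.py; the real code may differ):

import os

def check_frames(frame_dir, ext):
    # Paraphrase: frame_names stays None when frame_dir contains no files
    # with the expected extension, i.e. when Colmap's dense reconstruction
    # wrote nothing for the requested frame range.
    frame_names = None
    if os.path.isdir(frame_dir):
        files = sorted(f for f in os.listdir(frame_dir) if f.endswith(ext))
        if files:
            frame_names = files
    assert frame_names is not None  # the AssertionError in the traceback above
    return frame_names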

R_hierarchical2_mc contains a B0.1_R1.0_PL1-0_LR0.0004_BS2_Oadam folder, inside which is an empty checkpoints directory.

mattbev commented 3 years ago

I'm having this same issue @aeverless, did you find a solution?

aeverless commented 3 years ago

@mattbev No, I didn't. I still believe the issue lies in Colmap or the way I installed it (since the demo completed successfully), and I'll try something else, but for now I'm hoping the devs will answer. If I do get my hands on a solution, I'll of course post it here.

Edit: clarifications

aeverless commented 3 years ago

@mattbev What are you running, though? It might be apt to specify your setup, since it is conceivable that we just happen to be unlucky with our hardware or software choices.

mattbev commented 3 years ago

@aeverless I left an issue here that describes my specific setup and error. Thanks for the update!

mattbev commented 3 years ago

@aeverless I found a workaround for this issue. For me, I believe the error is the result of a colmap GPU threading issue, so my workaround is to explicitly pin colmap to a single GPU, disallowing GPU threading.

If you edit the dense() function in tools/colmap_processor.py and insert:

 cmd.extend(['--PatchMatchStereo.gpu_index', '0'])

before the final run(cmd), it resolves my issue. This addition passes the argument to colmap's patch_match_stereo operation.
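
For reference, the insertion point looks roughly like this (a simplified sketch; the names and the remaining arguments in the actual dense() are almost certainly different):

import subprocess

def dense(workspace_path, colmap_bin="colmap"):
    # Simplified sketch of dense() in tools/colmap_processor.py;
    # only the part relevant to the workaround is shown.
    cmd = [
        colmap_bin, "patch_match_stereo",
        "--workspace_path", workspace_path,
    ]
    # The workaround: pin patch match stereo to GPU 0 instead of letting
    # colmap thread across all visible GPUs (the default gpu_index, -1,
    # uses every CUDA device).
    cmd.extend(['--PatchMatchStereo.gpu_index', '0'])
    subprocess.check_call(cmd)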

Hope this helps!

Edit: However, I now get a segfault during the later Fine-Tuning step, but I assume this is unrelated.

mattbev commented 3 years ago

The later error in the Fine-Tuning step is an issue with the Mannequin Challenge (mc) model. If I specify --model_type "monodepth2", I don't encounter this issue.
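
For example, with the same placeholder paths as the command at the top of this issue, the run becomes:

python main.py --video_file {...} --path {...} --batch_size 2 --model_type monodepth2 --make_video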

aeverless commented 3 years ago

I'm glad you've found a solution for your problem, but after trying it out I found that it doesn't work for me; the error persists exactly as described in this issue.

BaiZS commented 2 years ago

Hello, I would like to ask whether this problem has been solved. I have encountered the same problem, and cmd.extend(['--PatchMatchStereo.gpu_index', '0']) does not work for me.