facebookresearch / localrf

An algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
MIT License

Optimization failure due to data size #6

Open Yarroudh opened 1 year ago

Yarroudh commented 1 year ago

Hello, I'm trying to apply localrf to my data. I've been able to run the pre-processing commands to estimate the flow and depth maps. However, once I run the optimization script, I get this error:

Traceback (most recent call last):
  File ".\localTensoRF\train.py", line 609, in <module>
    reconstruction(args)
  File ".\localTensoRF\train.py", line 190, in reconstruction
    test_dataset = LocalRFDataset(
  File "C:\Users\Administrateur\Desktop\COLMAP-Reconstruction\NeRF\localrf\localTensoRF\dataLoader\localrf_dataset.py", line 115, in __init__
    self.activate_frames(n_init_frames)
  File "C:\Users\Administrateur\Desktop\COLMAP-Reconstruction\NeRF\localrf\localTensoRF\dataLoader\localrf_dataset.py", line 125, in activate_frames
    self.read_meta()
  File "C:\Users\Administrateur\Desktop\COLMAP-Reconstruction\NeRF\localrf\localTensoRF\dataLoader\localrf_dataset.py", line 240, in read_meta
    self.all_invdepths = np.stack(all_invdepths, 0)
  File "<__array_function__ internals>", line 200, in stack
  File "C:\Users\Administrateur\anaconda3\envs\localrf\lib\site-packages\numpy\core\shape_base.py", line 471, in stack
    return _nx.concatenate(expanded_arrays, axis=axis, out=out,
  File "<__array_function__ internals>", line 200, in concatenate
numpy.core._exceptions.MemoryError: Unable to allocate 3.86 GiB for an array with shape (500, 1080, 1920) and data type float32
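
For context, the requested allocation matches the array shape in the message: one float32 inverse-depth map per frame, stacked at full resolution. A quick sanity check (an illustrative sketch, not part of the original logs):

import numpy as np

# 500 frames of 1080x1920 float32 inverse-depth maps, stacked into a single array
n_frames, height, width = 500, 1080, 1920
bytes_needed = n_frames * height * width * np.dtype(np.float32).itemsize
print(f"{bytes_needed / 2**30:.2f} GiB")  # ~3.86 GiB, matching the error message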
Yarroudh commented 1 year ago

I was able to resolve this error by downsampling my images.

The optimization command looks like this: python localTensoRF\train.py --datadir ../scene --logdir ../scene/log --downsampling 2
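
With --downsampling 2 each map has a quarter of the pixels, so the same stack takes roughly a quarter of the memory (a rough sketch, assuming the inverse-depth maps are stored at the downsampled resolution):

import numpy as np

# Same 500 frames at half resolution in each dimension (downsampling factor 2)
n_frames, height, width = 500, 1080 // 2, 1920 // 2
bytes_needed = n_frames * height * width * np.dtype(np.float32).itemsize
print(f"{bytes_needed / 2**30:.2f} GiB")  # ~0.97 GiB, a 4x reduction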

ameuleman commented 1 year ago

Hello, the test set seems to be too large for long sequences. Within the next two weeks, I will add an option to change the frequency of test frames and an option to skip frames.

Yarroudh commented 1 year ago

@ameuleman Thanks for your fast reply. I'm wondering if there is any way to reduce the optimization time.

ameuleman commented 1 year ago

Reducing the number of iterations and skipping frames will make the optimization faster. I will add a parameter that speeds up all schedules and learning rates. Frame skipping should be done before preprocessing, since the optical flow is estimated between consecutive frames.
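
To illustrate why the order matters, here is a minimal sketch (hypothetical file names; the actual pairing logic in run_flow.py may differ). Flow is estimated between consecutive entries of the frame list that preprocessing sees, so the list must be subsampled before the pairs are formed:

# Hypothetical illustration: subsample the frame list *before* forming flow pairs.
frames = [f"{i:04d}.jpg" for i in range(12)]
step = 4

kept = frames[::step]                   # frames that will actually be optimized
pairs = list(zip(kept[:-1], kept[1:]))  # flow pairs consistent with the kept frames
print(pairs)  # [('0000.jpg', '0004.jpg'), ('0004.jpg', '0008.jpg')]

# Skipping frames only after preprocessing would leave flow computed for pairs
# such as (0000, 0001) that are no longer consecutive in the optimized sequence.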

Yarroudh commented 1 year ago

Thanks for your reply. One last question: after optimization, does the output include the camera poses? If so, in which format? Thanks for your help and the amazing work.

ameuleman commented 1 year ago

Yes, the format should match the NeRF synthetic dataset: https://drive.google.com/file/d/13VLTNH2oWu-hNSx0USM9klvrI0cd4T0v/view?usp=drivesdk
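
In that convention, the poses usually live in a transforms*.json file with a camera_angle_x field and a 4x4 camera-to-world transform_matrix per frame. A minimal reading sketch, assuming localrf's output follows this layout (the file name here is hypothetical):

import json
import numpy as np

# Hypothetical file name; the exact name written by localrf may differ.
with open("transforms.json") as f:
    meta = json.load(f)

fov_x = meta["camera_angle_x"]  # horizontal field of view in radians
poses = {
    frame["file_path"]: np.array(frame["transform_matrix"], dtype=np.float32)  # 4x4 camera-to-world
    for frame in meta["frames"]
}
print(fov_x, len(poses))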

ameuleman commented 1 year ago

Hi, I added the test_frame_every argument to reduce the frequency of test frames and decrease CPU memory usage. I also added prog_speedup_factor and refinement_speedup_factor to reduce the number of iterations and speed up the optimization at the cost of some quality. Here is an aggressive example:

python localTensoRF/train.py --datadir ${SCENE_DIR} --logdir ${LOG_DIR} --fov ${FOV} --test_frame_every 50 --downsampling 2.0 --prog_speedup_factor 2.0 --refinement_speedup_factor 4.0

Milder parameters could be --downsampling 1.5 --prog_speedup_factor 1.5 --refinement_speedup_factor 2.0. Increasing these parameters reduces optimization and rendering time at the cost of reconstruction quality.
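
The naming suggests these factors divide the iteration budgets of the progressive optimization and of the final refinement stage. A hypothetical sketch of that kind of scaling (illustrative only, with made-up base iteration counts, not the actual localrf code):

# Hypothetical illustration of schedule scaling; localrf's real logic may differ.
base_prog_iters = 600      # assumed progressive iterations per local radiance field
base_refine_iters = 4000   # assumed final refinement iterations

prog_speedup_factor = 2.0
refinement_speedup_factor = 4.0

prog_iters = int(base_prog_iters / prog_speedup_factor)            # 300
refine_iters = int(base_refine_iters / refinement_speedup_factor)  # 1000
print(prog_iters, refine_iters)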

In addition, I added the frame_step argument to skip frames, which is appropriate if the video is not fast-paced. Here is an example that keeps every fourth frame, effectively speeding up the video four times:

python scripts/run_flow.py --data_dir ${SCENE_DIR} --frame_step 4
python localTensoRF/train.py --datadir ${SCENE_DIR} --logdir ${LOG_DIR} --fov ${FOV} --frame_step 4