nerfstudio-project / nerfstudio

A collaboration friendly studio for NeRFs
https://docs.nerf.studio
Apache License 2.0

Out of memory for 360 videos #2006

Closed jrubiohervas closed 1 year ago

jrubiohervas commented 1 year ago

I am trying to generate 3D reconstructions from Insta360 videos; a typical video is 5 minutes long. I first run:

ns-process-data video --camera-type equirectangular --images-per-equirect 8 --num-frames-target 1000 --crop-factor 0 0.2 0 0 --data data/360 --output-dir outputs/360

Then I try to train it using:

ns-train nerfacto --data outputs/360

However, it always fails with out-of-memory errors. I am running it on Windows with an Intel(R) Core(TM) i5-10500H CPU @ 2.50GHz and 16.0 GB of RAM.

The usual message I get is:

(nerfstudio) C:\Users\User>ns-train nerfacto --data outputs/360
[09:46:19] Using --data alias for --data.pipeline.datamanager.data  train.py:235
──────────────────────────────────────────────────────── Config ────────────────────────────────────────────────────────
TrainerConfig(
    _target=<class 'nerfstudio.engine.trainer.Trainer'>,
    output_dir=WindowsPath('outputs'),
    method_name='nerfacto',
    experiment_name=None,
    project_name='nerfstudio-project',
    timestamp='2023-05-26_094619',
    machine=MachineConfig(seed=42, num_gpus=1, num_machines=1, machine_rank=0, dist_url='auto'),
    logging=LoggingConfig(
        relative_log_dir=WindowsPath('.'),
        steps_per_log=10,
        max_buffer_size=20,
        local_writer=LocalWriterConfig(
            _target=<class 'nerfstudio.utils.writer.LocalWriter'>,
            enable=True,
            stats_to_track=(
                <EventName.ITER_TRAIN_TIME: 'Train Iter (time)'>,
                <EventName.TRAIN_RAYS_PER_SEC: 'Train Rays / Sec'>,
                <EventName.CURR_TEST_PSNR: 'Test PSNR'>,
                <EventName.VIS_RAYS_PER_SEC: 'Vis Rays / Sec'>,
                <EventName.TEST_RAYS_PER_SEC: 'Test Rays / Sec'>,
                <EventName.ETA: 'ETA (time)'>
            ),
            max_log_size=10
        ),
        profiler='basic'
    ),
    viewer=ViewerConfig(
        relative_log_filename='viewer_log_filename.txt',
        websocket_port=None,
        websocket_port_default=7007,
        websocket_host='0.0.0.0',
        num_rays_per_chunk=32768,
        max_num_display_images=512,
        quit_on_train_completion=False,
        image_format='jpeg',
        jpeg_quality=90
    ),
    pipeline=VanillaPipelineConfig(
        _target=<class 'nerfstudio.pipelines.base_pipeline.VanillaPipeline'>,
        datamanager=VanillaDataManagerConfig(
            _target=<class 'nerfstudio.data.datamanagers.base_datamanager.VanillaDataManager'>,
            data=WindowsPath('outputs/360'),
            camera_optimizer=CameraOptimizerConfig(
                _target=<class 'nerfstudio.cameras.camera_optimizers.CameraOptimizer'>,
                mode='SO3xR3',
                position_noise_std=0.0,
                orientation_noise_std=0.0,
                optimizer=AdamOptimizerConfig(
                    _target=<class 'torch.optim.adam.Adam'>, lr=0.0006, eps=1e-08, max_norm=None, weight_decay=0.01
                ),
                scheduler=ExponentialDecaySchedulerConfig(
                    _target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
                    lr_pre_warmup=1e-08,
                    lr_final=6e-06,
                    warmup_steps=0,
                    max_steps=200000,
                    ramp='cosine'
                ),
                param_group='camera_opt'
            ),
            dataparser=NerfstudioDataParserConfig(
                _target=<class 'nerfstudio.data.dataparsers.nerfstudio_dataparser.Nerfstudio'>,
                data=WindowsPath('.'),
                scale_factor=1.0,
                downscale_factor=None,
                scene_scale=1.0,
                orientation_method='up',
                center_method='poses',
                auto_scale_poses=True,
                train_split_fraction=0.9,
                depth_unit_scale_factor=0.001
            ),
            train_num_rays_per_batch=4096,
            train_num_images_to_sample_from=-1,
            train_num_times_to_repeat_images=-1,
            eval_num_rays_per_batch=4096,
            eval_num_images_to_sample_from=-1,
            eval_num_times_to_repeat_images=-1,
            eval_image_indices=(0,),
            collate_fn=<function nerfstudio_collate at 0x000002A1A7087EE0>,
            camera_res_scale_factor=1.0,
            patch_size=1
        ),
        model=NerfactoModelConfig(
            _target=<class 'nerfstudio.models.nerfacto.NerfactoModel'>,
            enable_collider=True,
            collider_params={'near_plane': 2.0, 'far_plane': 6.0},
            loss_coefficients={'rgb_loss_coarse': 1.0, 'rgb_loss_fine': 1.0},
            eval_num_rays_per_chunk=32768,
            near_plane=0.05,
            far_plane=1000.0,
            background_color='last_sample',
            hidden_dim=64,
            hidden_dim_color=64,
            hidden_dim_transient=64,
            num_levels=16,
            max_res=2048,
            log2_hashmap_size=19,
            num_proposal_samples_per_ray=(256, 96),
            num_nerf_samples_per_ray=48,
            proposal_update_every=5,
            proposal_warmup=5000,
            num_proposal_iterations=2,
            use_same_proposal_network=False,
            proposal_net_args_list=[
                {'hidden_dim': 16, 'log2_hashmap_size': 17, 'num_levels': 5, 'max_res': 128, 'use_linear': False},
                {'hidden_dim': 16, 'log2_hashmap_size': 17, 'num_levels': 5, 'max_res': 256, 'use_linear': False}
            ],
            proposal_initial_sampler='piecewise',
            interlevel_loss_mult=1.0,
            distortion_loss_mult=0.002,
            orientation_loss_mult=0.0001,
            pred_normal_loss_mult=0.001,
            use_proposal_weight_anneal=True,
            use_average_appearance_embedding=True,
            proposal_weights_anneal_slope=10.0,
            proposal_weights_anneal_max_num_iters=1000,
            use_single_jitter=True,
            predict_normals=False,
            disable_scene_contraction=False,
            use_gradient_scaling=False
        )
    ),
    optimizers={
        'proposal_networks': {
            'optimizer': AdamOptimizerConfig(
                _target=<class 'torch.optim.adam.Adam'>, lr=0.01, eps=1e-15, max_norm=None, weight_decay=0
            ),
            'scheduler': ExponentialDecaySchedulerConfig(
                _target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
                lr_pre_warmup=1e-08,
                lr_final=0.0001,
                warmup_steps=0,
                max_steps=200000,
                ramp='cosine'
            )
        },
        'fields': {
            'optimizer': AdamOptimizerConfig(
                _target=<class 'torch.optim.adam.Adam'>, lr=0.01, eps=1e-15, max_norm=None, weight_decay=0
            ),
            'scheduler': ExponentialDecaySchedulerConfig(
                _target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
                lr_pre_warmup=1e-08,
                lr_final=0.0001,
                warmup_steps=0,
                max_steps=200000,
                ramp='cosine'
            )
        }
    },
    vis='viewer',
    data=WindowsPath('outputs/360'),
    relative_model_dir=WindowsPath('nerfstudio_models'),
    steps_per_save=2000,
    steps_per_eval_batch=500,
    steps_per_eval_image=500,
    steps_per_eval_all_images=25000,
    max_num_iterations=30000,
    mixed_precision=True,
    use_grad_scaler=False,
    save_only_latest_checkpoint=True,
    load_dir=None,
    load_step=None,
    load_config=None,
    load_checkpoint=None,
    log_gradients=False
)
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Saving config to: outputs\360\nerfacto\2023-05-26_094619\config.yml  experiment_config.py:128
Saving checkpoints to: outputs\360\nerfacto\2023-05-26_094619\nerfstudio_models  trainer.py:136
Auto image downscale factor of 1  nerfstudio_dataparser.py:336
[09:46:20] Skipping 0 files in dataset split train.  nerfstudio_dataparser.py:163
Skipping 0 files in dataset split val.  nerfstudio_dataparser.py:163
Setting up training dataset...
Caching all 3589 images.  Warning: If you run out of memory, try reducing the number of images to sample from.
Exception in worker
Traceback (most recent call last):
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\concurrent\futures\thread.py", line 78, in _worker
MemoryError
Traceback (most recent call last):
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\User\anaconda3\envs\nerfstudio\Scripts\ns-train.exe\__main__.py", line 7, in <module>
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\scripts\train.py", line 260, in entrypoint
    main(
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\scripts\train.py", line 246, in main
    launch(
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\scripts\train.py", line 185, in launch
    main_func(local_rank=0, world_size=world_size, config=config)
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\scripts\train.py", line 99, in train_loop
    trainer.setup()
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\engine\trainer.py", line 149, in setup
    self.pipeline = self.config.pipeline.setup(
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\configs\base_config.py", line 57, in setup
    return self._target(self, **kwargs)
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\pipelines\base_pipeline.py", line 242, in __init__
    self.datamanager: VanillaDataManager = config.datamanager.setup(
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\configs\base_config.py", line 57, in setup
    return self._target(self, **kwargs)
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\datamanagers\base_datamanager.py", line 398, in __init__
    super().__init__()
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\datamanagers\base_datamanager.py", line 175, in __init__
    self.setup_train()
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\datamanagers\base_datamanager.py", line 439, in setup_train
    self.train_image_dataloader = CacheDataloader(
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\utils\dataloaders.py", line 81, in __init__
    self.cached_collated_batch = self._get_collated_batch()
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\utils\dataloaders.py", line 119, in _get_collated_batch
    batch_list = self._get_batch_list()
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\utils\dataloaders.py", line 113, in _get_batch_list
    batch_list.append(res.result())
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\concurrent\futures\_base.py", line 437, in result
    return self.__get_result()
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\concurrent\futures\_base.py", line 389, in __get_result
    raise self._exception
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\concurrent\futures\thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\datasets\base_dataset.py", line 121, in __getitem__
    data = self.get_data(image_idx)
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\datasets\base_dataset.py", line 99, in get_data
    image = self.get_image(image_idx)
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\datasets\base_dataset.py", line 85, in get_image
    image = torch.from_numpy(self.get_numpy_image(image_idx).astype("float32") / 255.0)
  File "C:\Users\User\anaconda3\envs\nerfstudio\lib\site-packages\nerfstudio\data\datasets\base_dataset.py", line 71, in get_numpy_image
    image = np.array(pil_image, dtype="uint8")  # shape is (h, w) or (h, w, 3 or 4)
numpy.core._exceptions.MemoryError: Unable to allocate 6.20 MiB for an array with shape (1472, 1472, 3) and data type uint8

Any idea how to train on 360 videos smoothly without all these memory issues?
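The warning at the end of the log points at the cause: the CacheDataloader is trying to hold all 3589 frames in RAM at once. A minimal sketch of the workaround the warning suggests, assuming the CLI exposes the VanillaDataManagerConfig fields shown in the config dump (train_num_images_to_sample_from / train_num_times_to_repeat_images) under --pipeline.datamanager, with purely illustrative values:

ns-train nerfacto --data outputs/360 --pipeline.datamanager.train-num-images-to-sample-from 100 --pipeline.datamanager.train-num-times-to-repeat-images 500

This caps how many images are cached at a time instead of loading the full set up front.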

f-dy commented 1 year ago

Auto image downscale factor of 1 nerfstudio_dataparser.py:336

try using --downscale-factor 2 or even --downscale-factor 4
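For placement: downscale_factor is a field of NerfstudioDataParserConfig (it appears as downscale_factor=None in the config dump above), so with ns-train it should go on the dataparser subcommand rather than on the top-level options. A sketch, with the subcommand name and placement assumed from that config:

ns-train nerfacto --data outputs/360 nerfstudio-data --downscale-factor 4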

jrubiohervas commented 1 year ago

Thanks @f-dy. Where should I pass the downscale option, though? Passing it to ns-train or ns-process-data returns unrecognized arguments.

jrubiohervas commented 1 year ago

I actually solved the issue by resizing the input video before any processing.
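For anyone hitting the same wall: one way to do that resizing, assuming ffmpeg is available and using placeholder file names, is to halve the equirectangular resolution before running ns-process-data:

ffmpeg -i insta360_raw.mp4 -vf "scale=iw/2:ih/2" -c:a copy insta360_half.mp4

Running ns-process-data on the resized video means each cached frame takes roughly a quarter of the memory.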