I'm trying to optimize NeRF with dense depth priors using the ScanNet-trained depth completion network linked in the corresponding README section, and I get the following error:
```
Traceback (most recent call last):
  File "run_nerf.py", line 1104, in <module>
    run_nerf()
  File "run_nerf.py", line 1070, in run_nerf
    train_nerf(images, depths, valid_depths, poses, intrinsics, i_split, args, scene_sample_params, lpips_alex, gt_depths, gt_valid_depths)
  File "run_nerf.py", line 789, in train_nerf
    scene_sample_params, args)
  File "run_nerf.py", line 728, in complete_and_check_depth
    invalidate_large_std_threshold=args.invalidate_large_std_threshold)
  File "run_nerf.py", line 678, in complete_depth
    ckpt = torch.load(model_path)
  File "/miniconda/envs/env/lib/python3.7/site-packages/torch/serialization.py", line 713, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/miniconda/envs/env/lib/python3.7/site-packages/torch/serialization.py", line 920, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.
```
I've tried the following PyTorch versions: 1.11.0, 1.10.0, 1.9.1, and 1.9.0.
Could you please confirm that this is the correct link for the depth completion network weights?
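In case it helps diagnose: this exact `UnpicklingError` can occur when `torch.load` is handed a file that is not actually a checkpoint, for example an HTML error page saved in place of the weights by a file-hosting service. Here is a small sketch (not part of the repository; the classification logic and path are my own) that inspects the first bytes of the downloaded file to tell the cases apart:

```python
def inspect_checkpoint_header(path):
    """Guess the file type of a supposed checkpoint from its leading bytes."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head.startswith(b"PK"):
        # ZIP magic bytes: new-style (zipfile-serialized) torch checkpoint.
        return "zip archive (new-style torch checkpoint)"
    if head.startswith(b"<"):
        # Markup, e.g. an HTML download/error page saved instead of the weights.
        return "HTML/XML (likely a download error page, not weights)"
    if head.startswith(b"\x80"):
        # Pickle protocol opcode: legacy torch checkpoint format.
        return "pickle stream (legacy torch checkpoint)"
    return "unknown"
```

If this reports HTML or "unknown" for the downloaded weights, the problem is the download itself rather than the PyTorch version.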
Thank you in advance for your help.