Usually this happens because the path to the data is wrong, i.e. the source_path is not set correctly. I encourage you to check train.sh; I see data/data/truck in your error, which should be the cause of the problem.
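As a quick sanity check, something like the sketch below can confirm whether the directory actually exists; the corrected path "data/truck" is only a guess (the log shows source_path='data/data/truck', which looks like a duplicated "data/" prefix), so adjust it to your own layout.

```python
# Hypothetical sanity check for the dataset location; "data/truck" is a
# guess at the intended path -- the error shows 'data/data/truck'.
import os

source_path = "data/truck"
print("exists:", os.path.isdir(source_path))
if os.path.isdir(source_path):
    print("contents:", os.listdir(source_path))
```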
You are right. I have modified my dataset path, but now I am getting a different error: nvrtc: error: invalid value for --gpu-architecture (-arch). My CUDA version is 11.6, Python version is 3.8, and torch is 1.12.1. Has anyone encountered this before?
I guess it is caused by the CUDA driver. What GPU do you use for training?
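For reference, nvrtc rejects -arch values it does not know about; CUDA 11.6, for instance, predates the sm_89 architecture of the RTX 4090. A generic diagnostic sketch (not Octree-GS-specific) to see which CUDA toolkit your torch build targets and which compute capability your GPU reports:

```python
# Quick environment check: which CUDA toolkit torch was built with, and
# which compute capability the local GPU reports. A mismatch (e.g. an
# sm_89 GPU with a CUDA 11.6 build) can trigger
# "nvrtc: error: invalid value for --gpu-architecture (-arch)".
import torch

print("torch:", torch.__version__)
print("built against CUDA:", torch.version.cuda)
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print("GPU:", torch.cuda.get_device_name(0), f"(sm_{major}{minor})")
else:
    print("No CUDA device visible")
```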
You can try lowering your Python version; I'm using 3.7.13.
I also encountered the same problem (nvrtc: error: invalid value for --gpu-architecture (-arch)). How did you solve it?
Please let us know where the error occurs in the .py file. We are currently seeing some similar problems on the 4090.
There are two possible solutions:
You are right. I have modified my dataset path, but now I am getting an error: nvrtc: error: invalid value for --gpu-architecture (-arch). My CUDA version is 11.6, Python version is 3.8, torch is 1.12.1. Has anyone encountered this before?
Hi, have you solved this problem?
You can refer to the two solutions I gave above; we have identified the cause of the problem.
(octree-gs) I run the code:
bash single_train.sh
The output is:
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
/home//miniconda3/envs/octree-gs/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead. warnings.warn(
/home//miniconda3/envs/octree-gs/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights. warnings.warn(msg)
Loading model from: /home//miniconda3/envs/octree-gs/lib/python3.8/site-packages/lpips/weights/v0.1/vgg.pth
not found tf board
2024-04-05 13:12:23,960 - INFO: args: Namespace(add_color_dist=False, add_cov_dist=False, add_level=False, add_opacity_dist=False, appearance_dim=0, appearance_lr_delay_mult=0.01, appearance_lr_final=0.0005, appearance_lr_init=0.05, appearance_lr_max_steps=40000, base_layer=-1, checkpoint_iterations=[], coarse_factor=1.5, coarse_iter=5000, compute_cov3D_python=False, data_device='cuda', debug=False, debug_from=-1, densify_grad_threshold=0.0002, detect_anomaly=False, dist2level='round', dist_ratio=0.999, ds=1, eval=True, extend=1.1, extra_ratio=0.5, extra_up=0.01, feat_dim=32, feature_lr=0.0075, fork=2, gpu='-1', images='images', init_level=-1, ip='127.0.0.1', iterations=40000, lambda_dssim=0.2, levels=-1, min_opacity=0.005, mlp_color_lr_delay_mult=0.01, mlp_color_lr_final=5e-05, mlp_color_lr_init=0.008, mlp_color_lr_max_steps=40000, mlp_cov_lr_delay_mult=0.01, mlp_cov_lr_final=0.004, mlp_cov_lr_init=0.004, mlp_cov_lr_max_steps=40000, mlp_featurebank_lr_delay_mult=0.01, mlp_featurebank_lr_final=1e-05, mlp_featurebank_lr_init=0.01, mlp_featurebank_lr_max_steps=40000, mlp_opacity_lr_delay_mult=0.01, mlp_opacity_lr_final=2e-05, mlp_opacity_lr_init=0.002, mlp_opacity_lr_max_steps=40000, model_path='outputs/data/truck/baseline/2024-04-05_13:12:21', n_offsets=10, offset_lr_delay_mult=0.01, offset_lr_final=0.0001, offset_lr_init=0.01, offset_lr_max_steps=40000, opacity_lr=0.02, percent_dense=0.01, port=22315, position_lr_delay_mult=0.01, position_lr_final=0.0, position_lr_init=0.0, position_lr_max_steps=40000, progressive=True, quiet=False, random_background=False, ratio=1, resolution=-1, resolution_scales=[1.0], rotation_lr=0.002, save_iterations=[-1], scaling_lr=0.007, source_path='data/data/truck', start_checkpoint=None, start_stat=500, success_threshold=0.8, test_iterations=[-1], undistorted=False, update_anchor=True, update_from=1500, update_interval=100, update_ratio=0.2, update_until=20000, use_feat_bank=False, use_wandb=False, visible_threshold=0.9, warmup=False, white_background=False)
[10000, 20000, 30000, 40000] [10000, 20000, 30000, 40000]
Backup Finished!
2024-04-05 13:12:24,129 - INFO: Optimizing outputs/data/truck/baseline/2024-04-05_13:12:21
Output folder: outputs/data/truck/baseline/2024-04-05_13:12:21 [05/04 13:12:24]
Tensorboard not available: not logging progress [05/04 13:12:24]
Traceback (most recent call last):
  File "train.py", line 560, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), dataset, args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, wandb, logger)
  File "train.py", line 89, in training
    scene = Scene(dataset, gaussians, ply_path=ply_path, shuffle=False, logger=logger, resolution_scales=dataset.resolution_scales)
  File "/home/Octree-GS/scene/__init__.py", line 51, in __init__
    scene_info = sceneLoadTypeCallbacks["City"](args.source_path, args.random_background, args.white_background, args.eval, args.ds, undistorted=args.undistorted)
  File "/home/Octree-GS/scene/dataset_readers.py", line 381, in readCityInfo
    json_path = glob.glob(os.path.join(path, f"*.json"))[0].split('/')[-1]
IndexError: list index out of range
I also tried my custom data and the SfM data sets for Tanks&Temples and Deep Blending that are hosted by 3D-Gaussian-Splatting; the error is the same.