Closed longtermgoals closed 6 months ago
Hi, you may try the following parameters:
--deform_type node --node_num 4096 --deform_lr_scale 0.1 --eval --resolution 1 --warm_up 3000 --iterations_node_sampling 10000 --iterations_node_rendering 12000 --oneupSHdegree_step 1000 --progressive_train_node --progressive_stage_ratio 0.1 --progressive_stage_steps 1000 --node_warm_up 1000 --node_enable_densify_prune --no_arap_loss --white_background --gt_alpha_mask_as_dynamic_mask --gs_with_motion_mask --pred_color --dynamic_color_warm_up 20000 --node_densification_interval 3000
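For reference, these flags would be combined with the entry point used elsewhere in this thread (`train_gui.py`). A sketch of the full invocation, where the dataset and output paths are placeholders and not from this thread:

```shell
# Sketch only: --source_path and --model_path are placeholders;
# the remaining flags are exactly the ones suggested above.
CUDA_VISIBLE_DEVICES=0 python train_gui.py \
    --source_path YOUR/PATH/TO/DATASET \
    --model_path outputs/YOUR_SCENE \
    --deform_type node --node_num 4096 --deform_lr_scale 0.1 --eval --resolution 1 \
    --warm_up 3000 --iterations_node_sampling 10000 --iterations_node_rendering 12000 \
    --oneupSHdegree_step 1000 --progressive_train_node --progressive_stage_ratio 0.1 \
    --progressive_stage_steps 1000 --node_warm_up 1000 --node_enable_densify_prune \
    --no_arap_loss --white_background --gt_alpha_mask_as_dynamic_mask \
    --gs_with_motion_mask --pred_color --dynamic_color_warm_up 20000 \
    --node_densification_interval 3000
```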
Hope this helps solve your problem. :)
Hello,
Based on your description, there may be some issues with the current code: it may be over-tuned on the DNeRF datasets and could degrade on the NeRF-DS datasets. If you're in a rush, you can try our older version, available at this link (extraction code: ik8r). The command to run the program is in the train_gui.sh file. Please note that the code may be messy; please bear with it, and I'm sorry for that.
I apologize for not being able to thoroughly compare the two versions at the moment, as I am occupied with another project and won't have sufficient time to examine the code carefully in the near future. However, I hope the older version helps resolve your issue.
Got it!
I was running experiments with the new settings, but they don't seem to be working: PSNR gets stuck around 16.3 by 6000 iterations. I was worried I had done something wrong; now it's clear.
Thanks again for the great work and don't worry about it.
Hi, I just double-checked the code this morning and found that the following command, with the old-version code, works normally on the NeRF-DS datasets:
--deform_type node --node_num 4096 --deform_lr_scale 1 --eval --resolution 1 --gt_alpha_mask_as_dynamic_mask --node_warm_up 500 --iterations_node_sampling 7500 --iterations_node_rendering 10000 --oneupSHdegree_step 5000 --warm_up 3000 --hyper_dim 8 --local_frame --pred_color --pred_opacity --dynamic_color_warm_up 20000 --deform_downsamp_strategy direct
The PSNR reaches 25.2 at around 25,000 iterations, which is higher than the number reported in the paper's table for NeRF-DS (Bell).
Hope this information helps.
Cool, thanks! This makes much more sense now!
Hello, may I ask how you set up the environment for your old-version code? Whether I use the environment for the new-version code or the environment from the Deformable-3D-Gaussians paper, running train.py keeps producing many environment errors.
Hi,
You can try pip install -r requirements.txt.
By the way, train.py is not the script to run. Please follow the readme file and use train_gui.py instead. : )
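A minimal setup sketch along those lines, assuming a conda environment; the environment name and Python version here are assumptions, not taken from this thread:

```shell
# Sketch only: environment name and Python version are assumptions.
conda create -n deform-gs python=3.9 -y
conda activate deform-gs
# Install the dependencies pinned by the repo:
pip install -r requirements.txt
# Then, per the readme, use train_gui.py rather than train.py:
python train_gui.py --help
```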
First of all, thanks for the brilliant work and for sharing it with all of us. I have a question regarding reproducing the NeRF-DS dataset results. Since no training configurations are specified for this dataset, I used a copy of the training script from the DNeRF dataset, which was:
CUDA_VISIBLE_DEVICES=0 python train_gui.py --source_path YOUR/PATH/TO/DATASET --model_path outputs/cup_novel_view --deform_type node --node_num 512 --hyper_dim 8 --eval --gt_alpha_mask_as_scene_mask --local_frame --resolution 1 --W 800 --H 800
The differences were that I removed the --is_blender argument and changed --resolution to 1 to avoid extreme scaling. After 2 hours of training (on scenes such as cup, basin, and as), the best PSNR stays around 16-17; this result is reached within the first few thousand iterations and never improves afterwards.
I also tried setting --node_num to 1024, since it was mentioned that the number of control points should be larger for real-world data, but the results remain the same.
So I wonder whether I used the wrong script to reproduce the results. Could you please share the parameter settings for the NeRF-DS dataset that reproduce the results shown in the paper?
Thank you so much in advance!