szymanowiczs / splatter-image

Official implementation of "Splatter Image: Ultra-Fast Single-View 3D Reconstruction", CVPR 2024
https://szymanowiczs.github.io/splatter-image
BSD 3-Clause "New" or "Revised" License
795 stars 54 forks

Key 'anneal_opacity' is not in struct, how to solve this? #6

Closed linmi1 closed 7 months ago

linmi1 commented 8 months ago

```
Error executing job with overrides: ['+dataset=[cars]']
Traceback (most recent call last):
  File "train_network.py", line 57, in main
    gaussian_predictor = GaussianSplatPredictor(cfg)
  File "/home/stu/linmi/splatter-image-main/scene/gaussian_predictor.py", line 513, in __init__
    split_dimensions, scale_inits, bias_inits = self.get_splits_and_inits(True, cfg)
  File "/home/stu/linmi/splatter-image-main/scene/gaussian_predictor.py", line 618, in get_splits_and_inits
    if cfg.model.anneal_opacity:
omegaconf.errors.ConfigAttributeError: Key 'anneal_opacity' is not in struct
    full_key: model.anneal_opacity
    object_type=dict
```

linmi1 commented 7 months ago

I checked the config; there is no variable named anneal_opacity in cfg.model.
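
For reference, this is how a Hydra/OmegaConf struct-mode config behaves when the code expects a key that the YAML does not define, along with one local workaround until config and code agree again. This is a minimal sketch using omegaconf only; the default value `False` for `anneal_opacity` is an assumption, not something taken from the repo:

```python
from omegaconf import OmegaConf, open_dict
from omegaconf.errors import ConfigAttributeError

# Hydra hands the app a struct-mode config: accessing unknown keys raises instead of returning None.
cfg = OmegaConf.create({"model": {}})
OmegaConf.set_struct(cfg, True)

try:
    _ = cfg.model.anneal_opacity        # missing key -> ConfigAttributeError, as in the traceback above
except ConfigAttributeError as err:
    print(err)

# Local workaround: add the key programmatically (or add `anneal_opacity: false` under
# `model:` in configs/default_config.yaml). The value False is only a guess.
with open_dict(cfg):
    cfg.model.anneal_opacity = False
print(cfg.model.anneal_opacity)
```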

Xinrui-Z commented 7 months ago

> I checked the config; there is no variable named anneal_opacity in cfg.model.

Hi, have you solved this problem? I tried to modify the default_config.yaml file, but a new error occurred:

```
Error executing job with overrides: ['+dataset=[cars]']
Traceback (most recent call last):
  File "D:\Code\splatter-image\train_network.py", line 296, in main
    vis_data["origin_distances"][:, :cfg.data.input_images, ...]],
KeyError: 'origin_distances'
```

linmi1 commented 7 months ago

> I checked the config; there is no variable named anneal_opacity in cfg.model.

> Hi, have you solved this problem? I tried to modify the default_config.yaml file, but a new error occurred: `KeyError: 'origin_distances'` at train_network.py, line 296.

I haven't solved it yet. Could I see your default_config.yaml file?
As for your question: trace where origin_distances should come from. vis_data is built with vis_data = next(iter(test_dataloader)) and test_dataset = get_dataset(cfg, "test"), so the problem is probably caused by the test dataset (see the sketch below). Good luck!
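
A quick way to check this (a debugging sketch, not code from the repo; it assumes get_dataset and cfg are in scope exactly as in train_network.py, so run it from the repo with the same Hydra config):

```python
from torch.utils.data import DataLoader

# Build the test split the same way train_network.py does and inspect one batch.
test_dataset = get_dataset(cfg, "test")
test_dataloader = DataLoader(test_dataset, batch_size=1, num_workers=0)
vis_data = next(iter(test_dataloader))

print(sorted(vis_data.keys()))
if "origin_distances" not in vis_data:
    print("The test split does not yield 'origin_distances'; check the dataset class "
          "and the cfg.data options that control whether this field is produced.")
```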

szymanowiczs commented 7 months ago

Hi, indeed there was a mistake - I now removed the deprecated config options. Let me know if you encounter any further problems.

linmi1 commented 7 months ago

OK, I ran train_network.py successfully. It shows `Beginning training [03/01 19:20:36]` but then nothing else happens; I wonder whether that means the code is running correctly.

A couple of tips for getting it to run:
1. Update configs/default_config.yaml by adding the parameters from the files in configs/.
2. Use a dataset path like ./dataset/cars/srn_car/cars_test, otherwise there will be an index/dimension error (see the sketch below).
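
If it helps, here is a tiny sanity check for point 2 (a hypothetical helper, not part of the repo): it only verifies that the configured root contains per-object folders, since pointing one level too high or too low is a common cause of index/dimension errors.

```python
import os

dataset_root = "./dataset/cars/srn_car/cars_test"  # path format suggested above
assert os.path.isdir(dataset_root), f"{dataset_root} does not exist"

instances = sorted(os.listdir(dataset_root))
print(f"{len(instances)} object folders found, e.g. {instances[:3]}")
```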

linmi1 commented 7 months ago

After about half an hour the run crashed with the error below. What's the problem?

```
Loading model from: /home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/lpips/weights/v0.1/vgg.pth [03/01 20:36:42]
 57%|███████████████████████▌ | 404/704 [13:32<10:03, 2.01s/it]
Error executing job with overrides: ['+dataset=[cars]']
Traceback (most recent call last):
  File "train_network.py", line 372, in <module>
    main()
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/hydra/main.py", line 99, in decorated_main
    config_name=config_name,
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/hydra/_internal/utils.py", line 401, in _run_hydra
    overrides=overrides,
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/hydra/_internal/utils.py", line 458, in _run_app
    lambda: hydra.run(
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/hydra/_internal/utils.py", line 223, in run_and_report
    raise ex
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/hydra/_internal/utils.py", line 220, in run_and_report
    return func()
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/hydra/_internal/utils.py", line 461, in <lambda>
    overrides=overrides,
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "train_network.py", line 332, in main
    model_cfg=cfg)
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/stu/linmi/splatter-image/eval.py", line 66, in evaluate_dataset
    data = {k: v.to(device) for k, v in data.items()}
  File "/home/stu/linmi/splatter-image/eval.py", line 66, in <dictcomp>
    data = {k: v.to(device) for k, v in data.items()}
  File "/home/stu/anaconda3/envs/g_s/lib/python3.7/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 1618648) is killed by signal: Killed.
wandb:
wandb: Run history:
wandb:   training_l12_loss █▇▇▅▄▆▄▄▄▄▄▅▅▅▆▅▄▅▆▅▄▄▂▃▃▃▄▅▅▃▄▄▃▄▃▃▃▄▃▁
wandb:       training_loss █▇▇▅▄▆▄▄▄▄▄▅▅▅▆▅▄▅▆▅▄▄▂▃▃▃▄▅▅▃▄▄▃▄▃▃▃▄▃▁
wandb: training_lpips_loss ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb:
wandb: Run summary:
wandb:   training_l12_loss -1.98298
wandb:       training_loss -1.98298
wandb: training_lpips_loss -8.0
wandb:
wandb: 🚀 View run twilight-thunder-18 at: https://wandb.ai/beautiformer/gs_pred/runs/0l4mm66d
wandb: Synced 5 W&B file(s), 4 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20240103_192019-0l4mm66d/logs
```

mengxuyiGit commented 7 months ago

@linmi1 I have encountered similar things during training: the process gets killed at some point, and the lpips loss stays at -8 without changing during training:

[screenshot: W&B training curves, with training_lpips_loss flat at -8]

szymanowiczs commented 7 months ago

Seems like there is an issue with loading data - I suggest checking the data in the folder that causes this behaviour.
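
A `DataLoader worker ... killed by signal: Killed` usually means the operating system terminated the worker process (for example the Linux OOM killer or insufficient shared memory), so besides checking the data it can help to re-run the loading loop without worker processes. Below is a sketch, assuming get_dataset and cfg are in scope as in train_network.py / eval.py; with num_workers=0 a bad sample fails with an ordinary traceback instead of a killed worker:

```python
from torch.utils.data import DataLoader

# Load the test split one sample at a time in the main process.
loader = DataLoader(get_dataset(cfg, "test"), batch_size=1, num_workers=0)

idx = 0
it = iter(loader)
while True:
    try:
        batch = next(it)            # any loading failure now raises here directly
    except StopIteration:
        break
    except Exception as err:
        print(f"failed while loading sample {idx}: {err!r}")
        break
    idx += 1
print(f"iterated {idx} samples without a hard failure")
```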