cvlab-kaist / GeCoNeRF


bug in match.py reprojector function #2

Open Marquess98 opened 1 year ago

Marquess98 commented 1 year ago

Hi, thanks for your work! I've encountered some problems when running your code. I think there are some mistakes in the reprojector function in match.py. Looking forward to your update! Thank you!

Cannot divide evenly the sizes of shapes (3, 16) and (14400, 16)

  File "/home/bran/GeCoNeRF-main/train.py", line 488, in app.run(main)
  File "/home/bran/GeCoNeRF-main/train.py", line 330, in main w_grad, outputs, keys = reg_utils.reg_train_step(render_trainfn, reg_model, dataset_type, 120, reg_type, keys[0], state, reg_params, reg_batch, alpha_batch, step)
  File "/home/bran/GeCoNeRF-main/internal/reg_utils.py", line 155, in reg_train_step (loss , outputs), grad = jax.value_and_grad(warp_loss_fn, has_aux = True)(jax.device_get(jax.tree_map(lambda x:x[0], state)).optimizer.target)
  File "/home/bran/GeCoNeRF-main/internal/reg_utils.py", line 93, in warp_loss_fn data_type = data_type
  File "/home/bran/GeCoNeRF-main/internal/models.py", line 377, in render_image rng, data_type, eval = eval)
  File "/home/bran/GeCoNeRF-main/internal/match.py", line 127, in reprojector pose = pose.reshape(rays.pose.shape),

The above exception was the direct cause of the following exception:

File "/home/bran/GeCoNeRF-main/internal/match.py", line 127, in reprojector pose = pose.reshape(rays.pose.shape), File "/home/bran/GeCoNeRF-main/internal/models.py", line 377, in render_image rng, data_type, eval = eval) File "/home/bran/GeCoNeRF-main/internal/reg_utils.py", line 93, in warp_loss_fn data_type = data_type File "/home/bran/GeCoNeRF-main/internal/reg_utils.py", line 155, in reg_train_step (loss , outputs), grad = jax.value_and_grad(warp_loss_fn, has_aux = True)(jax.device_get(jax.tree_map(lambda x:x[0], state)).optimizer.target) File "/home/bran/GeCoNeRF-main/train.py", line 330, in main w_grad, outputs, keys = reg_utils.reg_train_step(render_trainfn, reg_model, dataset_type, 120, reg_type, keys[0], state, reg_params, reg_batch, alpha_batch, step) File "/home/bran/GeCoNeRF-main/train.py", line 488, in app.run(main)

Marquess98 commented 1 year ago

I think setting eval = False can solve it.
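To make the suggestion concrete, here is a purely illustrative sketch (the wrapper name and arguments are stand-ins, not the actual GeCoNeRF code; only the eval keyword comes from the traceback): the reprojection call reached from warp_loss_fn should use the training-time layout.

```python
# Purely illustrative stand-in -- the real call site is the one shown at
# internal/models.py:377 in the traceback; only the `eval` keyword is real.
def render_for_warp_loss(reprojector_fn, rays, rng, data_type):
    # Forcing eval=False keeps the training-time per-ray pose layout, so the
    # pose.reshape(rays.pose.shape) inside the reprojector no longer sees a
    # (3, 16) pose against a (14400, 16) per-ray buffer.
    return reprojector_fn(rays, rng, data_type, eval=False)  # instead of eval=eval
```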

Marquess98 commented 1 year ago

Sorry to bother you again. There is another problem; the error report is as follows. Looking forward to your reply, thank you!

I1013 17:02:48.312762 140074248000064 checkpoints.py:249] Found no checkpoint files in /home/bran/GeCoNeRF-main/geco_train/orchids
200/85000: i_loss=0.2004, rgb_loss=0.0000, reg_loss=0.0000, dec_reg_loss=0.0000, avg_loss=0.2282, weight_l2=0.00e+00, lr=6.63e-05, 1700957 rays/sec
400/85000: i_loss=0.1800, rgb_loss=0.0000, reg_loss=0.0000, dec_reg_loss=0.0000, avg_loss=0.1830, weight_l2=0.00e+00, lr=1.25e-04, 1999249 rays/sec
600/85000: i_loss=0.1679, rgb_loss=0.0000, reg_loss=0.0000, dec_reg_loss=0.0000, avg_loss=0.1675, weight_l2=0.00e+00, lr=1.81e-04, 1518354 rays/sec
800/85000: i_loss=0.1484, rgb_loss=0.0000, reg_loss=0.0000, dec_reg_loss=0.0000, avg_loss=0.1586, weight_l2=0.00e+00, lr=2.33e-04, 1535362 rays/sec

Traceback (most recent call last):
  File "train.py", line 488, in app.run(main)
  File "/home/bran/anaconda3/envs/regnerf/lib/python3.6/site-packages/absl/app.py", line 312, in run _run_main(main, args)
  File "/home/bran/anaconda3/envs/regnerf/lib/python3.6/site-packages/absl/app.py", line 258, in _run_main sys.exit(main(argv))
  File "train.py", line 314, in main grad, stats, keys = train_pstep(keys, state, batch, alpha)
  File "/home/bran/anaconda3/envs/regnerf/lib/python3.6/site-packages/jax/_src/traceback_util.py", line 183, in reraise_with_filtered_traceback return fun(*args, **kwargs)
  File "/home/bran/anaconda3/envs/regnerf/lib/python3.6/site-packages/jax/_src/api.py", line 1622, in f_pmapped local_axis_size = _mapped_axis_size(in_tree, args, in_axes_flat, "pmap", kws=True)
  File "/home/bran/anaconda3/envs/regnerf/lib/python3.6/site-packages/jax/_src/api.py", line 1348, in _mapped_axis_size raise ValueError(msg.format(f"the tree of axis sizes is:\n{sizes}")) from None
jax._src.traceback_util.UnfilteredStackTrace: ValueError: pmap got inconsistent sizes for array axes to be mapped: the tree of axis sizes is:
(2,
 TrainState(optimizer=Optimizer(optimizer_def=<flax.optim.adam.Adam object at 0x7f639c37f1d0>,
   state=OptimizerState(step=1, param_states=FrozenDict({ params: { MLP_0: { Dense_0: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_1: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_10: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_11: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_2: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_3: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_4: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_5: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_6: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_7: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_8: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_9: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, }, }, })),
   target=FrozenDict({ params: { MLP_0: { Dense_0: { bias: 1, kernel: 1, }, Dense_1: { bias: 1, kernel: 1, }, Dense_10: { bias: 1, kernel: 1, }, Dense_11: { bias: 1, kernel: 1, }, Dense_2: { bias: 1, kernel: 1, }, Dense_3: { bias: 1, kernel: 1, }, Dense_4: { bias: 1, kernel: 1, }, Dense_5: { bias: 1, kernel: 1, }, Dense_6: { bias: 1, kernel: 1, }, Dense_7: { bias: 1, kernel: 1, }, Dense_8: { bias: 1, kernel: 1, }, Dense_9: { bias: 1, kernel: 1, }, }, }, }))),
 {'pixels': 1, 'rays': Rays(origins=1, directions=1, viewdirs=1, pose=1, radii=1, lossmult=1, near=1, far=1)},
 None)

The stack trace below excludes JAX-internal frames. The preceding is the original exception that occurred, unmodified.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "train.py", line 488, in app.run(main)
  File "/home/bran/anaconda3/envs/regnerf/lib/python3.6/site-packages/absl/app.py", line 312, in run _run_main(main, args)
  File "/home/bran/anaconda3/envs/regnerf/lib/python3.6/site-packages/absl/app.py", line 258, in _run_main sys.exit(main(argv))
  File "train.py", line 314, in main grad, stats, keys = train_pstep(keys, state, batch, alpha)
  File "/home/bran/anaconda3/envs/regnerf/lib/python3.6/site-packages/jax/_src/api.py", line 1348, in _mapped_axis_size raise ValueError(msg.format(f"the tree of axis sizes is:\n{sizes}")) from None
ValueError: pmap got inconsistent sizes for array axes to be mapped: the tree of axis sizes is:
(2,
 TrainState(optimizer=Optimizer(optimizer_def=<flax.optim.adam.Adam object at 0x7f639c37f1d0>,
   state=OptimizerState(step=1, param_states=FrozenDict({ params: { MLP_0: { Dense_0: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_1: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_10: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_11: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_2: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_3: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_4: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_5: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_6: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_7: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_8: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, Dense_9: { bias: _AdamParamState(grad_ema=1, grad_sq_ema=1), kernel: _AdamParamState(grad_ema=1, grad_sq_ema=1), }, }, }, })),
   target=FrozenDict({ params: { MLP_0: { Dense_0: { bias: 1, kernel: 1, }, Dense_1: { bias: 1, kernel: 1, }, Dense_10: { bias: 1, kernel: 1, }, Dense_11: { bias: 1, kernel: 1, }, Dense_2: { bias: 1, kernel: 1, }, Dense_3: { bias: 1, kernel: 1, }, Dense_4: { bias: 1, kernel: 1, }, Dense_5: { bias: 1, kernel: 1, }, Dense_6: { bias: 1, kernel: 1, }, Dense_7: { bias: 1, kernel: 1, }, Dense_8: { bias: 1, kernel: 1, }, Dense_9: { bias: 1, kernel: 1, }, }, }, }))),
 {'pixels': 1, 'rays': Rays(origins=1, directions=1, viewdirs=1, pose=1, radii=1, lossmult=1, near=1, far=1)},
 None)
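Reading the tree in the error: jax.pmap maps over the leading axis of every argument passed to train_pstep, and those leading axes must all equal jax.local_device_count(). Here the RNG keys have a leading axis of 2 while the TrainState and the batch leaves all have 1, so the keys and the state/batch were prepared for different device counts. A generic sketch of the usual way to keep them consistent (standard JAX/Flax calls only, not GeCoNeRF-specific code):

```python
# Generic sketch, not GeCoNeRF code: every argument mapped by jax.pmap must
# share the same leading device axis, i.e. jax.local_device_count().
import jax
import jax.numpy as jnp
from flax import jax_utils  # jax_utils.replicate adds the device axis to a pytree

n_dev = jax.local_device_count()

# One RNG key per local device (leading axis == n_dev).
keys = jax.random.split(jax.random.PRNGKey(0), n_dev)

# The train state would be replicated the same way, e.g.
#   state = jax_utils.replicate(state)
# which also gives every leaf a leading axis of n_dev.

# Shard the batch so its leading axis is n_dev as well.
def shard(x):
    return x.reshape((n_dev, -1) + x.shape[1:])

batch = {"pixels": jnp.zeros((n_dev * 1024, 3))}
batch = jax.tree_util.tree_map(shard, batch)

print(keys.shape[0], batch["pixels"].shape[0])  # both equal n_dev
```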

xuyx55 commented 9 months ago

@Marquess98 Hi! I have the same problem when I run the code. Have you fixed it?