I am having trouble running the evaluation step; it reports a strange error: ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (407634,) + inhomogeneous part.
Command-line input:
CUDA_VISIBLE_DEVICES=0 python3 -m src.main +experiment=co3d_hydrant mode=test dataset/view_sampler=evaluation dataset.view_sampler.index_path=assets/evaluation_index/co3d_hydrant_extra.json checkpointing.load=checkpoints/co3d_hydrant.ckpt
Command-line output:
```
Saving outputs to /workspace/latentsplat/outputs/2024-08-27/12-42-46.949283.
rm: cannot remove '/workspace/latentsplat/outputs/latest-run': Is a directory
rm: cannot remove 'outputs/local': No such file or directory
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Using cache found in /root/.cache/torch/hub/facebookresearch_dino_main
> /workspace/latentsplat/src/main.py(125)train()
-> kwargs = dict(
(Pdb) c
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
[2024-08-27 12:42:55,016][py.warnings][WARNING] - /opt/conda/envs/latentsplat/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
[2024-08-27 12:42:55,016][py.warnings][WARNING] - /opt/conda/envs/latentsplat/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
Loading model from: /opt/conda/envs/latentsplat/lib/python3.10/site-packages/lpips/weights/v0.1/vgg.pth
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: /opt/conda/envs/latentsplat/lib/python3.10/site-packages/lpips/weights/v0.1/vgg.pth
You are using a CUDA device ('NVIDIA GeForce RTX 4090') that has Tensor Cores. To properly utilize them, you should set torch.set_float32_matmul_precision('medium' | 'high') which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
Restoring states from the checkpoint path at checkpoints/co3d_hydrant.ckpt
[2024-08-27 12:43:00,043][py.warnings][WARNING] - /opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/utilities/migration/utils.py:56: The loaded checkpoint was produced with Lightning v2.2.0.post0, which is newer than your current Lightning version: v2.2.0
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Loaded model weights from the checkpoint at checkpoints/co3d_hydrant.ckpt
Loading CO3D category hydrant [1/1].
loading from this datasets/hydrant/frame_annotations.jgz
Error executing job with overrides: ['+experiment=co3d_hydrant', 'mode=test', 'dataset/view_sampler=evaluation', 'dataset.view_sampler.index_path=assets/evaluation_index/co3d_hydrant_extra.json', 'checkpointing.load=checkpoints/co3d_hydrant.ckpt']
Traceback (most recent call last):
File "/workspace/latentsplat/src/main.py", line 159, in train
trainer.test(model_wrapper, datamodule=data_module, ckpt_path=checkpoint_path)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 753, in test
return call._call_and_handle_interrupt(
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 793, in _test_impl
results = self._run(model, ckpt_path=ckpt_path)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _run
results = self._run_stage()
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1025, in _run_stage
return self._evaluation_loop.run()
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/loops/utilities.py", line 182, in _decorator
return loop_run(self, *args, **kwargs)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 110, in run
self.setup_data()
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 166, in setup_data
dataloaders = _request_dataloader(source)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 342, in _request_dataloader
return data_source.dataloader()
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 309, in dataloader
return call._call_lightning_datamodule_hook(self.instance.trainer, self.name)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 179, in _call_lightning_datamodule_hook
return fn(*args, **kwargs)
File "/workspace/latentsplat/src/dataset/data_module.py", line 113, in test_dataloader
dataset = get_dataset(self.dataset_cfg, "test", self.step_tracker)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/jaxtyping/_decorator.py", line 409, in wrapped_fn
out = fn(*args, **kwargs)
File "/workspace/latentsplat/src/dataset/__init__.py", line 31, in get_dataset
return DATASETS[cfg.name](cfg, stage, view_sampler, force_shuffle)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/jaxtyping/_decorator.py", line 409, in wrapped_fn
out = fn(*args, **kwargs)
File "/workspace/latentsplat/src/dataset/dataset_co3d.py", line 66, in __init__
self.dataset = self.get_dataset()
File "/workspace/latentsplat/src/dataset/dataset_co3d.py", line 128, in get_dataset
category_frame_annotations = data_types.load_dataclass_jgzip(
File "/root/.cache/latentsplat/co3dv2/co3d/dataset/data_types.py", line 344, in load_dataclass_jgzip
return load_dataclass(cast(IO, f), cls, binary=True)
File "/root/.cache/latentsplat/co3dv2/co3d/dataset/data_types.py", line 160, in load_dataclass
res = list(_dataclass_list_from_dict_list(asdict, cls))
File "/root/.cache/latentsplat/co3dv2/co3d/dataset/data_types.py", line 260, in _dataclass_list_from_dict_list
transposed = zip(*key_lists)
File "/root/.cache/latentsplat/co3dv2/co3d/dataset/data_types.py", line 257, in <genexpr>
_dataclass_list_from_dict_list([obj.get(k, default) for obj in dlist], type)
File "/root/.cache/latentsplat/co3dv2/co3d/dataset/data_types.py", line 242, in _dataclass_list_from_dict_list
vals = np.split(list(all_vals_res), indices[:-1])
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/numpy/lib/shape_base.py", line 866, in split
return array_split(ary, indices_or_sections, axis)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/numpy/lib/shape_base.py", line 778, in array_split
sary = _nx.swapaxes(ary, axis, 0)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/numpy/core/fromnumeric.py", line 581, in swapaxes
return _wrapfunc(a, 'swapaxes', axis1, axis2)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/numpy/core/fromnumeric.py", line 56, in _wrapfunc
return _wrapit(obj, method, *args, **kwds)
File "/opt/conda/envs/latentsplat/lib/python3.10/site-packages/numpy/core/fromnumeric.py", line 45, in _wrapit
result = getattr(asarray(obj), method)(*args, **kwds)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (407634,) + inhomogeneous part.
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
I am quite confused. Searching for this error online suggests it may be related to the numpy package version, but numpy is not explicitly listed in the latentsplat environment. Also, the same evaluation command runs fine on the re10k dataset; it only fails on the co3d dataset. I would appreciate any ideas you might have.
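From what I have read, numpy 1.24 turned the old ragged-array deprecation warning into a hard error, so coercing nested lists of unequal length into an array (which np.split does internally) now fails with exactly this message. Below is a minimal sketch of that behaviour; the ragged list is purely illustrative and not the actual CO3D data:

```python
# Minimal sketch of the suspected failure mode (assumes numpy >= 1.24).
# Older numpy versions emitted a VisibleDeprecationWarning and built an
# object array from ragged input; newer ones raise ValueError instead.
import numpy as np

print("numpy version:", np.__version__)

ragged = [[1, 2, 3], [4, 5]]  # rows of unequal length, illustrative only

try:
    # np.split() calls asarray() on its input, which is the same code path
    # that fails in co3d/dataset/data_types.py when annotation fields are ragged.
    np.split(ragged, [1])
except ValueError as err:
    print("reproduced:", err)
```

If this snippet reproduces the error inside the latentsplat environment, that would point to the installed numpy version rather than the dataset itself, and pinning numpy below 1.24 (e.g. pip install "numpy<1.24") might be a workaround, though I have not verified this.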