kangpeilun / VastGaussian

This is an unofficial implementation.
Apache License 2.0

ValueError: need at least one array to concatenate #30

Open gwen233666 opened 3 months ago

gwen233666 commented 3 months ago

Hello, thank you for your excellent work. I encountered the following problem while running the code; could you please give me some advice?

C:\ProgramData\anaconda3\envs\3dgsvast\python.exe D:\ZWD\3DGS\VastGaussian-refactor\train_images.py
Output folder: ./output\exp1_16 [11/08 11:55:22]
Tensorboard not available: not logging progress [11/08 11:55:22]
Reading camera 406/406 [11/08 11:55:23]
Partition 1_1 ori_camera_bbox [-3.5779102, -1.1316292, -5.5089555, -3.4412055] extend_camera_bbox [-4.067166376113891, -0.6423730373382568, -5.922505474090576, -3.027655506134033] [11/08 11:55:32]
Partition 1_2 ori_camera_bbox [-4.037946, -1.1316292, -3.4412055, -0.27672225] extend_camera_bbox [-4.619209623336792, -0.5503658294677735, -4.074102163314819, 0.35617440938949585] [11/08 11:55:32]
Partition 1_3 ori_camera_bbox [-4.242634, -1.1316292, -0.27672225, 3.034645] extend_camera_bbox [-4.86483473777771, -0.509428310394287, -0.9389957070350647, 3.6969185352325438] [11/08 11:55:33]
Partition 2_1 ori_camera_bbox [-1.1316292, 0.82256436, -5.2049623, -2.3322802] extend_camera_bbox [-1.5224679470062257, 1.213403081893921, -5.779498672485351, -1.757743740081787] [11/08 11:55:33]
Partition 2_2 ori_camera_bbox [-1.1316292, 0.82256436, -2.3322802, -0.42812225] extend_camera_bbox [-1.5224679470062257, 1.213403081893921, -2.713111734390259, -0.04729067683219906] [11/08 11:55:33]
Partition 2_3 ori_camera_bbox [-1.1316292, 0.82256436, -0.42812225, 3.082469] extend_camera_bbox [-1.5224679470062257, 1.213403081893921, -1.1302405059337617, 3.7845872402191163] [11/08 11:55:33]
Partition 3_1 ori_camera_bbox [0.82256436, 5.353721, -5.3446984, -3.2705412] extend_camera_bbox [-0.0836669445037842, 6.259952449798584, -5.759529876708984, -2.855709743499756] [11/08 11:55:33]
Partition 3_2 ori_camera_bbox [0.82256436, 5.335647, -3.2705412, -0.08891543] extend_camera_bbox [-0.08005213737487793, 6.2382636070251465, -3.906866359710693, 0.5474097386002541] [11/08 11:55:33]
Partition 3_3 ori_camera_bbox [0.82256436, 4.920017, -0.08891543, 3.3565602] extend_camera_bbox [0.0030739307403564453, 5.73950719833374, -0.7780105456709863, 4.045655345916748] [11/08 11:55:34]
Total ori point number: 364978 [11/08 11:55:34]
Total before extend point number: 296381 [11/08 11:55:34]
Total extend point number: 576671 [11/08 11:55:34]
Now processing partition i:1_1 and j:1_2 ... Now processing partition i:3_3 and j:3_2 (all 72 ordered pairs of the 9 partitions) [11/08 11:55:37 to 11:55:51]
Found 1 CUDA devices [11/08 11:55:53]
train partition 1_1 on gpu 0 [11/08 11:55:53]
Output folder: ./output\exp1_16
Tensorboard not available: not logging progress
Process Partition_1_1:
Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\envs\3dgsvast\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\ProgramData\anaconda3\envs\3dgsvast\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "D:\ZWD\3DGS\VastGaussian-refactor\train_vast.py", line 177, in parallel_local_training
    training(lp_args, op_args, pp_args, test_iterations, save_iterations, checkpoint_iterations, start_checkpoint, debug_from, logger=logger)
  File "D:\ZWD\3DGS\VastGaussian-refactor\train_vast.py", line 50, in training
    scene = PartitionScene(dataset, gaussians)
  File "D:\ZWD\3DGS\VastGaussian-refactor\scene\__init__.py", line 119, in __init__
    scene_info = sceneLoadTypeCallbacks["ColmapVast"](args.source_path, args.partition_model_path,
  File "D:\ZWD\3DGS\VastGaussian-refactor\scene\dataset_readers.py", line 354, in readColmapSceneInfoVast
    nerf_normalization = getNerfppNorm(train_cam_infos)  # find the geometric center of the cameras in world coordinates
  File "D:\ZWD\3DGS\VastGaussian-refactor\scene\dataset_readers.py", line 63, in getNerfppNorm
    center, diagonal = get_center_and_diag(cam_centers)
  File "D:\ZWD\3DGS\VastGaussian-refactor\scene\dataset_readers.py", line 49, in get_center_and_diag
    cam_centers = np.hstack(cam_centers)
  File "<__array_function__ internals>", line 200, in hstack
  File "C:\ProgramData\anaconda3\envs\3dgsvast\lib\site-packages\numpy\core\shape_base.py", line 370, in hstack
    return _nx.concatenate(arrs, 1, dtype=dtype, casting=casting)
  File "<__array_function__ internals>", line 200, in concatenate
ValueError: need at least one array to concatenate

train partition 1_2 on gpu 0 [11/08 11:55:56] through train partition 3_3 on gpu 0 [11/08 11:56:21]: every remaining partition prints the same "Output folder" and "Tensorboard not available" lines and then fails in its Process Partition_X_Y with the identical traceback, ending in ValueError: need at least one array to concatenate.

Training complete. [11/08 11:56:24] Merging Partitions... [11/08 11:56:24] All Done! [11/08 11:56:24]

Process finished with exit code 0
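For context, the crash at scene\dataset_readers.py line 49 happens whenever a partition ends up with zero training cameras: cam_centers is then an empty list, and np.hstack([]) raises exactly "need at least one array to concatenate". A minimal sketch of the helper (the empty-list guard is my addition, not the repository's code; the rest follows the standard getNerfppNorm geometry from 3DGS):

```python
import numpy as np

def get_center_and_diag(cam_centers):
    # cam_centers: list of 3x1 camera-center arrays in world coordinates.
    # An empty list means the partition matched no cameras; without this
    # check, np.hstack([]) raises "need at least one array to concatenate".
    if len(cam_centers) == 0:
        raise RuntimeError("partition contains no cameras; check image-name matching")
    cam_centers = np.hstack(cam_centers)                  # 3 x N
    center = np.mean(cam_centers, axis=1, keepdims=True)  # geometric center of all cameras
    dist = np.linalg.norm(cam_centers - center, axis=0)   # distance of each camera from center
    diagonal = np.max(dist)                               # scene radius estimate
    return center.flatten(), diagonal
```

With such a guard, an empty partition fails with a message that points at the camera-matching step instead of deep inside NumPy.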

gwen233666 commented 3 months ago

(screenshot of the partition layout attached) Is this partition normal? The partitioning itself seems to have completed, but something goes wrong during training.

gwen233666 commented 3 months ago

Hello author, I have some questions about Manhattan alignment: I am not sure what a properly aligned result should look like. My guess is that the alignment may not have been done well, but I am not certain.

ellie684 commented 3 months ago

I had the same error. In my case it was because line 40 in utils/partition_utils.py hard-codes the image extension to .jpg when matching against partition_point_cloud/visible/*_camera.txt, so none of my (non-.jpg) images matched.
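If a dataset mixes extensions (.png, .JPG, ...), editing the hard-coded ".jpg" is still fragile; comparing file stems sidesteps the problem. A small illustrative helper (the name match_cameras_to_images is hypothetical, not from the repository):

```python
from pathlib import Path

def match_cameras_to_images(camera_names, image_files):
    # Compare file stems so "DSC001", "DSC001.jpg" and "DSC001.png" all
    # refer to the same camera, instead of hard-coding name + ".jpg".
    image_stems = {Path(f).stem for f in image_files}
    return [c for c in camera_names if Path(c).stem in image_stems]
```

A partition that matches nothing can then be reported explicitly rather than crashing later in np.hstack.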

gwen233666 commented 3 months ago

> I had the same error. In my case it was because Line 40 in utils/partition_utils.py hard codes the image type to be .jpg in the file: partition_point_cloud/visible/*_camera.txt.

Thanks for the advice. You're the best. It really has nothing to do with Manhattan alignment.

gwen233666 commented 3 months ago

> I had the same error. In my case it was because Line 40 in utils/partition_utils.py hard codes the image type to be .jpg in the file: partition_point_cloud/visible/*_camera.txt.

(screenshot attached) Hello buddy, following your suggestion I can now train normally, but I hit the same error again when running render.py, which is strange. Have you ever encountered this situation?

DuHao55 commented 1 month ago

I'm having exactly the same problem as you. May I ask how you solved the error reported by render.py? @gwen233666