Open ZhenyuSun-Walker opened 2 months ago
I use other models to generate a point cloud from the same input dataset and try to use it as the initialization for the Gaussian optimization in mvsgs. However, the errors above occur. How can I fix this problem?
Sorry for the late reply. We load the point cloud as initialization here. Please check it.
OK, sir. So you have pointed me to the specific place where I can change the loading directory. However, I found that if I replace the point cloud file with a .ply file that I generated through other methods, the error
```
Traceback (most recent call last):
  File "lib/train.py", line 236, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, init_ply=args.init_ply)
  File "lib/train.py", line 67, in training
    scene = Scene(dataset, gaussians, init_ply=init_ply)
  File "/home/sunzhenyu/Projects/MVSGaussian/lib/scene/__init__.py", line 86, in __init__
    self.gaussians.create_from_pcd(scene_info.point_cloud, self.cameras_extent)
  File "/home/sunzhenyu/Projects/MVSGaussian/lib/scene/gaussian_model.py", line 129, in create_from_pcd
    fused_point_cloud = torch.tensor(np.asarray(pcd.points)).float().cuda()
AttributeError: 'NoneType' object has no attribute 'points'
```
came up. It seems that there are some constraints on the .ply file used as the initialization of the Gaussian optimization, and simply swapping the original .ply file for another version does not work.
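For reference, here is a minimal sketch to inspect what the custom .ply actually contains, assuming the loader follows the standard 3DGS-style convention of reading x/y/z, red/green/blue and nx/ny/nz from the vertex element (the path below is a placeholder for your own file):

```python
from plyfile import PlyData  # pip install plyfile

ply_path = "path/to/your_point_cloud.ply"  # placeholder: the .ply from the other model

plydata = PlyData.read(ply_path)
for element in plydata.elements:
    print(f"element '{element.name}' with {element.count} entries")
    for prop in element.properties:
        print(f"    property {prop.name} ({prop.val_dtype})")
```

If the vertex element is missing any of those properties, a 3DGS-style reader typically fails while parsing and the scene info ends up with its point cloud set to None, which then surfaces as the AttributeError above.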
Looking forward to your earliest reply!
However, I simply put the new version of the .ply file in the place where the original version should be, and I changed its name to avoid the path error. The NoneType error still occurs.
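One way to surface the underlying failure instead of the downstream NoneType error is to redo the read outside any try/except. This is only a sketch under the assumption that MVSGaussian follows the original 3DGS fetchPly layout (xyz, RGB, and normals on the vertex element); the path is a placeholder:

```python
import numpy as np
from plyfile import PlyData

def fetch_ply_like(path):
    # Mirrors what a 3DGS-style fetchPly expects from the vertex element.
    # If any of these properties is missing, this raises the real exception
    # instead of silently falling back to None the way the scene reader does.
    vertices = PlyData.read(path)["vertex"]
    xyz = np.stack([vertices["x"], vertices["y"], vertices["z"]], axis=1)
    rgb = np.stack([vertices["red"], vertices["green"], vertices["blue"]], axis=1) / 255.0
    normals = np.stack([vertices["nx"], vertices["ny"], vertices["nz"]], axis=1)
    return xyz, rgb, normals

xyz, rgb, normals = fetch_ply_like("path/to/your_point_cloud.ply")  # placeholder path
print(xyz.shape, rgb.shape, normals.shape)
```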
From your description, can I infer the following: as long as the point cloud .ply file is obtained from the images of the four target views in the input dataset, and its path is correct, it can be optimized with MVSGaussian's Gaussian optimization? The point cloud does not necessarily have to be produced by the earlier run.py, right?
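If the externally generated cloud stores only xyz (no colors or normals), one possible workaround is to rewrite it with the missing vertex properties filled in before handing it to MVSGaussian. This is a sketch under the same assumed 3DGS-style vertex layout; the input/output paths are placeholders, and the dummy colors/normals are only there to satisfy the expected fields:

```python
import numpy as np
from plyfile import PlyData, PlyElement

src = PlyData.read("path/to/points_only.ply")["vertex"]   # placeholder input
xyz = np.stack([src["x"], src["y"], src["z"]], axis=1)

rgb = np.full((xyz.shape[0], 3), 128, dtype=np.uint8)     # dummy mid-gray colors
normals = np.zeros((xyz.shape[0], 3), dtype=np.float32)   # dummy normals

dtype = [("x", "f4"), ("y", "f4"), ("z", "f4"),
         ("nx", "f4"), ("ny", "f4"), ("nz", "f4"),
         ("red", "u1"), ("green", "u1"), ("blue", "u1")]
vertex = np.empty(xyz.shape[0], dtype=dtype)
vertex["x"], vertex["y"], vertex["z"] = xyz.T
vertex["nx"], vertex["ny"], vertex["nz"] = normals.T
vertex["red"], vertex["green"], vertex["blue"] = rgb.T

PlyData([PlyElement.describe(vertex, "vertex")]).write("path/to/points_with_attrs.ply")  # placeholder output
```

Whether zero normals and constant colors are acceptable depends on how the repo uses them downstream; they only ensure the file parses.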
Hello sir, I wonder how I can use mvsgs to optimize a .ply file produced by another model. I met this error:
```
(mvsgs) $ python lib/train.py --eval --iterations 3000 -s dataset/mvdif_demo_150_20i -p mvsgs_pointcloud/dtu_pretrain
Optimizing ./output/mvdif_demo_150_20i
Output folder: ./output/mvdif_demo_150_20i
Tensorboard not available: not logging progress
Reading camera 20/20
mvsgs_pointcloud/dtu_pretrain/mvdif_demo_150_20i/mvdif_demo_150_20i.ply
Loading Training Cameras
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K. If this is not desired, please explicitly specify '--resolution/-r' as 1
Loading Test Cameras
Traceback (most recent call last):
  File "lib/train.py", line 236, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, init_ply=args.init_ply)
  File "lib/train.py", line 67, in training
    scene = Scene(dataset, gaussians, init_ply=init_ply)
  File "/home/sunzhenyu/Projects/MVSGaussian/lib/scene/__init__.py", line 86, in __init__
    self.gaussians.create_from_pcd(scene_info.point_cloud, self.cameras_extent)
  File "/home/sunzhenyu/Projects/MVSGaussian/lib/scene/gaussian_model.py", line 129, in create_from_pcd
    fused_point_cloud = torch.tensor(np.asarray(pcd.points)).float().cuda()
AttributeError: 'NoneType' object has no attribute 'points'
```
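As a quick sanity check on the path side, the log above resolves the initialization file as <-p dir>/<scene>/<scene>.ply. A small sketch to confirm the file is actually at that location (the path is copied from the printed log line; adjust it if the repo resolves paths differently):

```python
import os

# Path taken verbatim from the log line printed before "Loading Training Cameras".
ply_path = "mvsgs_pointcloud/dtu_pretrain/mvdif_demo_150_20i/mvdif_demo_150_20i.ply"
print(os.path.isfile(ply_path), os.path.getsize(ply_path) if os.path.isfile(ply_path) else "missing")
```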