LightwheelAI / street-gaussians-ns

Unofficial implementation of "Street Gaussians for Modeling Dynamic Urban Scenes"
Apache License 2.0

No module named 'street_gaussians_ns.cameras' #1

Open sonnefred opened 1 month ago

sonnefred commented 1 month ago

Hi, thanks for reproducing the paper and sharing the work. I wanted to try it on the Waymo dataset, but I got the error below. Is there anything I missed while setting up the environment? Thanks.

(screenshot: ModuleNotFoundError: No module named 'street_gaussians_ns.cameras')

LightwheelAI commented 1 month ago

Thanks for reporting this, and I'm very sorry: the bug came from our process of cleaning up the code. I've updated the code to fix it, so please try the latest version!

sonnefred commented 1 month ago

> Thanks for reporting this, and I'm very sorry: the bug came from our process of cleaning up the code. I've updated the code to fix it, so please try the latest version!

Thanks for the reply, that problem is solved. But now I'm hitting another error and I'm not sure what's causing it. Thanks.

(screenshot: MaskFormer is not in the 'META_ARCH' registry)

zwlvd commented 1 month ago

I'm hitting the same error: MaskFormer is not in the 'META_ARCH' registry.

merriaux commented 1 month ago

Hi @zwlvd and @sonnefred, I think I found this one. In segs_generate.sh, the model path needs dependency changed to dependencies:

python dependencies/Mask2Former/segs_generate.py \
    --root_path $root \
    --config-file dependencies/Mask2Former/configs/mapillary-vistas/semantic-segmentation/swin/maskformer2_swin_large_IN21k_384_bs16_300k.yaml \
    --opts MODEL.WEIGHTS dependencies/Mask2Former/models/model_final_90ee2d.pkl

and in segs_generate.py we need to add: from mask2former.maskformer_model import MaskFormer
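For context, detectron2 only finds MaskFormer after the mask2former package has been imported, because the import runs its @META_ARCH_REGISTRY.register() decorator. A minimal sketch of what the top of segs_generate.py could look like after this change (setup_cfg here is a hypothetical helper modeled on the Mask2Former demo scripts; add_deeplab_config and add_maskformer2_config are the usual config hooks):

# Sketch only: assumes segs_generate.py builds the model from the config
# the same way the Mask2Former demo scripts do.
from detectron2.config import get_cfg
from detectron2.projects.deeplab import add_deeplab_config

from mask2former import add_maskformer2_config
from mask2former.maskformer_model import MaskFormer  # noqa: F401  (importing it registers MaskFormer in META_ARCH)

def setup_cfg(config_file, opts):
    cfg = get_cfg()
    add_deeplab_config(cfg)       # Mask2Former configs build on the DeepLab config
    add_maskformer2_config(cfg)   # adds the MaskFormer2-specific config keys
    cfg.merge_from_file(config_file)
    cfg.merge_from_list(opts)     # e.g. ["MODEL.WEIGHTS", "dependencies/Mask2Former/models/model_final_90ee2d.pkl"]
    cfg.freeze()
    return cfg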

best

zwlvd commented 1 month ago

Thank you for your advice @merriaux. I changed the code as you described, but I still hit the same error. Did you manage to run segs_generate.sh successfully?

merriaux commented 1 month ago

Hi @zwlvd, yes, it works on my side:

(street-gaussians-ns) pierre.merriaux@spcayqbgpu01:/data/workspace/pierre.merriaux/street-gaussians-ns(main)$ bash ./scripts/shells/segs_generate.sh /data/nerf/input-nerf/streetgaussian/training/10448102132863604198_472_000_492_000/ 
[05/23 15:49:53 detectron2]: Arguments: Namespace(base_dir='#####/Nuscenes/sweeps/', confidence_threshold=0.5, config_file='dependencies/Mask2Former/configs/mapillary-vistas/semantic-segmentation/swin/maskformer2_swin_large_IN21k_384_bs16_300k.yaml', opts=['MODEL.WEIGHTS', 'dependencies/Mask2Former/models/model_final_90ee2d.pkl'], output=None, root_path='/data/nerf/input-nerf/streetgaussian/training/10448102132863604198_472_000_492_000/', save_dir='#####/KittiOdom/sequences', video_input=None, webcam=False)
WARNING [05/23 15:49:53 fvcore.common.config]: Loading config dependencies/Mask2Former/configs/mapillary-vistas/semantic-segmentation/swin/../Base-MapillaryVistas-SemanticSegmentation.yaml with yaml.unsafe_load. Your machine may be at risk if the file contains malicious content.
/data/anaconda3/envs/street-gaussians-ns/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
[05/23 15:49:56 d2.checkpoint.detection_checkpoint]: [DetectionCheckpointer] Loading from dependencies/Mask2Former/models/model_final_90ee2d.pkl ...
[05/23 15:49:56 fvcore.common.checkpoint]: [Checkpointer] Loading from dependencies/Mask2Former/models/model_final_90ee2d.pkl ...
[05/23 15:49:56 fvcore.common.checkpoint]: Reading a file from 'MaskFormer Model Zoo'
Weight format of MultiScaleMaskedTransformerDecoder have changed! Please upgrade your models. Applying automatic conversion now ...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 915/915 [21:04<00:00,  1.38s/it]

merriaux commented 1 month ago

I also moved segs_generate.py into the mask2former folder; I don't know if that helps.
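An alternative to moving the file (a sketch, assuming everything is run from the repository root) is to put the Mask2Former repo on PYTHONPATH so the mask2former package resolves no matter where the script lives:

# assumption: run from the street-gaussians-ns repository root
export PYTHONPATH=$PYTHONPATH:$(pwd)/dependencies/Mask2Former
bash ./scripts/shells/segs_generate.sh $root   # $root = dataset root, as in the other scripts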

zwlvd commented 1 month ago

@merriaux Thank you very much, that helped.

merriaux commented 1 month ago

I managed to start a training run, but it crashed at iteration 3990:

Exception in thread Thread-17:                                                                                                                                                                       
Traceback (most recent call last):                                                                                                                                                                   
  File "/data/anaconda3/envs/street-gaussians-ns/lib/python3.8/threading.py", line 932, in _bootstrap_inner                                                                                          
    self.run()                                                                                                                                                                                       
  File "/data/anaconda3/envs/street-gaussians-ns/lib/python3.8/site-packages/nerfstudio/viewer/render_state_machine.py", line 224, in run                                                            
    outputs = self._render_img(action.camera_state)                                                                                                                                                  
  File "/data/anaconda3/envs/street-gaussians-ns/lib/python3.8/site-packages/nerfstudio/viewer/render_state_machine.py", line 168, in _render_img                                                    
    outputs = self.viewer.get_model().get_outputs_for_camera(camera, obb_box=obb)                                                                                                                    
  File "/data/anaconda3/envs/street-gaussians-ns/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context                                                              
    return func(*args, **kwargs)                                                                                                                                                                     
  File "/data/workspace/pierre.merriaux/street-gaussians-ns/street_gaussians_ns/sgn_splatfacto.py", line 1106, in get_outputs_for_camera                                                             
    outs = self.get_outputs(camera.to(self.device))                                               
  File "/data/workspace/pierre.merriaux/street-gaussians-ns/street_gaussians_ns/sgn_splatfacto_scene_graph.py", line 363, in get_outputs                                                             
    out = super().get_outputs(camera)                                                             
  File "/data/workspace/pierre.merriaux/street-gaussians-ns/street_gaussians_ns/sgn_splatfacto.py", line 858, in get_outputs                                                                         
    colors_crop = torch.cat((features_dc_crop, features_rest_crop), dim=1)                        
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 298587 but got size 437860 for tensor number 1 in the list.

In the dataset preparation, a few paths are wrong. Here is my repo diff, in case it helps:

(street-gaussians-ns) pierre.merriaux@spcayqbgpu01:/data/workspace/pierre.merriaux/street-gaussians-ns(main)$ git diff
diff --git a/scripts/shells/data_process.sh b/scripts/shells/data_process.sh
index 7c5d810..bae740c 100644
--- a/scripts/shells/data_process.sh
+++ b/scripts/shells/data_process.sh
@@ -8,4 +8,4 @@ sh scripts/shells/run_colmap.sh $root

 sh scripts/shells/points_cloud_generate.sh $root

-sh scripts/shells/extract_object_pts.py $root
\ No newline at end of file
+sh scripts/shells/object_pts_generate.sh $root
\ No newline at end of file
diff --git a/scripts/shells/points_cloud_generate.sh b/scripts/shells/points_cloud_generate.sh
index a34a3ca..0e03f6d 100644
--- a/scripts/shells/points_cloud_generate.sh
+++ b/scripts/shells/points_cloud_generate.sh
@@ -2,10 +2,10 @@ root=$1

 python scripts/pythons/pcd2colmap_points3D.py \
     --root_path $root \
-    --output_path $root/colmap/lidar/points3D.txt \
+#    --output_path $root/colmap/lidar/points3D.txt \
     --main_lidar_in_transforms lidar_FRONT \

 python scripts/pythons/colmap_pts_combine.py \
-    --src1 $root/colmap/lidar/points3D.txt \
+    --src1 $root/colmap/sparse/lidar/points3D.txt \
     --src2 $root/colmap/sparse/origin/points3D.txt \
     --dst $root/colmap/points3D.txt
\ No newline at end of file
diff --git a/scripts/shells/run_colmap.sh b/scripts/shells/run_colmap.sh
index ce67f89..92cde33 100644
--- a/scripts/shells/run_colmap.sh
+++ b/scripts/shells/run_colmap.sh
@@ -33,7 +33,7 @@ colmap mapper \
     --Mapper.filter_min_tri_angle 0.1 \

 colmap model_comparer \
-    --input_path1 $DATASET_PATH/colmap/sparse/not_align \
+    --input_path1 $DATASET_PATH/colmap/sparse/not_align/3 \
     --input_path2 $DATASET_PATH/colmap/sparse/origin \
     --output_path $DATASET_PATH/colmap/sparse/0 \
     --alignment_error proj_center
@@ -44,4 +44,4 @@ colmap point_triangulator \
     --database_path $DATASET_PATH/colmap/database.db \
     --image_path $DATASET_PATH/images \
     --input_path $DATASET_PATH/colmap/sparse/origin \
-    --output_path $DATASET_PATH/colmap/sparse/origin \
\ No newline at end of file
+    --output_path $DATASET_PATH/colmap/sparse/0 \
\ No newline at end of file
diff --git a/scripts/shells/segs_generate.sh b/scripts/shells/segs_generate.sh
index e36a5a3..6b22c2f 100644
--- a/scripts/shells/segs_generate.sh
+++ b/scripts/shells/segs_generate.sh
@@ -3,4 +3,4 @@ root=$1
 python dependencies/Mask2Former/segs_generate.py \
     --root_path $root \
     --config-file dependencies/Mask2Former/configs/mapillary-vistas/semantic-segmentation/swin/maskformer2_swin_large_IN21k_384_bs16_300k.yaml \
-    --opts MODEL.WEIGHTS dependency/Mask2Former/models/model_final_90ee2d.pkl
\ No newline at end of file
+    --opts MODEL.WEIGHTS dependencies/Mask2Former/models/model_final_90ee2d.pkl
\ No newline at end of file

And I am not sure about my colmap processing: I needed to change --input_path1 of the model_comparer step and --output_path of the point_triangulator step, and I also had to create a lidar folder in colmap/sparse.

thanks

LightwheelAI commented 1 month ago

@merriaux Thank you very much for catching the problems in our scripts! I made some mistakes in points_cloud_generate.sh and run_colmap.sh. In scripts/pythons/pcd2colmap_points3D.py, the output path defaults to colmap/sparse/lidar/points3D.txt, so we need to create a colmap/sparse/lidar folder and use that path as the --src1 of scripts/pythons/colmap_pts_combine.py. In run_colmap.sh, the --input_path1 of model_comparer should be $DATASET_PATH/colmap/sparse/not_align/0, which corresponds to the --output_path of the mapper. The point_triangulator step is not necessary if the mapper succeeded; its --output_path would create a new SfM points file and could overwrite the mapper's result.

I have fixed the bugs in the bash files and pushed the update to the repository based on your feedback!
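For reference, a sketch of how the relevant parts look after these fixes (based on the description above; the exact flags in the updated scripts may differ slightly):

# points_cloud_generate.sh: pcd2colmap_points3D.py writes to colmap/sparse/lidar/points3D.txt
# by default, so that folder must exist and is then used as --src1 when combining.
mkdir -p $root/colmap/sparse/lidar

python scripts/pythons/pcd2colmap_points3D.py \
    --root_path $root \
    --main_lidar_in_transforms lidar_FRONT

python scripts/pythons/colmap_pts_combine.py \
    --src1 $root/colmap/sparse/lidar/points3D.txt \
    --src2 $root/colmap/sparse/origin/points3D.txt \
    --dst $root/colmap/points3D.txt

# run_colmap.sh: model_comparer takes the mapper output as --input_path1;
# point_triangulator is only needed when the mapper fails.
colmap model_comparer \
    --input_path1 $DATASET_PATH/colmap/sparse/not_align/0 \
    --input_path2 $DATASET_PATH/colmap/sparse/origin \
    --output_path $DATASET_PATH/colmap/sparse/0 \
    --alignment_error proj_center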

LightwheelAI commented 1 month ago

@merriaux Regarding the problem you had during training, I'm sorry, but I can't reproduce it. Would you be so kind as to make a few more attempts and gather more information?


merriaux commented 1 month ago

Thanks for everything @LightwheelAI. Let me first retry with the correct data processing pipeline and see if the training goes better.

merriaux commented 1 month ago

Hi @LightwheelAI, there is something I don't understand in the data preparation:

colmap model_comparer \
    --input_path1 $DATASET_PATH/colmap/sparse/not_align/0 \
    --input_path2 $DATASET_PATH/colmap/sparse/origin \
    --output_path $DATASET_PATH/colmap/sparse/0 \
    --alignment_error proj_center

I wasn't able to find documentation for model_comparer (I looked directly in the colmap code), but in the end it just compares two models and gives us two text files (errors_summary.txt and errors.csv). The next step of the data processing crashes because it looks for a colmap model in sparse/0. Did you forget to call model_aligner? From my understanding of the data processing, we want to merge the colmap mapper point cloud into the waymo point cloud's reference frame, or is it the waymo point cloud into the colmap mapper point cloud's reference frame?

If I am right, this is what the data processing is supposed to do:

  • camera poses and intrinsic parameters from waymo
  • background point cloud: waymo front lidar (colorized from the cameras, with points inside the object boxes removed) + the colmap mapper point cloud (which needs alignment because it comes from the mapper)
  • objects: lidar TOP points inside the object boxes, colorized by the cameras

Could you please help me understand? Thanks

blackmrb commented 1 month ago

100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 995/995 [00:50<00:00, 19.82it/s]
Traceback (most recent call last):
  File "scripts/pythons/colmap_pts_combine.py", line 27, in <module>
    colmap_points2 = read(args.src2)
  File "scripts/pythons/colmap_pts_combine.py", line 17, in read
    return colmap_utils.read_points3D_binary(path)
  File "/mnt/data/code/nerfstudio/nerfstudio/data/utils/colmap_parsing_utils.py", line 344, in read_points3D_binary
    with open(path_to_model_file, "rb") as fid:
FileNotFoundError: [Errno 2] No such file or directory: '/root/code/street-gaussians-ns/data/waymo/processed/training/1191788760630624072_3880_000_3900_000/colmap/sparse/0/points3D.bin'

I have the same problem.

> Hi @LightwheelAI, there is something I don't understand in the data preparation:
>
> colmap model_comparer \
>     --input_path1 $DATASET_PATH/colmap/sparse/not_align/0 \
>     --input_path2 $DATASET_PATH/colmap/sparse/origin \
>     --output_path $DATASET_PATH/colmap/sparse/0 \
>     --alignment_error proj_center
>
> I wasn't able to find documentation for model_comparer (I looked directly in the colmap code), but in the end it just compares two models and gives us two text files (errors_summary.txt and errors.csv). The next step of the data processing crashes because it looks for a colmap model in sparse/0. Did you forget to call model_aligner? From my understanding of the data processing, we want to merge the colmap mapper point cloud into the waymo point cloud's reference frame, or is it the waymo point cloud into the colmap mapper point cloud's reference frame?
>
> If I am right, this is what the data processing is supposed to do:
>
>   • camera poses and intrinsic parameters from waymo
>   • background point cloud: waymo front lidar (colorized from the cameras, with points inside the object boxes removed) + the colmap mapper point cloud (which needs alignment because it comes from the mapper)
>   • objects: lidar TOP points inside the object boxes, colorized by the cameras
>
> Could you please help me understand? Thanks

LightwheelAI commented 1 month ago

@merriaux Thanks for trying, your understanding is spot on! I'm very sorry, I neglected to mention that we made additional changes to colmap so that it outputs points3D.bin. Only a few parts need to be modified; you can refer to how we modified it in the screenshot below and then recompile your colmap. best

(screenshot of the colmap modification)

merriaux commented 1 month ago

Hi @LightwheelAI, thanks for your message. Here are the alignment errors reported by model_comparer:

Rotation errors (degrees)
Min: 43.9831  Max: 95.4597  Mean: 54.4792  Median: 48.8893  P90: 85.0368  P99: 93.6138

Projection center errors
Min: 0.00893002  Max: 83.8678  Mean: 8.66299  Median: 1.58935  P90: 31.7972  P99: 74.711


I'm quite surprised by the transformation matrix it found:

-7.33663   1.63006  -10.2083   -163.678
 6.14792  -9.37555   -5.91557   244.389
-8.31077  -8.37459    4.63565     4.07124


Do you think this makes sense?
thanks

merriaux commented 1 month ago

It looks like there is a typo in scripts/shells/data_process.sh: the line sh scripts/shells/extract_object_pts.py $root needs to be replaced by sh scripts/shells/object_pts_generate.sh $root.

I now have an issue with the dataparser: camera_id goes up to 995 different cameras instead of 5. cam_id_to_camera = colmap_utils.read_cameras_binary(recon_dir / "cameras.bin") returns 995 entries, so the datamanager fails afterwards. It looks like the mapper doesn't keep one camera id per image folder.
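In case it is relevant, one way to make colmap share intrinsics per image subfolder (so it keeps one camera entry per Waymo camera instead of one per image) is the ImageReader.single_camera_per_folder option at feature extraction time. A sketch only; I'm not sure whether run_colmap.sh already sets it:

# assumption: images are grouped in one subfolder per Waymo camera under $DATASET_PATH/images
colmap feature_extractor \
    --database_path $DATASET_PATH/colmap/database.db \
    --image_path $DATASET_PATH/images \
    --ImageReader.single_camera_per_folder 1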

LightwheelAI commented 1 month ago

Thanks for the heads up, we've updated the data_process.sh file.

Since we use waymo's camera poses to initialize the SfM point cloud computation, and that part of the code is not open source yet, if you get large errors please try using only the raw camera poses and point cloud from the waymo data.

In our dataparser, we load all 995 images and use --filter_camera_id to select the specified cameras. It could be a problem in the data processing step.

best

merriaux commented 1 month ago

Hi @LightwheelAI, thanks for your answer. In the end, during training, do you use the poses and intrinsics from the waymo dataset, or the optimized poses and intrinsics from the SfM/colmap preprocessing?

LightwheelAI commented 1 month ago

Thanks for your interest in our work! It depends: we use the optimized parameters when colmap runs successfully, otherwise we use waymo's original parameters :)

merriaux commented 1 month ago

thanks