OPEN-AIR-SUN / mars

MARS: An Instance-aware, Modular and Realistic Simulator for Autonomous Driving
Apache License 2.0

objects cannot be rendered on the background #62

Closed wutongtong closed 1 year ago

wutongtong commented 1 year ago

Hi, I successfully generated objects and background on the road dataset, but the objects are not rendered on the background. I separately generated "objects_rgb" and "background" as shown below; can you give me some advice?

[Screenshot 2023-08-30 15:38:25] [Screenshot 2023-08-30 15:38:55]

Carl-Carl commented 1 year ago

Thank you for your feedback. Static objects in the scene are always learned by the background NeRF instead of car NeRF, because the model cannot distinguish static objects from the background if they never move in the whole image sequence, and treating them as 2D "images" is the "easiest" way for the model to converge.

However, I think there might be something wrong with the poses in your experiment, according to the bizarre shapes and positions of objects in the second picture. Perhaps you could start by checking the pose of bounding boxes and cameras.

Carl-Carl commented 1 year ago

The background looks quite normal. The model has separated all moving objects from the background successfully. Maybe the poses of bounding boxes cause the noise.

Carl-Carl commented 1 year ago

The GT image is of great help. Since all rendered cars in the second image are black, the latent codes of the cars may be getting mixed. Please switch object_representation from "class-wise" to "object-wise" to disentangle the latent codes of the objects. The positions of the objects look correct, because all moving cars are separated from the background, but their rotations are wrong. I therefore suspect the issue is caused by the yaw angle of the input bounding boxes; try negating the yaw angle of the objects.

[image]
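For illustration, a minimal sketch of what negating the yaw might look like when preparing the bounding-box metadata; the array layout and yaw index below are hypothetical, not the actual MARS data format:

import numpy as np

# obj_meta: (N, 8) array of per-object annotations, e.g.
# [track_id, x, y, z, length, height, width, yaw] -- this layout is hypothetical.
YAW_IDX = 7

def flip_yaw(obj_meta: np.ndarray) -> np.ndarray:
    """Negate the yaw angle of every bounding box (sign-convention fix)."""
    fixed = obj_meta.copy()
    fixed[:, YAW_IDX] = -fixed[:, YAW_IDX]
    return fixed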

jelleopard commented 1 year ago

@Carl-Carl Hello, this error occurred when I used the configuration file below; can you give me some help? Also, when I use NerfactoModelConfig as the background model, the background RGB image is empty. What is the reason for this? Thanks.

KITTI_Recon_NSG_Car_Depth = MethodSpecification(
    config=TrainerConfig(
        method_name="nsg-kitti-car-depth-recon",
        steps_per_eval_image=STEPS_PER_EVAL_IMAGE,
        steps_per_eval_all_images=STEPS_PER_EVAL_ALL_IMAGES,
        steps_per_save=STEPS_PER_SAVE,
        max_num_iterations=MAX_NUM_ITERATIONS,
        save_only_latest_checkpoint=False,
        mixed_precision=False,
        use_grad_scaler=True,
        log_gradients=True,
        pipeline=NSGPipelineConfig(
            datamanager=NSGkittiDataManagerConfig(
                dataparser=NSGkittiDataParserConfig(
                    use_car_latents=True,
                    use_depth=True,
                    # use_semantic=False,
                    use_semantic=True,
                    semantic_mask_classes=['Van', 'Undefined'],
                    semantic_path=Path("/home/mars/data/kitti/panoptic_maps"),
                    split_setting="reconstruction",
                    car_object_latents_path=Path(
                        "/home/mars/latents/KITTI-MOT/car-object-latents/latent_codes_car_van_truck.pt"
                    ),
                    car_nerf_state_dict_path=Path("/home/mars/latents/KITTI-MOT/car-nerf-state-dict/epoch_670.ckpt"),
                ),
                train_num_rays_per_batch=4096,
                eval_num_rays_per_batch=4096,
                camera_optimizer=CameraOptimizerConfig(mode="off"),
            ),
            model=SceneGraphModelConfig(
                background_model=SemanticNerfWModelConfig(
                    num_proposal_iterations=1,
                    num_proposal_samples_per_ray=[48],
                    num_nerf_samples_per_ray=97,
                    use_single_jitter=False,
                    semantic_loss_weight=0.1
                ),
                # background_model=NerfactoModelConfig(),
                mono_depth_loss_mult=0.05,
                depth_loss_mult=0,
                use_sky_model=True,
                object_model_template=CarNeRFModelConfig(_target=CarNeRF),
                # object_representation="class-wise",
                object_representation="object-wise",
                object_ray_sample_strategy="remove-bg",
            ),
        ),
        optimizers={
            "background_model": {
                "optimizer": RAdamOptimizerConfig(lr=1e-3, eps=1e-15),
                "scheduler": ExponentialDecaySchedulerConfig(lr_final=1e-5, max_steps=200000),
            },
            "learnable_global": {
                "optimizer": RAdamOptimizerConfig(lr=1e-3, eps=1e-15),
                "scheduler": ExponentialDecaySchedulerConfig(lr_final=1e-5, max_steps=200000),
            },
            "object_model": {
                "optimizer": RAdamOptimizerConfig(lr=5e-3, eps=1e-15),
                "scheduler": ExponentialDecaySchedulerConfig(lr_final=1e-5, max_steps=200000),
            },
            "sky_model": {
                "optimizer": RAdamOptimizerConfig(lr=5e-3, eps=1e-15),
                "scheduler": ExponentialDecaySchedulerConfig(lr_final=1e-5, max_steps=200000),
            },
        },
        # viewer=ViewerConfig(num_rays_per_chunk=1 << 15),
        # vis="wandb",
        vis="tensorboard",
    ),
    description="Neural Scene Graph implementation with vanilla-NeRF model for background and object models.",
)
Traceback (most recent call last):
  File "/home/anaconda3/envs/SUDS/bin/ns-train", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/mars/nerfstudio/nerfstudio/scripts/train.py", line 262, in entrypoint
    main(
  File "/home/mars/nerfstudio/nerfstudio/scripts/train.py", line 248, in main
    launch(
  File "/home/mars/nerfstudio/nerfstudio/scripts/train.py", line 187, in launch
    main_func(local_rank=0, world_size=world_size, config=config)
  File "/home/mars/nerfstudio/nerfstudio/scripts/train.py", line 101, in train_loop
    trainer.setup()
  File "/home/mars/nerfstudio/nerfstudio/engine/trainer.py", line 151, in setup
    self.pipeline = self.config.pipeline.setup(
  File "/home/mars/nerfstudio/nerfstudio/configs/base_config.py", line 58, in setup
    return self._target(self, **kwargs)
  File "/home/mars/nsg/nsg_pipeline.py", line 94, in __init__
    self._model = config.model.setup(
  File "/home/mars/nerfstudio/nerfstudio/configs/base_config.py", line 58, in setup
    return self._target(self, **kwargs)
  File "/home/mars/nerfstudio/nerfstudio/models/base_model.py", line 82, in __init__
    self.populate_modules()  # populate the modules
  File "/home/mars/nsg/models/scene_graph.py", line 237, in populate_modules
    self.cross_entropy_loss = torch.nn.CrossEntropyLoss(reduction="mean", ignore_index=self.semantic_num)
  File "/home/anaconda3/envs/SUDS/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1269, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'SceneGraphModel' object has no attribute 'semantic_num'
AmazingRoad commented 1 year ago

> (quoting @jelleopard's configuration and traceback from the comment above)

@jelleopard you can add this in scene_graph.py: [image]
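A minimal sketch of that kind of fix, defining self.semantic_num in populate_modules() before the cross-entropy loss is constructed; the lookup below is an assumption, not necessarily the exact line from the screenshot:

# In nsg/models/scene_graph.py, inside SceneGraphModel.populate_modules(),
# before self.cross_entropy_loss is created.

# ASSUMPTION: the semantic class list is available from the dataparser
# metadata handed to the model; the exact lookup may differ in your setup.
if not hasattr(self, "semantic_num"):
    semantics = self.kwargs["metadata"]["semantics"]
    self.semantic_num = len(semantics.classes)

self.cross_entropy_loss = torch.nn.CrossEntropyLoss(
    reduction="mean", ignore_index=self.semantic_num
)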

wutongtong commented 1 year ago

> taking negative values for the yaw angle of the objects

@Carl-Carl I negated the yaw angle, and the object yaw angle is now correct. However, the objects are rendered as if seen from below (an upward view from the camera); how can I convert this to a top-down view? Can you give me some advice? [Screenshot 2023-08-31 13:12:59]

wutongtong commented 1 year ago

Hi @Carl-Carl, I also trained on the KITTI dataset; the "background", "objects_rgb", and "rgb" outputs are shown below, and no cars are rendered in the "rgb" image. [Screenshot 2023-08-31 15:38:13: background] [Screenshot 2023-08-31 15:38:26: rgb] [Screenshot 2023-08-31 15:38:43: objects_rgb]

Carl-Carl commented 1 year ago

The experiment result on the KITTI dataset is quite abnormal. I am sure the code posted on GitHub works well on this dataset, so I wonder whether you have modified any part of the code or configuration for your experiment. By the way, which configuration in cicai_configs.py are you using now?

wutongtong commented 1 year ago

This is my cicai_configs.py:

[Screenshot 2023-08-31 16:23:51]

Carl-Carl commented 1 year ago

It seems that some settings in the config have been changed. Would you mind testing our full model "KITTI_Recon_NSG_Car_Depth" with our original code and configuration? Given that the original code runs properly, it is likely that the issue stems from modifications to it.

wutongtong commented 1 year ago

Thank you for your reply, I have solved the problem!

AmazingRoad commented 1 year ago

It seems that you need to select the right 'scale_factor' in the config. When I use 'scale_factor=0.05' (because the max pose value is 24), cars begin to show up. @Carl-Carl @wutongtong [image]
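For illustration, a sketch of where such a change would go, assuming the dataparser config exposes a scale_factor field (check your local NSGkittiDataParserConfig for the exact field name):

# Hypothetical tweak to the dataparser block from the config earlier in this thread;
# NSGkittiDataParserConfig is imported the same way as in cicai_configs.py.
dataparser = NSGkittiDataParserConfig(
    use_car_latents=True,
    use_depth=True,
    split_setting="reconstruction",
    # ASSUMPTION: scale_factor rescales camera/object poses so they fit the
    # scene box; 0.05 brings a max pose value of ~24 down to roughly unit scale.
    scale_factor=0.05,
    # ... other fields unchanged from the config above
)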