SysCV / LiDAR_snow_sim

LiDAR snowfall simulation
https://www.trace.ethz.ch/lidar_snow_sim

openpcdet configs to reproduce results #16

Closed barzanisar closed 1 year ago

barzanisar commented 1 year ago

Hey Martin,

Can you please confirm if we can use "dense_dataset_snow_wet_coupled.yaml" to reproduce the results for "Your snow+wet" and "dense_dataset_snow_uniform_gunn_1in10.yaml" to reproduce the results for "Your snow"?

Does this mean you only used "lidar_hdl64_strongest" (and not hdl64 last or vlp32 strongest/last) for training and validating all experiments? Also, the train/val splits are given in the LiDAR_snow_sim/splits folder, but you also have FOV3000last and FOV3000strongest info pkls in your OpenPCDet/data/dense repo. While training for the snow/fog sim (both papers), did you ignore frames that have fewer than 3000 points in the camera FOV? If you did, can you please point me to where in your code you do that, and whether the 3000-point check is done before or after the snow/fog sim?

Your reply will be greatly appreciated. Thanks!

MartinHahner commented 1 year ago

Hi Barza, the splits given in this repo are "FOV3000strongest", so the "bad" frames are not in these lists anymore and these splits are the ones we used.

Can you please confirm if we can use "dense_dataset_snow_wet_coupled.yaml" to reproduce the results for "Your snow+wet" and "dense_dataset_snow_uniform_gunn_1in10.yaml" to reproduce the results for "Your snow"?

Yes, this is correct. Note though, as stated in the paper: We report for each experiment the average performance over three independent training runs.

Does this mean you only used "lidar_hdl64_strongest" (and not hdl64 last or vlp32 strongest/last) for training and validating all experiments?

Yes, this is correct, too. We only used "lidar_hdl64_strongest".

While training for the snow/fog sim (both papers), did you ignore frames that have fewer than 3000 points in the camera FOV? If you did, can you please point me to where in your code you do that, and whether the 3000-point check is done before or after the snow/fog sim?

For the snow sim paper, we ignored all frames with fewer than 3000 points in the camera FOV. For the fog sim paper, we did not, because we had not discovered this issue yet at that time.

Once we found out that there are multiple "bad" frames (especially in the snowy data), I did this check once "offline" with a small script that I unfortunately can't find anymore, but I can explain what I did: I merged the day & night lists from here, then I iterated over every frame in these lists and checked how many points there are in the camera FOV. If a frame did not have at least 3000 points in the camera FOV, I removed it from the list. The resulting lists are the ones in the splits folder of this repository, so there is no need to check anything "online" on your end anymore.
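
For reference, a rough reconstruction of what such an offline filtering script could look like (not the original, which is lost; the split file names, data layout, and FOV check below are illustrative placeholders you would swap for the actual DENSE camera projection):

import numpy as np
from pathlib import Path

MIN_POINTS_IN_FOV = 3000

def in_camera_fov(points, half_angle_deg=45.0):
    # crude stand-in for the real camera-FOV check: keep points in front of the
    # sensor within +/- half_angle_deg; replace with the actual image projection
    angles = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    return (points[:, 0] > 0) & (np.abs(angles) <= half_angle_deg)

def load_points(frame_id):
    # placeholder loader; adjust the path and number of columns to your copy of the data
    raw = np.fromfile(f'lidar_hdl64_strongest/{frame_id}.bin', dtype=np.float32)
    return raw.reshape(-1, 5)[:, :3]

if __name__ == '__main__':
    # merge the day & night split lists (file names are illustrative)
    frames = []
    for split in ('train_clear_day.txt', 'train_clear_night.txt'):
        frames += Path(split).read_text().split()

    # keep only frames with at least 3000 points inside the camera FOV
    kept = [f for f in frames if in_camera_fov(load_points(f)).sum() >= MIN_POINTS_IN_FOV]
    Path('train_clear.txt').write_text('\n'.join(kept) + '\n')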

I hope this helps. Greets, Martin

MartinHahner commented 1 year ago

From wandb:

[wandb screenshot of the results]

In the paper:

[screenshot of the corresponding table in the paper]

barzanisar commented 1 year ago

Thank you for replying. How did you get distance-wise mAP? Did you make that code public?

barzanisar commented 1 year ago

I have marked in bold 2 AP results for moderate CAR. Which one did you report in your paper?

INFO  Car AP@0.70, 0.70, 0.70:
bbox AP:62.0437, 59.4732, 55.5099
bev  AP:59.4023, 58.1002, 53.5795
3d   AP:38.1604, 37.3925, 35.1047
aos  AP:42.62, 39.69, 36.74
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:61.7183, 59.5399, 54.4184
bev  AP:58.5848, 56.7695, 51.5704
3d   AP:34.6149, 34.4679, 31.3272
aos  AP:41.30, 38.34, 34.48
Car AP@0.70, 0.50, 0.50:
bbox AP:62.0437, 59.4732, 55.5099
bev  AP:74.8232, 73.8855, 67.7796
3d   AP:71.9240, 70.6191, 65.6124
aos  AP:42.62, 39.69, 36.74
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:61.7183, 59.5399, 54.4184
bev  AP:75.8713, 74.3578, 69.2796
3d   AP:72.3536, 71.0215, 65.5147
aos  AP:41.30, 38.34, 34.48

barzanisar commented 1 year ago

I checked the frames in train_clear.txt. It contains some frames which have fewer than 3000 points in the camera FOV (before augmenting snow). Did you only apply the 3000-point check to test_snow.txt?

barzanisar commented 1 year ago

Also, you included FOG_AUGMENTATION_AFTER: False in all your configs (e.g. here), but in your dense_dataset.py you augment fog by just checking whether 'FOG_AUGMENTATION_AFTER' is in the cfg at all, not whether it is True or False. This looks like a bug: it means that if I train with dense_dataset_snow_wet_coupled.yaml, it will always augment fog after snow.

barzanisar commented 1 year ago

Also, for me it has been impossible to obtain 40+ AP at the 0.7 IoU threshold in 80 epochs for moderate Car, even when I train only on clear data (without any simulation) and test on clear. I wanted to know: what is your batch size per GPU, and how many GPUs are you using for training?

barzanisar commented 1 year ago

Shouldn't this pass self.dataset_cfg.DATA_AUGMENTOR instead of self.dataset_cfg as augmentor_configs?

barzanisar commented 1 year ago

You forward mor to the data augmentor, but you don't use it in data_augmentor.py. Does this need cleaning up, or do you actually use mor in the data augmentor but haven't pushed the latest data_augmentor.py?

MartinHahner commented 1 year ago

Thank you for replying. How did you get distance-wise mAP? Did you make that code public?

see https://github.com/SysCV/LiDAR_snow_sim/issues/11

MartinHahner commented 1 year ago

I have marked in bold 2 AP results for moderate CAR. Which one did you report in your paper?

INFO  Car AP@0.70, 0.70, 0.70:
bbox AP:62.0437, 59.4732, 55.5099
bev  AP:59.4023, 58.1002, 53.5795
3d   AP:38.1604, 37.3925, 35.1047
aos  AP:42.62, 39.69, 36.74
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:61.7183, 59.5399, 54.4184
bev  AP:58.5848, 56.7695, 51.5704
3d   AP:34.6149, 34.4679, 31.3272
aos  AP:41.30, 38.34, 34.48
Car AP@0.70, 0.50, 0.50:
bbox AP:62.0437, 59.4732, 55.5099
bev  AP:74.8232, 73.8855, 67.7796
3d   AP:71.9240, 70.6191, 65.6124
aos  AP:42.62, 39.69, 36.74
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:61.7183, 59.5399, 54.4184
bev  AP:75.8713, 74.3578, 69.2796
3d   AP:72.3536, 71.0215, 65.5147
aos  AP:41.30, 38.34, 34.48

I don't know what the two bold numbers refer to. I did not use these confusing printout tables.

MartinHahner commented 1 year ago

I checked the frames in train_clear.txt. It contains some frames which have fewer than 3000 points in the camera FOV (before augmenting snow). Did you only apply the 3000-point check to test_snow.txt?

Yes, it could be that we only filtered test_snow.txt.

MartinHahner commented 1 year ago

Also, you included FOG_AUGMENTATION_AFTER: False in all your configs (e.g. here), but in your dense_dataset.py you augment fog by just checking whether 'FOG_AUGMENTATION_AFTER' is in the cfg at all, not whether it is True or False. This looks like a bug: it means that if I train with dense_dataset_snow_wet_coupled.yaml, it will always augment fog after snow.

I don't think it is a bug (it's just logic that got more and more complicated over time), because as long as the yaml contains

FOG_AUGMENTATION: False
FOG_AUGMENTATION_AFTER: False

in the code no augmentation method will be set.

augmentation_method = None

Then nothing happens inside foggify.
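
To make this concrete, here is a simplified, self-contained sketch of that logic (it mirrors the yaml keys above, but it is not the literal dense_dataset.py code):

dataset_cfg = {
    'FOG_AUGMENTATION': False,        # fog applied before the snow simulation
    'FOG_AUGMENTATION_AFTER': False,  # fog applied after the snow simulation
}

# the method is only set when one of the flags has a truthy value
augmentation_method = None
if dataset_cfg.get('FOG_AUGMENTATION'):
    augmentation_method = dataset_cfg['FOG_AUGMENTATION']
if dataset_cfg.get('FOG_AUGMENTATION_AFTER'):
    augmentation_method = dataset_cfg['FOG_AUGMENTATION_AFTER']

def foggify(points, method):
    if method is None:
        return points  # both flags are False -> foggify is a no-op
    # ... the fog simulation would be applied here ...
    return points

# with the config above, augmentation_method stays None and no fog is ever added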

MartinHahner commented 1 year ago

Also, for me it has been impossible to obtain 40+ AP at the 0.7 IoU threshold in 80 epochs for moderate Car, even when I train only on clear data (without any simulation) and test on clear. I wanted to know: what is your batch size per GPU, and how many GPUs are you using for training?

We used the maximum batch size we could fit on four GeForce RTX 2080 Ti GPUs. For PV-RCNN, e.g., the batch size was set to eight.

MartinHahner commented 1 year ago

Shouldn't this pass self.dataset_cfg.DATA_AUGMENTOR instead of self.dataset_cfg as augmentor_configs?

Yes, you should change it (back) to self.dataset_cfg.DATA_AUGMENTOR. I played around with some changes in DataAugmentor for which I needed the entire dataset config (see the code snippet of my modified DataAugmentor below).

class DataAugmentor(object):
    def __init__(self, root_path, dataset_config, class_names, logger=None):
        self.root_path = root_path
        self.class_names = class_names
        self.logger = logger
        self.dataset_config = dataset_config

        augmentor_configs = self.dataset_config.DATA_AUGMENTOR

    ...

barzanisar commented 1 year ago

Also, for me it has been impossible to obtain 40+ AP at the 0.7 IoU threshold in 80 epochs for moderate Car, even when I train only on clear data (without any simulation) and test on clear. I wanted to know: what is your batch size per GPU, and how many GPUs are you using for training?

We used the maximum batch size we could fit on four GeForce RTX 2080 Ti GPUs. For PV-RCNN, e.g., the batch size was set to eight.

Did you use a batch size of 8 per GPU, meaning a total batch size of 4 * 8 = 32?

MartinHahner commented 1 year ago

You forward mor to the data augmentor, but you don't use it in data_augmentor.py. Does this need cleaning up, or do you actually use mor in the data augmentor but haven't pushed the latest data_augmentor.py?

MOR is used in database_sampler.py; that is why it is passed on to data_augmentor.py.

But as I recall, MOR is only relevant for the fog simulation paper. The code below ensures that during "GT oversampling" no objects farther away than the MOR are sampled.

def sample_with_fixed_number(self, class_name, sample_group, mor=np.inf):
    """
    Args:
        class_name:
        sample_group:
        mor: meteorological optical range in meters
    Returns:
    """
    sample_num, pointer, indices = int(sample_group['sample_num']), sample_group['pointer'], sample_group['indices']
    if pointer >= len(self.db_infos[class_name]):
        indices = np.random.permutation(len(self.db_infos[class_name]))
        pointer = 0

    limit_by_mor = self.dataset_cfg.get('LIMIT_BY_MOR', False)

    if mor < np.inf and limit_by_mor:

        trials = 0
        sampled_dict = []

        # walk through the shuffled database until enough samples within the MOR are found
        while len(sampled_dict) < sample_num:

            try:
                idx = indices[pointer + trials]
            except IndexError:
                break

            sample = self.db_infos[class_name][idx]
            box = sample['box3d_lidar']
            dist = np.linalg.norm(box[0:3])

            # only keep GT samples whose box center is closer than the MOR
            if dist < mor:
                sampled_dict.append(self.db_infos[class_name][idx])

            trials += 1

        # advance the pointer by the number of candidates that were inspected
        sample_num = trials

    else:

        sampled_dict = [self.db_infos[class_name][idx] for idx in indices[pointer: pointer + sample_num]]

    pointer += sample_num
    sample_group['pointer'] = pointer
    sample_group['indices'] = indices
    return sampled_dict

Sorry that this code snippet was not included in the code release.
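
Note, based on the snippet itself: the MOR-based filtering only takes effect when a finite mor is passed in and the dataset yaml enables it, e.g. with an entry like

LIMIT_BY_MOR: True

Otherwise (or when the key is missing), the function falls back to the plain fixed-number sampling in the else branch.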

MartinHahner commented 1 year ago

Also, for me it has been impossible to obtain 40+ AP at the 0.7 IoU threshold in 80 epochs for moderate Car, even when I train only on clear data (without any simulation) and test on clear. I wanted to know: what is your batch size per GPU, and how many GPUs are you using for training?

We used the maximum batch size we could fit on four GeForce RTX 2080 Ti GPUs. For PV-RCNN, e.g., the batch size was set to eight.

Did you use a batch size of 8 per GPU, meaning a total batch size of 4 * 8 = 32?

No, a total batch size of eight (so a batch size of two per GPU).

github-actions[bot] commented 1 year ago

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] commented 1 year ago

This issue was closed because it has been inactive for 14 days since being marked as stale.