ykotseruba / PedestrianActionBenchmark

Code and models for the WACV 2021 paper "Benchmark for evaluating pedestrian action prediction"
https://openaccess.thecvf.com/content/WACV2021/papers/Kotseruba_Benchmark_for_Evaluating_Pedestrian_Action_Prediction_WACV_2021_paper.pdf
MIT License

About PIE dataset split #8

Closed d-zh closed 3 years ago

d-zh commented 3 years ago

Hi,

In your paper, I find this sentence:

"In the PIE dataset, we follow the data split defined in [42]: videos from set01, set02 and set06 are used for training, set04 and set05 for validation and set03 for testing. The number of pedestrian tracks in PIE is 880, 243 and 719 in train, validation and test sets."

After running the code, the PIE dataset split is different from the paper: videos from set01, set02 and set04 are used for training, set05 and set06 for validation, and set03 for testing. However, the number of pedestrian tracks is consistent. I found this code in pie_data.py:

    def _get_image_set_ids(self, image_set):
        """
        Returns default image set ids
        :param image_set: Image set split
        :return: Set ids of the image set
        """
        image_set_nums = {'train': ['set01', 'set02', 'set04'],
                          'val': ['set05', 'set06'],
                          'test': ['set03'],
                          'all': ['set01', 'set02', 'set03',
                                  'set04', 'set05', 'set06']}
        return image_set_nums[image_set]

Do I understand correctly? Could you tell me which PIE dataset split is correct?

Thanks a lot!

ykotseruba commented 3 years ago

It is a typo in the paper. We use pie_data.py to generate the samples for the benchmark, so the split in the code is the correct one: set06 is used for validation.
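For anyone reproducing the benchmark, the code's split (confirmed above as the correct one) can be summarized as below. This is a minimal sketch; the `PIE_SPLIT` name is illustrative and not part of the repository, which exposes the mapping via `_get_image_set_ids` in pie_data.py instead:

```python
# Split actually used by pie_data.py (authoritative, per the maintainer);
# the paper's description of set04/set05/set06 contains a typo.
PIE_SPLIT = {
    'train': ['set01', 'set02', 'set04'],
    'val':   ['set05', 'set06'],
    'test':  ['set03'],
}

# set06 belongs to validation, not training as stated in the paper,
# and set04 belongs to training, not validation.
assert 'set06' in PIE_SPLIT['val']
assert 'set04' in PIE_SPLIT['train']
```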

d-zh commented 3 years ago

Thank you very much!