vvvzzzOpenInterX opened 1 month ago
I also have this question. Besides, in demo.ipynb a clip is cut every four seconds, but in clip_train.pkl and clip_val.pkl, clips are cut at intervals of less than one second.
Similar question! If you read segment_train.pkl, there are 15,774 dicts in the list, but if you count the unique f'{vid}_{start_sec}_{end_sec}' strings, it comes out to only 8,911. Many of the elements are repeated in the files!
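For reference, this is roughly the check I ran (a sketch, assuming the pickle holds a list of dicts exposing vid, start_sec, and end_sec keys, which is my reading of the file's contents):

```python
import pickle

# Load the segment-level annotations (assumed: a list of dicts with
# 'vid', 'start_sec', and 'end_sec' keys).
with open("segment_train.pkl", "rb") as f:
    segments = pickle.load(f)

# Build one key per segment and compare total vs. unique counts.
keys = [f"{s['vid']}_{s['start_sec']}_{s['end_sec']}" for s in segments]
print(len(keys))       # 15774 entries in total
print(len(set(keys)))  # 8911 unique (vid, start_sec, end_sec) keys
```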
First, I sincerely thank you and your team for your incredible work and for making this valuable resource available to the research community!
I am currently working on research based on the Ego4D videos and am particularly interested in leveraging your dataset. While analyzing its annotations, I noticed a discrepancy. The paper mentions that 8,267 videos have been annotated, and after processing the video-level annotations (provided in videos_train.json and videos_val.json), I confirmed that the total number of annotations across the two files does indeed add up to 8,267. However, these annotations correspond to only 4,492 unique videos, with several videos annotated multiple times (i.e., the same "vid" identifier is repeated across annotations). Similarly, the number of unique videos in the segment descriptions is also inconsistent with the 8,267 reported in the paper.
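For reference, here is roughly how I counted the unique videos (a minimal sketch, assuming each JSON file holds a list of annotation dicts with a "vid" field, as the files appear to):

```python
import json

# Collect the "vid" identifier from every video-level annotation
# across the train and val splits.
vids = []
for path in ("videos_train.json", "videos_val.json"):
    with open(path) as f:
        vids += [anno["vid"] for anno in json.load(f)]

print(len(vids))       # 8267 annotations in total, matching the paper
print(len(set(vids)))  # only 4492 unique video identifiers
```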
I wanted to inquire whether the dataset indeed contains annotations for only 4,492 distinct videos and this duplication of annotations is expected, or whether there is additional data or context that I might have overlooked. Any clarification or guidance on this would be greatly appreciated!