Closed YiwuZhong closed 3 days ago
Hi Yiwu, thanks for your interest! For data, we aggregate all Ego4D-v2 downstream benchmark training/validation sets. You only need to set the dataset annotation path to `datasets/ego4d/v2/annotations`, and the program will handle the aggregation automatically.
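For reference, a minimal sketch of the folder layout this implies. The directory path is from the reply above; the annotation filename shown is purely illustrative, since the actual JSONs depend on which Ego4D benchmark annotations you download:

```shell
# Sketch of the expected dataset layout. Only the directory path
# (datasets/ego4d/v2/annotations) comes from the maintainer's reply;
# the example filename is a placeholder, not a real annotation file.
mkdir -p datasets/ego4d/v2/annotations

# After downloading the Ego4D-v2 benchmark annotations, they would
# sit under this directory, e.g.:
#   datasets/ego4d/v2/annotations/<benchmark_annotation>.json
ls -d datasets/ego4d/v2/annotations
```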
These are the benchmarks (Ego4D v2.1, with GoalStep) we used:
Implementation details from:
(BTW, before this work I learned a lot from your ProcedureVRL. Thank you!)
Hi Joya,
Thanks for your reply. I'll look into it. And nice to meet you virtually!
Best, Yiwu
Nice to meet you virtually! Feel free to ask me anything!
Thanks for sharing this nice work!
I was wondering whether the full Ego4D-v2 dataset was used for training, or just a subset. This isn't clearly stated in the paper or this repo. It would be great if the authors could provide some guidelines for data downloading and usage, such as the expected folder structure and where the provided streaming dialogue JSON data should be placed.
Best regards, Yiwu