hyf015 / egocentric-gaze-prediction

Code for the paper "Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition"

We have to delete 10 ground truth images from each video. #14

Closed kazucmpt closed 5 years ago

kazucmpt commented 5 years ago

To train the SP module, we have to delete the files below.

gtea_gt/Alireza_American_000000.jpg, …, Alireza_American_000010.jpg
gtea_gt/Alireza_Burger_000000.jpg, …, Alireza_Burger_000010.jpg
…
gtea_gt/Yin_Turkey_000000.jpg, …, Yin_Turkey_000010.jpg

I recommend documenting this important information in README.md.
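For reference, here is a minimal sketch (not part of the repo) of how one might delete the first N ground-truth frames of every video. It assumes a flat `gtea_gt/` directory and the `<Recorder>_<Food>_<frameindex>.jpg` naming shown above; adjust the path and N to your setup.

```python
# Hypothetical helper: delete the first N ground-truth frames per video.
# Assumes filenames follow the <Recorder>_<Food>_<frameindex>.jpg pattern.
import os
import re
from collections import defaultdict

GT_DIR = "gtea_gt"   # assumed location of the ground-truth images
N = 10               # 10 as suggested in this issue, 100 as hyf015 used

pattern = re.compile(r"^(?P<video>.+)_(?P<frame>\d{6})\.jpg$")

# Group frame files by their video prefix so each video is trimmed independently.
frames_per_video = defaultdict(list)
for fname in os.listdir(GT_DIR):
    m = pattern.match(fname)
    if m:
        frames_per_video[m.group("video")].append((int(m.group("frame")), fname))

for video, frames in frames_per_video.items():
    # Sort by frame index and remove the first N frames of this video.
    for _, fname in sorted(frames)[:N]:
        path = os.path.join(GT_DIR, fname)
        os.remove(path)
        print("deleted", path)
```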

hyf015 commented 5 years ago

Yes, that's true. Actually, I deleted the first 100 images in my experiment.

kazucmpt commented 5 years ago

Why the first 100 images? I think the first 10 images are enough.

hyf015 commented 5 years ago

Usually, the first few frames are useless; they are often just instructions given to the recorder.

mujn1461 commented 3 years ago

Hi, sorry to comment on this long-closed issue! Just to double-check: besides deleting the first 100 ground truth images, should we also delete the first 100 input images and the first 100 entries in every fixsac txt file? Thank you!
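If the corresponding fixsac annotations do need to be kept aligned with the trimmed frames, a hypothetical sketch like the one below could trim them. It assumes a plain one-entry-per-line text format, which is not confirmed in this thread, and the example path is made up.

```python
# Hypothetical sketch: drop the first N entries of a fixsac annotation file
# so it stays aligned with frames trimmed from the start of a video.
# Assumes one entry per line; this format is an assumption, not confirmed here.
N = 100

def trim_fixsac(path, n=N):
    with open(path) as f:
        lines = f.readlines()
    with open(path, "w") as f:
        f.writelines(lines[n:])

# Example (hypothetical filename):
# trim_fixsac("fixsac/Alireza_American.txt")
```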