[BMVC2022, IJCV2023, Best Student Paper, Spotlight] Official codes for the paper "In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation".
Assistance Requested with Discrepancies in EGTEA Dataset Testing Results #7
First of all, congratulations on your recent work on "Listen to Look into the Future: Audio-Visual Egocentric Gaze Anticipation".
I have been trying to replicate the results from your study ("In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation") using the provided repository and readme instructions on the EGTEA dataset. Unfortunately, I encountered discrepancies in the metric results compared to those reported in your paper. After fixing a couple of initial errors in the setup, the script executed successfully, but the results were still inconsistent with those expected. I would appreciate it if you could guide me on where I might have made a mistake or error during testing.
When I ran the script I encountered two errors.

The first error was a `ModuleNotFoundError` for `torch._six`:
Error: `ModuleNotFoundError: No module named 'torch._six'`
Fix: Commented out the `torch._six` import and set `_int_classes = int`.
The second error was a `FileNotFoundError` for the gaze data files:
Error: `FileNotFoundError: [Errno 2] No such file or directory: '/path/to/gaze_data/P01-R01-PastaSalad-GazeData.txt'`
Fix: Modified line 115 in `egtea_gaze.py` to `label_name = video_name + '.txt' #if video_name[0] == 'O' else video_name+'-GazeData.txt'`, since no file in the EGTEA dataset's `gaze_data` directory has `-GazeData` at the end of its name.
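For clarity, a sketch of what that change does (the original conditional is reconstructed from the commented-out fragment above; the video name is just the one from the error message):

```python
# Hypothetical sketch of the label-file lookup around line 115 of egtea_gaze.py.
# Original (reconstructed from the commented-out code):
#   label_name = video_name + '.txt' if video_name[0] == 'O' else video_name + '-GazeData.txt'
# Modified: the gaze_data directory only contains plain '.txt' files,
# so every video name gets the same suffix.
video_name = 'P01-R01-PastaSalad'  # example from the FileNotFoundError
label_name = video_name + '.txt'
print(label_name)
```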
After fixing these two errors the script worked but the results were different than the ones in the paper.
I have attached the logging file with the configurations and the results I obtained. Could you please guide me on where I might have gone wrong or what additional steps I should take to align my results with those reported in your paper?
The execution command I ran:

```shell
CUDA_VISIBLE_DEVICES=0 python tools/run_net.py \
  --cfg /scratch/users/theed/GLC2/GLC/configs/Egtea/MVIT_B_16x4_CONV.yaml \
  TRAIN.ENABLE False \
  TEST.BATCH_SIZE 32 \
  NUM_GPUS 1 \
  OUTPUT_DIR checkpoints/GLC \
  TEST.CHECKPOINT_FILE_PATH /scratch/users/theed/GLC2/GLC/MViT_Egtea_ckpt.pyth \
  DATA.PATH_PREFIX /scratch/users/theed/GLC2/GLC/egtea
```
The EGTEA directory structure was:
Thank you in advance for your time and assistance.
stdout.log