Open dongxinfeng1 opened 2 weeks ago
Dear authors, I submitted the output .json results file to the EvalAI leaderboard and ran into a problem. Do you know the reason for this error? The traceback is:
```
Traceback (most recent call last):
  File "/code/scripts/workers/submission_worker.py", line 538, in run_submission
    submission_metadata=submission_serializer.data,
  File "/tmp/tmp3lqcmc5z/compute/challenge_data/challenge_97/main.py", line 152, in evaluate
    print(ev.score(user_annotation_file))
  File "/tmp/tmp3lqcmc5z/compute/challenge_data/challenge_97/main.py", line 109, in score
    self._score_item(item['instr_id'], item['trajectory'])
  File "/tmp/tmp3lqcmc5z/compute/challenge_data/challenge_97/main.py", line 89, in _score_item
    self.graphs[gt['scan']][prev[0]][curr[0]]
  File "/usr/local/lib/python3.7/site-packages/networkx/classes/coreviews.py", line 51, in __getitem__
    return self._atlas[key]
KeyError: '9fd81fb06dd843efa3f73fb929db4841'
```
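The KeyError suggests that some consecutive pair of viewpoints in a submitted trajectory is not an edge in the scan's navigation graph, so the lookup `self.graphs[scan][prev][curr]` fails. A minimal pre-check one could run locally before submitting, assuming the scan graphs are `networkx` graphs keyed by viewpoint id (the function name and toy ids below are hypothetical):

```python
import networkx as nx

def find_invalid_steps(graph, trajectory):
    """Return the (prev, curr) viewpoint pairs that are not edges in the nav graph."""
    bad = []
    for prev, curr in zip(trajectory, trajectory[1:]):
        if prev != curr and not graph.has_edge(prev, curr):
            bad.append((prev, curr))
    return bad

# Toy graph with hypothetical viewpoint ids:
g = nx.Graph()
g.add_edges_from([("a", "b"), ("b", "c")])
print(find_invalid_steps(g, ["a", "b", "c"]))  # [] -- every step is an edge
print(find_invalid_steps(g, ["a", "c"]))       # [('a', 'c')] -- skipped a hop
```

Any pair this reports would trigger exactly the KeyError above on the evaluation server.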
I used the checkpoint you provided and found that in the test dataset the trajectory length is 1, but the function `_get_gt_trajs` requires `len(x['path']) > 1`, so those entries are dropped. Can it still be used for the test split? Can you give me some advice?
```python
def _get_gt_trajs(self, data):
    gt_trajs = {
        x['instr_id']: (x['scan'], x['path'])
        for x in data if len(x['path']) > 1
    }
    return gt_trajs
```
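To illustrate the behavior in question, here is a standalone version of the filter run on hypothetical sample data; an entry whose path has length 1 is silently dropped, which is why the test-split items disappear:

```python
def get_gt_trajs(data):
    # Standalone version of the method above: keep only multi-step paths.
    return {x['instr_id']: (x['scan'], x['path'])
            for x in data if len(x['path']) > 1}

# Hypothetical sample entries mimicking the dataset format:
sample = [
    {'instr_id': '1_0', 'scan': 'scanA', 'path': ['vp1', 'vp2']},  # kept
    {'instr_id': '2_0', 'scan': 'scanA', 'path': ['vp1']},         # dropped
]
print(sorted(get_gt_trajs(sample)))  # ['1_0']
```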