SarahwXU / HiSup


Evaluation on CrowdAI test split. #8

Closed: yeshwanth95 closed this issue 1 year ago

yeshwanth95 commented 1 year ago

Hello @SarahwXU ,

Thank you for releasing your great work. I'm currently comparing some of my methods with yours, and I'm curious how you obtained the evaluation on the CrowdAI test split. Could you kindly point me to any evaluation server that still accepts submissions for the CrowdAI test split?

SarahwXU commented 1 year ago

If only I could. As far as I know, the CrowdAI Mapping Challenge was closed when the platform was renamed to AIcrowd, and I cannot find the submission entry either. We used the validation set (val.tar.gz) as the test set in our paper. During model training, you could split the original training set (train.tar.gz) into your own training and validation subsets.
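For anyone looking for a concrete starting point, below is a minimal sketch of such a split. It is not part of HiSup; it assumes the CrowdAI archive extracts to a COCO-style `train/annotation.json`, and the paths and the 90/10 ratio are placeholders.

```python
# Minimal sketch: split a COCO-style annotation file into training and
# validation subsets by image. Paths and split ratio are assumptions.
import json
import random

random.seed(0)

with open("train/annotation.json") as f:
    coco = json.load(f)

# Shuffle images and reserve roughly 10% for validation.
images = list(coco["images"])
random.shuffle(images)
n_val = int(0.1 * len(images))
val_images, train_images = images[:n_val], images[n_val:]

def subset(imgs):
    """Build a COCO-style dict containing only the given images and their annotations."""
    ids = {img["id"] for img in imgs}
    return {
        "info": coco.get("info", {}),
        "categories": coco["categories"],
        "images": imgs,
        "annotations": [a for a in coco["annotations"] if a["image_id"] in ids],
    }

with open("annotation_train_split.json", "w") as f:
    json.dump(subset(train_images), f)
with open("annotation_val_split.json", "w") as f:
    json.dump(subset(val_images), f)
```

The resulting validation split can then be used for model selection during training, while val.tar.gz is held out entirely and used as the test set, as described above.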

yeshwanth95 commented 1 year ago

Hi. Thanks for the suggestion. You're right, AIcrowd currently offers no way to evaluate on the test split. I was confused because the PolyWorld paper reports additional metrics that it claims are on the test split. Since you also compared against their method, I was wondering whether there was some way to obtain the test-split evaluations.

[Screenshot: table of metrics from the PolyWorld paper, reported as the CrowdAI test split]

But this could also simply be a typo in their paper. I'll close the issue and confirm this with the PolyWorld authors.

Thank you.