AIcrowd / flatland-challenge-starter-kit

⚠️ NOTICE: This starter kit was used for the 2019 challenge and has been deprecated in favour of the 2020 Flatland challenge's starter kit, available here:
https://gitlab.aicrowd.com/flatland/neurips2020-flatland-starter-kit

Submissions / leaderboard show zero fraction of done agents / mean reward for random agent #3

Open · lewtun opened this issue 5 years ago

lewtun commented 5 years ago

I made a submission with the random agent provided in the starter kit and noticed that the Fraction of done-agents and Mean Reward on the submission page and leaderboard are both shown as 0.0:

[Screenshot: submission page showing Fraction of done-agents and Mean Reward both as 0.0]

On the other hand, the evaluation logs show non-zero values for both quantities, as I would expect:

[Screenshot: evaluation logs showing non-zero values for both metrics]

Is this behaviour by design?

MLerik commented 5 years ago

Hi @lewtun

Yes, because you submitted with debugging turned on. Debug submissions are not ranked and therefore receive 0 for their scores. If you want to make a full submission, you have to change the value "debug": true in aicrowd.json to "debug": false.
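For reference, a minimal sketch of what the relevant part of aicrowd.json might look like; the "debug" field is the one discussed above, while the other field names here are illustrative placeholders, not taken from the actual starter kit:

```json
{
  "challenge_id": "aicrowd-flatland-challenge-2019",
  "authors": ["your-aicrowd-username"],
  "debug": false
}
```

With "debug" set to false, the submission is evaluated as a full run and ranked on the leaderboard.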

lewtun commented 5 years ago

Hi @MLerik, thanks for the clarification. If I want to make a small tweak to the README via a PR, is it possible to contribute to this repo?

MLerik commented 5 years ago

Hi @lewtun

If you send me the changes to the README, I'm happy to update it. This repo is owned by AIcrowd, so I cannot manage who has access. Reach out to @spMohanty if you prefer to submit directly instead of through me.

spMohanty commented 5 years ago

@lewtun : We would be very happy if you send across a PR with your changes. @MLerik : You are listed as an Admin of this repository, so you can definitely add others to the repo. But I think it's best to stick to the PR-based approach for all external contributors.