Open ghostFaceKillah opened 5 years ago
Sorry for such a late reply. This is a very good catch, and I have received similar reports before.
Unfortunately, I do not know of an automatic way to fix it, and I currently don't have enough bandwidth to explore it in depth =(
@ghostFaceKillah How did you solve this problem? Is the subtle color difference really important for behavior cloning? What if I am using the AGC data for imitation learning (using GAIL)? Will the color difference cause trouble?
I solved the problem by implementing my own data gatherer: https://github.com/ghostFaceKillah/play-monte. Feel free to use the data I have gathered, linked in this project: https://github.com/ghostfacekillah/expert.
I have some evidence that it could cause problems. 1) In the companion paper to this repo, https://arxiv.org/abs/1705.10998, the authors report weaker behavioural cloning results on Montezuma's Revenge than other papers, e.g. https://arxiv.org/abs/1704.03732; the colour mismatch might be the reason. 2) In my own experiments for this work, https://arxiv.org/pdf/1809.03447.pdf, this detail caused performance issues. The extent to which it is a problem depends on the details of preprocessing.
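To illustrate why preprocessing matters, here is a minimal sketch (hypothetical pixel values, numpy only; not taken from the actual datasets) showing that a small palette mismatch between two captures of the same scene shrinks, but does not vanish, under a standard grayscale preprocessing step:

```python
import numpy as np

# Hypothetical 1x3 "frames": the same scene rendered with slightly
# different palettes, mimicking the AGC-vs-Gym colour mismatch.
agc_frame = np.array([[[200, 72, 72], [84, 92, 214], [0, 0, 0]]], dtype=np.uint8)
gym_frame = np.array([[[210, 64, 64], [80, 89, 208], [0, 0, 0]]], dtype=np.uint8)

# Raw RGB difference: nonzero wherever the palettes disagree.
rgb_diff = np.abs(agc_frame.astype(int) - gym_frame.astype(int)).max()

# Luminance grayscale (ITU-R BT.601 weights), as used by common
# Atari preprocessing pipelines.
def to_gray(frame):
    return frame.astype(float) @ np.array([0.299, 0.587, 0.114])

gray_diff = np.abs(to_gray(agc_frame) - to_gray(gym_frame)).max()

print(rgb_diff, round(gray_diff, 2))
```

So a pipeline that trains on grayscale frames is less exposed to the mismatch than one that feeds raw RGB to the policy, but the discrepancy does not disappear entirely.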
First of all, thank you for releasing this data and the accompanying paper. I think it is a very useful thing for the community. I release all my data as well (e.g. see https://github.com/ghostFaceKillah/expert).
I have noticed a colour difference between images in AGC and those coming from Atari OpenAI Gym; please see this writeup: https://github.com/ghostFaceKillah/agc-imgs.
Perhaps this partially explains the weak performance of behavioural cloning in the paper accompanying this release?