beyretb / AnimalAI-Olympics

Code repository for the Animal AI Olympics competition
Apache License 2.0

Acquisition of 3rd person viewpoint image #53

Closed emuemuJP closed 5 years ago

emuemuJP commented 5 years ago

I would like to capture the first-person and third-person viewpoint images simultaneously. If I set play=True, it is hard to collect many (2k+) images. If possible, could you release the Unity environment code, excluding configurations 8-10? Any other ideas?
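For the image-collection half of the request, stepping the environment from Python avoids play=True entirely. A minimal sketch follows; the `animalai` API names used here (`UnityEnvironment`, `reset`/`step` returning `BrainInfo` keyed by `"Learner"`, `visual_observations` as float arrays in [0, 1]) are assumptions based on the 2019 `animalai` package, and `env/AnimalAI` is a hypothetical binary path. The `frame_to_uint8` helper is plain NumPy and independent of those assumptions.

```python
# Sketch: collect many first-person frames programmatically instead of
# using play=True. ASSUMPTIONS: the 2019 `animalai` Python API
# (UnityEnvironment, step() -> {"Learner": BrainInfo}); observations are
# float arrays in [0, 1] shaped (n_agents, H, W, 3).
import numpy as np

def frame_to_uint8(obs):
    """Convert one visual observation (H, W, 3), floats in [0, 1], to uint8."""
    return (np.clip(obs, 0.0, 1.0) * 255).astype(np.uint8)

def collect_frames(env_path="env/AnimalAI", n_frames=2000):
    """Yield uint8 frames from the environment.

    Not executed here: requires the environment binary and the
    `animalai` package (hence the import inside the function).
    """
    from animalai.envs import UnityEnvironment  # assumption: 2019 animalai API
    env = UnityEnvironment(file_name=env_path, play=False)
    env.reset(train_mode=True)
    for _ in range(n_frames):
        info = env.step(vector_action=[0, 0])["Learner"]
        # camera 0, agent 0; save with e.g. imageio.imwrite(...)
        yield frame_to_uint8(info.visual_observations[0][0])
    env.close()
```

The yielded uint8 frames can then be written to disk with any image library, which is far faster than capturing screenshots in play mode.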

MLanghof commented 5 years ago

For the record: The Unity environment code is not the same as the configurations. They could theoretically release the entire Unity code and still keep the configurations (the .yaml files) secret.

mdcrosby commented 5 years ago

Hello,

I can confirm the environment and the configurations are two separate things. Just to make sure there is no further confusion: the example configurations provided are NOT the same as those in the hidden tests, even for categories 1-7. They are just suggestions that might be a good starting point if you want to train an agent. We will soon consolidate the information about this into a single place, as it's a bit spread out at the moment.
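To make the environment/configuration distinction concrete, an arena configuration is just a small YAML file passed to the environment at reset time. The fragment below is an illustrative example in the style of the competition's example configs; the exact tags (`!ArenaConfig`, `!Arena`, `!Item`, `!Vector3`) and fields are taken from the 2019 AnimalAI format and should be checked against the released examples.

```yaml
# Illustrative arena config (2019 AnimalAI format, assumed):
# one arena, a 250-step time limit, and a single reward object.
!ArenaConfig
arenas:
  0: !Arena
    t: 250
    items:
    - !Item
      name: GoodGoal
      positions:
      - !Vector3 {x: 10, y: 0, z: 10}
      sizes:
      - !Vector3 {x: 1, y: 1, z: 1}
```

Releasing the Unity build does not reveal these files: the build defines what objects and physics exist, while the hidden test .yaml files only describe which objects are placed where.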

Regarding releasing the Unity environment, we discussed this at length within the team and are considering our options. It requires some work, and we have to weigh up the benefits. If we do decide to release it, we cannot offer any support for anyone wanting to extend or modify the environment, and, of course, everyone will still ultimately have to work within the constraints of the tests, which use first-person inputs.

You are free, of course, to try any method you think is interesting for the competition itself; we just do not have the resources to help with issues arising from modifications to the environment.

We will let you know more about this by the end of next week.

beyretb commented 5 years ago

Hello,

We have now released the source code for the environment here. You can build your own training environment from it and add observations (for example, extra cameras) to your agent. It does take a bit of learning to do so, but you can find some great documentation in the ML-Agents repo.
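Once a second (third-person) camera is added in the Unity project and registered on the Learner brain, the Python side should receive one visual observation per camera each step, in the order the cameras are registered. A small helper like the sketch below can then split the two viewpoints; the camera ordering (0 = first person, 1 = third person) and the ML-Agents 0.x observation layout assumed here are not confirmed by this thread.

```python
# Sketch: separate first- and third-person frames from a two-camera brain.
# ASSUMPTION: `visual_observations` is a list with one array per camera,
# each shaped (n_agents, height, width, 3), as in the ML-Agents 0.x
# Python API, and cameras arrive in registration order.
import numpy as np

def split_viewpoints(visual_observations, agent_index=0):
    """Return (first_person, third_person) frames for one agent."""
    if len(visual_observations) < 2:
        raise ValueError(
            "expected two cameras; was the extra camera registered on the brain?"
        )
    first = visual_observations[0][agent_index]
    third = visual_observations[1][agent_index]
    return first, third
```

Pairing the two arrays per step, rather than capturing them separately, keeps the first- and third-person images exactly synchronized.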

Please note that we do not support the environment repo at the moment.

emuemuJP commented 5 years ago

Great! Thank you very much!