Closed slerman12 closed 5 years ago
The interface is pretty close to gym, you can adapt it with something like https://gist.github.com/christopherhesse/dc5a7ed99704870592d7c3264e0dbd6c
Thank you! What about installation? Gym usually just involves a single import statement and one or two pip3 calls.
Would I just follow the specified installation procedure and use import coinrun
at the top of my code? And then env = scalar_adapter(coinrun)
using the adapter code you linked to retrieve the environment?
What about selecting training vs. testing? Sorry if this is asking too much!
Hmm, not sure about installation besides the normal coinrun install instructions.
As for your second question, that wrapper actually operates on the class, not the instance; try this one instead: https://gist.github.com/christopherhesse/5e499b602ca9f2055d13bad554524237
Training vs testing is up to you, but be warned it might not be threadsafe: https://github.com/openai/coinrun/issues/19
Can I also ask what the code would look like? For example:
import coinrun
env = Scalarize(coinrun)
As for dividing up onto training or testing, is there something along the lines of:
training_env = Scalarize_For_Training(coinrun)
testing_env = Scalarize_For_Testing(coinrun)
More like https://github.com/openai/coinrun/blob/master/coinrun/random_agent.py#L7 except put Scalarize() around the call to make(). That code isn't well tested though, sorry.
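To make the pattern concrete, here is a minimal, untested sketch of what a Scalarize-style wrapper does: it takes a vectorized env with num_envs == 1 and strips the batch dimension so it behaves like a classic single-env Gym env. FakeVecEnv is a hypothetical stand-in used only for illustration; in practice you would wrap the object returned by coinrun's make('standard', num_envs=1), as in random_agent.py.

```python
import numpy as np

class FakeVecEnv:
    """Hypothetical stand-in for a vectorized coinrun env with
    num_envs == 1 (the real env comes from coinrun.make)."""
    num_envs = 1

    def reset(self):
        # Batched observation: (num_envs, height, width, channels)
        return np.zeros((1, 64, 64, 3), dtype=np.uint8)

    def step(self, actions):
        obs = np.zeros((1, 64, 64, 3), dtype=np.uint8)
        rews = np.array([1.0], dtype=np.float32)
        dones = np.array([False])
        infos = [{}]
        return obs, rews, dones, infos

class Scalarize:
    """Wrap a num_envs == 1 vectorized env and strip the batch
    dimension so it exposes a classic Gym-style step/reset API."""
    def __init__(self, venv):
        assert venv.num_envs == 1
        self._venv = venv

    def reset(self):
        # Drop the leading batch dimension from the observation
        return self._venv.reset()[0]

    def step(self, action):
        # Batch the scalar action, then unbatch every result
        obs, rews, dones, infos = self._venv.step(np.array([action]))
        return obs[0], float(rews[0]), bool(dones[0]), infos[0]

env = Scalarize(FakeVecEnv())
ob = env.reset()
ob, rew, done, info = env.step(0)  # scalar action in, unbatched results out
```

This mirrors the shape of the gist linked above, but the real wrapper also handles render modes and observation spaces; treat this only as a mental model of the unbatching.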
Ah, okay, thank you. And the argument to make() determines which environment (training vs. testing) is retrieved? That's the last thing I'd need in order to evaluate my agent on this set of environments.
It's up to you to create the train/test envs; you have to set Config.NUM_LEVELS (https://github.com/openai/coinrun/blob/601de520abec526c101eb87cb445c612a9087407/coinrun/coinrunenv.py#L109). See https://github.com/openai/coinrun/blob/601de520abec526c101eb87cb445c612a9087407/coinrun/train_agent.py for a full example, and reference the example commands from the docs: https://github.com/openai/coinrun#try-it-out
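As a rough sketch of that split: restricting NUM_LEVELS during training fixes the set of levels the agent sees, and an unrestricted value samples from the full distribution at test time. The flag name below is assumed from coinrun's Config; check the docs link above for the exact invocation in your version.

```shell
# Hypothetical invocation: train on a fixed set of 500 levels
# (--num-levels is assumed to map to Config.NUM_LEVELS)
python -m coinrun.train_agent --run-id my_run --num-levels 500
```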
In Scalarize we have num_envs == 1. Can't we support more than one level? Also, how do I check the done condition? There could be cases where the agent did nothing and next_state == previous_state.
Well if you want multiple envs, just don't use scalarize. Not sure about your second question.
Then it will be a vector and I can't apply DQN easily, right? (BTW, I'm using PyTorch, not that it matters.) Anyway, how can I change the one level that gets generated in Scalarize? It's always the same layout, and it's too hard for the agent. Also, if there is a way to change the layout of that one level, then I could even train on multiple levels one by one.
Well I mean do you want a single environment or multiple ones? If you want a single one, you should do num_envs=1, otherwise you can use a larger number.
It sounds like you want multiple random levels which should be the default. Can you post a short script that shows the coinrun environment not changing the level after reset?
Also file a new issue for that please.
Ohh, I understand. Sorry about not creating a new issue :(
I have an agent that I'd like to test on this environment by running train and test sessions to determine its generalizability. I was hoping there would be a Gym-like interface that would let me do this, but I am confused by the interface that is available. Could a simple example be provided for how I could achieve this with my agent? I'm very familiar with running standard Gym environments, so that is my main frame of reference for understanding things like this. I apologize if the documentation is clear about how to do this and it just went over my head. Thanks so much for releasing this great resource!