Closed — zichunxx closed this issue 6 months ago
Update:
I found that the method vec_env.render() can get the image observation after step(), and it doesn't seem to require the relevant wrappers. Is that right?
Hello,
you are mixing gym.Env (the wrapper you used) and VecEnv (please have a look at our docs for the main differences).
I'm also not sure how you plan to predict on images with a model that was trained on non-image input.
Hi! @araffin Thanks for your reply.
Because the CNN policy seems hard to converge, I just want to collect images predicted by the trained model on non-image input to carry out some imitation-learning tasks.
vec_env.render() seems to solve my problem without any wrapper, right?
And thanks for your kind reminder. I will check the docs for the differences between these two kinds of environments.
> vec_env.render() seems to solve my problem without any wrapper, right?
It should, as long as you use only one env. You might need a bit more work if you use multiple envs (you need to check whether the images are concatenated or not).
> You might need a bit more if you use multiple envs (you need to check if the images are concatenated or not).
I'll take note of what you said, thanks a lot.
❓ Question
Hi!
I have a custom model trained on non-image observations and want to collect some image observations with the trained model.
Below is the code of my brief implementation:
But I got a TypeError like this:
I've checked the documentation and issue list, but haven't found any examples of pixel wrappers for trained environments. Or maybe I missed something.
Many thanks for considering my request.