Closed: CarlosGual closed this issue 1 year ago
This doesn't work because the NumPy arrays don't support deepcopy, but I think a custom deepcopy implementation is possible. I would be open to accepting a pull request if someone is up for implementing it.
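The custom `__deepcopy__` approach mentioned above can be sketched in isolation. The class names below (`MazeEnv`, `FakeContext`) are illustrative stand-ins, not MiniWorld's actual classes: the idea is to rebuild any resource that refuses to be copied, and deep-copy only the plain-Python state.

```python
import copy


class FakeContext:
    """Stand-in for a resource (e.g. a rendering handle) that cannot be deep-copied."""

    def __deepcopy__(self, memo):
        raise TypeError("this resource cannot be deep-copied")


class MazeEnv:
    """Toy stand-in for a MiniWorld-style environment (names are hypothetical)."""

    def __init__(self, size=5):
        self.size = size
        self.agent_pos = [0, 0]
        self.ctx = FakeContext()  # un-copyable resource

    def __deepcopy__(self, memo):
        # Recreate the environment (and its un-copyable resource) from
        # scratch, then deep-copy only the ordinary Python state.
        new = MazeEnv(self.size)
        new.agent_pos = copy.deepcopy(self.agent_pos, memo)
        memo[id(self)] = new
        return new


env = MazeEnv(size=7)
env.agent_pos = [3, 4]
clone = copy.deepcopy(env)  # no longer raises
```

Defining `__deepcopy__` overrides Python's default copy protocol for that class, so `copy.deepcopy` never touches the problematic attribute.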
Hi, thank you for your comments. As you mentioned, I wrote a custom deepcopy implementation for the maze environment, and I can now deepcopy it. I still have to verify that it works properly and that the results I obtain from the env are consistent. If so, I will open the pull request myself.
EzPickle should be supported instead
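For context, `gymnasium.utils.EzPickle` works by remembering the constructor arguments and rebuilding the environment on unpickle, which also makes `copy.deepcopy` work via the pickle protocol. Below is a minimal re-implementation of that idea, written from scratch so it does not depend on any particular gym version; `EzPickleLike` and `MazeEnv` are illustrative names:

```python
import copy
import pickle


class EzPickleLike:
    """Minimal sketch of the EzPickle pattern: store the constructor
    arguments and rebuild the object when it is unpickled or deep-copied."""

    def __init__(self, *args, **kwargs):
        self._ezpickle_args = args
        self._ezpickle_kwargs = kwargs

    def __getstate__(self):
        # Only the recipe for reconstruction is serialized.
        return {"args": self._ezpickle_args, "kwargs": self._ezpickle_kwargs}

    def __setstate__(self, state):
        # Build a fresh instance and adopt its attributes.
        rebuilt = type(self)(*state["args"], **state["kwargs"])
        self.__dict__.update(rebuilt.__dict__)


class MazeEnv(EzPickleLike):
    """Hypothetical environment using the pattern."""

    def __init__(self, size=5):
        EzPickleLike.__init__(self, size=size)
        self.size = size


env = MazeEnv(size=9)
clone = pickle.loads(pickle.dumps(env))   # rebuilt, not byte-copied
clone2 = copy.deepcopy(env)               # deepcopy goes through the same protocol
```

The trade-off is that any runtime state not derivable from the constructor arguments (e.g. the current agent position mid-episode) is discarded on copy, which is usually acceptable for a task sampler that wants fresh instances anyway.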
First of all, thank you very much for your work and contribution to the reinforcement learning community! :cowboy_hat_face:
Problem
I am trying to use the Maze environments with the garage library. However, I have been struggling a lot, until I found what I think is the main issue. In garage, some algorithms implement a task sampler to create different instances of the environments. To achieve this, they deepcopy the environments. This is where I found that MiniWorld environments can't be deepcopied. I would just like to know how to modify them so that they can be.
Code Snippet
Here is a code snippet to reproduce the issue:
Traceback
If you run the previous code, you will see an error traceback like this one:
Solution
For the moment I haven't been able to find a solution to this problem. This is the first time I have used deepcopy, so I am not an expert, and I don't know enough about MiniWorld's internals to figure out where to start. I would just like to ask for some hints on where to begin modifying the environments. Maybe some kind of wrapper would make it work.
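One wrapper-style workaround, independent of any MiniWorld changes, is to hand the task sampler a picklable *factory* that builds fresh environments on demand, instead of deep-copying a live one. A sketch under that assumption (`EnvFactory` and `ToyEnv` are hypothetical names, not garage or MiniWorld APIs):

```python
import copy


class EnvFactory:
    """Picklable recipe for constructing fresh environment instances.
    Copying the recipe is cheap; no live environment is ever deep-copied."""

    def __init__(self, env_cls, **kwargs):
        self.env_cls = env_cls
        self.kwargs = kwargs

    def __call__(self):
        # Each call builds a brand-new environment.
        return self.env_cls(**self.kwargs)


class ToyEnv:
    """Stand-in environment for illustration."""

    def __init__(self, size=5):
        self.size = size


factory = EnvFactory(ToyEnv, size=8)
envs = [factory() for _ in range(3)]          # three independent instances
factory_copy = copy.deepcopy(factory)         # copying the recipe works fine
```

Since the factory holds only a class reference and plain keyword arguments, it deep-copies and pickles without ever touching the environment's un-copyable internals.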
Thanks in advance! :smile_cat: