Closed janbolle closed 1 year ago
Since this merge https://github.com/rlworkgroup/metaworld/pull/143, the Meta-World example is broken.
Hello @janbolle ! Thank you for making an issue for this! I have been aware of this problem since it popped up in the summer, but at that time Meta-World was going through considerable changes to its API, so I wanted to wait a bit for the environment to mature so we wouldn't need to keep redoing work. Meanwhile, @seba-1511 is developing an RL wrapper for learn2learn to easily integrate RL environments. Because of the above, I have left this work on the side until I find more time for it. That being said, I will try to find some time this week to see if I can find a quick workaround to get something running, and will get back to you.
Of course, there is always the hotfix of installing the Meta-World version from before their API rework if you just want something to start playing with, but I would really not recommend it for serious work, since they have fixed a lot of issues since then.
@Kostis-S-Z, thank you very much for your detailed reply! A quick workaround would be really nice! I tried to understand what's going on, but I'm not familiar enough with the internals to get it running in a short period of time.
Yes, I am doing your quick-fix, but as stated by you, it's not the best option :-)
I am very thankful for any help on this issue.
@Kostis-S-Z, is there any news? Is there something I can do quickly? 🙈
@janbolle I am really sorry I haven't been active on this, too many things on my plate these months :/ It seems their API has changed a bit, but the basic gym functionality of `.reset()`, `.sample()`, and `.step()` still works the same way. From a quick look, I think what I would try is to completely remove `MultiClassMultiTaskEnv` and instead import the different task environments one by one in `metaworld.py`. Then you would change `set_task()` and `get_task()` to assign / get a variable, e.g. `self.current_task = ALL_V2_ENVIRONMENTS_GOAL_OBSERVABLE["door-open-v2-goal-observable"]()`, and use it as previously.
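To make the idea above concrete, here is a minimal sketch of that workaround. Note this is only an illustration of the pattern: `ALL_V2_ENVIRONMENTS_GOAL_OBSERVABLE` is mimicked here with a plain dict of stand-in constructors so the example is self-contained; in real code you would import the actual registry from metaworld, and `MetaWorldWrapper` / `DummyEnv` are hypothetical names:

```python
class DummyEnv:
    """Stand-in for a single Meta-World task environment."""
    def __init__(self, name):
        self.name = name

    def reset(self):
        return [0.0] * 4  # fake observation

    def step(self, action):
        return [0.0] * 4, 0.0, False, {}  # obs, reward, done, info


# Stand-in for metaworld's ALL_V2_ENVIRONMENTS_GOAL_OBSERVABLE registry,
# which maps task names to environment constructors.
ALL_V2_ENVIRONMENTS_GOAL_OBSERVABLE = {
    "door-open-v2-goal-observable": lambda: DummyEnv("door-open-v2"),
    "pick-place-v2-goal-observable": lambda: DummyEnv("pick-place-v2"),
}


class MetaWorldWrapper:
    """Minimal replacement for MultiClassMultiTaskEnv: instead of one
    multi-task env, hold the currently active task environment."""

    def __init__(self):
        self.current_task = None

    def set_task(self, task_name):
        # Instantiate the requested task environment from the registry.
        self.current_task = ALL_V2_ENVIRONMENTS_GOAL_OBSERVABLE[task_name]()

    def get_task(self):
        return self.current_task

    # Delegate the basic gym calls to the active task environment.
    def reset(self):
        return self.current_task.reset()

    def step(self, action):
        return self.current_task.step(action)


wrapper = MetaWorldWrapper()
wrapper.set_task("door-open-v2-goal-observable")
obs = wrapper.reset()
```

The point of the pattern is that switching tasks is just swapping which single-task environment `self.current_task` points at, while `.reset()` and `.step()` keep their usual gym signatures.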
This is just an idea, and maybe there is a different / better way of going about it, but unfortunately I do not have any plans to work on this right now (even though I would love to actually!). Feel free to try and integrate it and share your troubles here (I am feeling hopeful about it since their API rework seems much more structured right now) :)
Closing: inactive.
Dear @Kostis-S-Z, unfortunately, the Meta-World example no longer works. In the latest version of Meta-World, the file MultiClassMultiTaskEnv is missing. Also, the imports of ML1, ML10, and ML45 have changed. If I import from the correct files and use your dummy_MujocoEnv, there is also a problem with the input parameters of the init.
Do you have an idea on how to solve this issue? Any help would be very nice :-)
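For reference, the import change described above can be sketched roughly like this. The exact post-rework paths are an assumption based on this thread, not verified against a specific metaworld release, so check the version you have installed:

```python
# Hedged sketch of how the benchmark imports appear to have moved after the
# Meta-World API rework.
try:
    # Post-rework style: benchmarks exposed at the package top level.
    from metaworld import ML1, ML10, ML45
except ImportError:
    # Pre-rework style was roughly:
    #   from metaworld.benchmarks import ML1, ML10, ML45
    ML1 = ML10 = ML45 = None  # metaworld is not installed in this sketch
```

Either way the names end up defined, so code depending on them fails loudly at use time rather than at import time.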