-
Thank you very much for your work. Below is a bug I encountered while reproducing it:
![error screenshot](https://github.com/NVlabs/RVT/assets/49465594/f0cb13b5-ffcd-4637-9a20-b5ba527e5df7)
I…
-
Hi Pierre-Louis,
The idea of this paper is great, and I am trying to reproduce the results in your paper.
However, I have encountered some problems reproducing the single-task learning in Table 2 o…
-
Dear Developers,
Thanks for your contributions.
In rlbench_gym.py, when I tried to generate two gym environments as follows:
```
env1 = gym.make('reach_target-state-v0', render_mode='human')
e…
```
-
1. Is there a way I can create multiple environments?
Currently, if I try to make more than one environment, I get the following error:
```
env = gym.make('reach_target-state-v0')
env2 = gym.make('reach…
```
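(A likely cause: each RLBench gym environment launches its own CoppeliaSim instance through PyRep, which supports only one simulation per process, so a common workaround is to create each environment in its own subprocess. A minimal sketch, assuming the standard `rlbench.gym` registration; the task name and rollout length are just placeholders:)

```python
# Minimal sketch: run each RLBench gym env in its own process,
# since PyRep allows only one CoppeliaSim instance per process.
from multiprocessing import Process

def run_env(task_name):
    # Import inside the worker so each process gets its own simulator.
    import gym
    import rlbench.gym  # registers the RLBench tasks with gym

    env = gym.make(task_name)
    obs = env.reset()
    for _ in range(10):  # placeholder rollout length
        obs, reward, done, info = env.step(env.action_space.sample())
    env.close()

if __name__ == '__main__':
    procs = [Process(target=run_env, args=('reach_target-state-v0',))
             for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```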
-
Your work is appealing; thanks a lot for the effort. I am currently trying to train PerAct on a real-world task. I found your explanation of data collection sufficient. However, how to trai…
-
Hello Mohit,
Amazing work! And thank you so much for organizing this notebook with elaborate descriptions. It was very helpful.
I have a question regarding episode length in the `extract_obs` fu…
-
Hi Mohit,
I have been using your code recently and am trying to do multi-GPU training. But I find the multi-GPU and DDP usage in your code a bit hard to understand.
Specifically, i…
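(For comparison, the canonical single-node PyTorch DDP pattern is sketched below. This is not the repository's actual training code, just standard `torch.distributed` usage, assuming a launch like `torchrun --nproc_per_node=<num_gpus> train.py`:)

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK for each spawned process.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 10).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])

    # ... training loop goes here: each rank sees its own data shard
    # (e.g. via DistributedSampler) and DDP all-reduces the gradients.

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```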
-
Hi,
Thank you for sharing your great work on GitHub.
I'm wondering, what is the correct way to open "variation_descriptions.pkl"?
`with open("variation_descriptions.pkl") as f: pickle.load(f)` cu…
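(For reference, pickle files have to be opened in binary mode, so the text-mode `open` above fails. A minimal sketch of the usual fix; in RLBench-style datasets this file typically holds a list of language descriptions:)

```python
import pickle

# Open in binary mode ("rb"); pickle.load on a text-mode handle raises an error.
with open("variation_descriptions.pkl", "rb") as f:
    descriptions = pickle.load(f)

print(descriptions)  # e.g. a list of strings describing the task variation
```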
-
Hi. I think ARM is excellent work for intelligent robots. When I ran the code with the latest RLBench 1.1.0 release, I ran into some RLBench errors.
```
Traceback (most recent call last):
…
```