-
```
Traceback (most recent call last):
  File "online_evaluation_rlbench/evaluate_policy.py", line 194, in
    var_success_rates = env.evaluate_task_on_multiple_variations(
  File "/3d_diffuser_actor/uti…
```
-
Thanks for your great work. I want to try playground.ipynb, but I don't have access to GPT-4, so I changed every 'gpt-4' in rlbench_config.yaml to 'gpt-3.5'. Now I get the error 'Invalid…
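An 'Invalid…' error at this point is often the API rejecting 'gpt-3.5', which is not a valid OpenAI model identifier (the chat model is named 'gpt-3.5-turbo'). A minimal sketch for swapping every occurrence in a loaded config, assuming the YAML parses into nested dicts/lists (e.g. via `yaml.safe_load`); the function name and the sample config below are hypothetical:

```python
def replace_model(node, old="gpt-4", new="gpt-3.5-turbo"):
    """Recursively replace a model name anywhere in a parsed config
    (nested dicts/lists, e.g. the result of yaml.safe_load)."""
    if isinstance(node, dict):
        return {k: replace_model(v, old, new) for k, v in node.items()}
    if isinstance(node, list):
        return [replace_model(v, old, new) for v in node]
    return new if node == old else node

# Made-up config fragment, just to show the recursion reaching every level.
cfg = {"planner": {"model": "gpt-4"}, "models": ["gpt-4", "ada"]}
print(replace_model(cfg))
# {'planner': {'model': 'gpt-3.5-turbo'}, 'models': ['gpt-3.5-turbo', 'ada']}
```

After the replacement, dump the structure back with `yaml.safe_dump` and retry; if the error persists, the rejected name is coming from somewhere other than this config.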
-
```
(rlxintong) hgdx@hgdx-System:~/RLBench/examples$ python imitation_learning.py
Traceback (most recent call last):
  File "imitation_learning.py", line 40, in
    demos = np.array(demos).flatten()
…
```
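If `np.array(demos)` is the call that raises here, a likely cause is that `demos` is a ragged nested list (an assumption about its shape), which recent NumPy versions refuse to coerce into an array without `dtype=object`. Flattening without NumPy sidesteps the issue; the demo values below are placeholders:

```python
from itertools import chain

# Placeholder for a ragged nested demo structure; np.array() on this
# either errors or yields an object array depending on the NumPy version.
demos = [["demo0", "demo1"], ["demo2"]]

# chain.from_iterable flattens one level without any dtype coercion.
flat_demos = list(chain.from_iterable(demos))
print(flat_demos)  # ['demo0', 'demo1', 'demo2']
```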
-
Thanks for your amazing work.
In my experiments, only the RLBench PutRubbishInBin task completed reliably with GPT-3.5, while all other tasks failed. Is there a significant difference between…
-
Hello, I have a question: how does RLBench obtain the target position? Does it use images and vision, or does the environment directly provide the end-effector and target positions from the …
-
Hi Stephen,
C2F-ARM is an ingenious method, and I tried to replicate the learning curves in the paper. However, I failed to achieve the reported performance on the tasks "stack_wine" and "ph…
-
Command used: `bash online_evaluation_rlbench/eval_peract.sh`
```
Loading model from train_logs/diffuser_actor_peract.pth
Gripper workspace
Gripper workspace size: [0.75823578 1.07414986 0.79873248]
…
```
-
I found that the depth data generated by gen_demonstration is quite different from other depth data. Is this an intended result?
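One common cause of mismatched depth data is the encoding: some pipelines store depth normalized to [0, 1] between the camera's near and far clipping planes, while others store metric depth in meters. A hedged sketch of the linear conversion between the two (the near/far values below are made up, not taken from any particular config):

```python
import numpy as np

def normalized_to_metric(depth_norm, near, far):
    """Linearly map depth normalized to [0, 1] between the near/far
    clipping planes back to metric depth (assumes a linear encoding)."""
    return near + depth_norm * (far - near)

# Tiny synthetic depth image in the normalized encoding.
d = np.array([[0.0, 0.5], [1.0, 0.25]])
print(normalized_to_metric(d, near=0.01, far=3.5))
```

Comparing the value ranges of the two datasets (e.g. one capped at 1.0, the other in meters) is a quick way to check whether this is the discrepancy.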
-
Thanks for your great work!
When I run
```
python src/train.py exp_rlbench_diffusion_policy=base rlbench_task=turn_tap exp_rlbench_diffusion_policy/rlbench_model@rlbench_model=pretrained_multimae…
```
-
Hi,
I'm wondering if it is possible to add support for computing forward kinematics for robot arms in RLBench, so that we can compute the actual required end-effector pose from `executed_demo_joint…
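In the meantime, a forward-kinematics pass can be sketched outside RLBench by chaining per-link homogeneous transforms; the Denavit-Hartenberg parameters below describe a toy planar two-link arm, not the actual simulated robot, and the function names are made up for illustration:

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform for one standard Denavit-Hartenberg link:
    Rz(theta) * Tz(d) * Tx(a) * Rx(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params, joint_positions):
    """Chain the per-link transforms to get the end-effector pose (4x4)."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_params, joint_positions):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# Toy planar 2-link arm: link lengths 1.0 and 0.5, both joints revolute.
dh = [(1.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
pose = forward_kinematics(dh, [np.pi / 2, 0.0])
print(pose[:3, 3])  # end-effector position, roughly [0, 1.5, 0]
```

For the real robot you would substitute its actual DH (or URDF-derived) parameters and feed in the recorded joint positions at each timestep.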