-
Hi, thanks for your wonderful work!
I would like to inquire about the training cost on the RLBench dataset. Does the model have significant efficiency advantages compared to PerAct?
-
I ran the command `bash online_evalution_rlbench/eval_gnfactor.sh`, but it took a lot of time.
I tried to set `headless=True` when initializing the `Environment` class from `rlbench.environment`. However, th…
-
Hi Guanxing,
Thanks for your good work!
I have tried to run the training script `bash scripts/train_and_eval_w_geo_sem_dyna.sh ManiGaussian_BC 0,1 12345 ${exp_name}`, but encountered an `ImportError`.
…
-
I'm trying to train from scratch, but when I finally get to step **4. Train an ACT controller to follow spheres**, an error is reported:
```
In 'controller': ConfigTypeError raised while comp…
```
-
Hi, dear authors, thanks a lot for releasing your excellent work. I mainly have two questions I need your help with:
1. When training on simulated scenes, both CALVIN and RLBench, I found that the…
-
When looking at the task demonstrations provided by RLBench, I found that the waypoint settings for object grasping differ somewhat across tasks. Some objects have a waypoint set before close_gripper (for ex…
-
For the ALOHA dataset, one can generate the first proprio state using code from the fine-tuning script:
```
from absl import app, flags, logging
import flax
import jax
import optax
import tensorflo…
```
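For context, here is a minimal sketch of what an initial proprio state for an ALOHA-style setup can look like, using plain NumPy instead of the JAX/Flax stack above. The 14-dimensional layout (6 arm joints plus 1 gripper joint, for two arms) and the zero-vector fallback are assumptions for illustration, not the repository's actual code.

```python
import numpy as np

# Assumed ALOHA-style proprio layout: per arm, 6 arm joints + 1 gripper
# joint, with two arms -> 14 dimensions total. This layout is an
# illustrative assumption, not confirmed by the repository.
ARM_JOINTS = 6
GRIPPER_JOINTS = 1
N_PROPRIO = 2 * (ARM_JOINTS + GRIPPER_JOINTS)  # 14

def first_proprio_state(joint_positions=None):
    """Build the first proprio observation as a float32 vector.

    If no measured joint positions are provided, fall back to zeros
    (a neutral placeholder, not a calibrated home pose).
    """
    if joint_positions is None:
        joint_positions = np.zeros(N_PROPRIO)
    state = np.asarray(joint_positions, dtype=np.float32)
    if state.shape != (N_PROPRIO,):
        raise ValueError(f"expected shape ({N_PROPRIO},), got {state.shape}")
    return state

print(first_proprio_state().shape)  # (14,)
```

The point of the wrapper is simply to pin the dtype and dimensionality before the state is fed to the policy, so shape mismatches fail loudly at data-preparation time rather than inside the model.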
-
Hi, thanks a lot for releasing your awesome work. Recently, I have been running experiments to tackle the multi-task robot manipulation problem. Two widely used benchmarks are `RLBench` and `CALVIN`. I'm wonder…
-
Thank you for your wonderful open-sourced VLA models. For the fine-tuning stage, I still have some questions.
I used RLBench data collected by myself to prepare RLDS data, following your reposi…
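For reference, RLDS organizes data as episodes of ordered steps, where each step carries an observation, an action, a reward, and the boundary flags `is_first`/`is_last`/`is_terminal`. A minimal pure-Python sketch of that nesting (the field names follow the RLDS convention; the observation keys and the `is_terminal == is_last` simplification are illustrative assumptions):

```python
# Minimal RLDS-style episode structure, in plain Python for illustration.
def make_step(obs, action, reward, is_first=False, is_last=False):
    return {
        "observation": obs,      # e.g. {"image": ..., "state": ...} (assumed keys)
        "action": action,
        "reward": reward,
        "is_first": is_first,    # True only for the first step of an episode
        "is_last": is_last,      # True only for the final step
        "is_terminal": is_last,  # simplification: episode end == terminal state
    }

def make_episode(transitions):
    """Wrap a list of (obs, action, reward) tuples into one RLDS-style episode."""
    n = len(transitions)
    return {
        "steps": [
            make_step(o, a, r, is_first=(i == 0), is_last=(i == n - 1))
            for i, (o, a, r) in enumerate(transitions)
        ]
    }

ep = make_episode([
    ({"state": [0.0]}, [0.1], 0.0),
    ({"state": [0.1]}, [0.0], 1.0),
])
```

When converting self-collected RLBench demos, checking that every episode sets `is_first` exactly once and `is_last` on the final step is a cheap way to catch malformed RLDS output before fine-tuning.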
-
Hello, and thank you for providing the source code. I encountered a `RuntimeError` when attempting to run the training script for the rk_diffuser. Here is the traceback:
```
Getting demos for task o…
```