Lifelong-Robot-Learning / LIBERO

Benchmarking Knowledge Transfer in Lifelong Robot Learning

Does the init file provided in the code correspond to the demonstrations in the dataset? #12

Closed: xwinks closed this issue 8 months ago

xwinks commented 8 months ago

Thanks for this wonderful benchmark.

I want to replay the demonstrations to record more information; however, I found that the initial state of the objects does not match the demonstration.

The first video is the RGB demonstration provided with the dataset, and the second is my replayed trajectory.

https://github.com/Lifelong-Robot-Learning/LIBERO/assets/49355711/141d7422-eb01-47f9-9c3d-abbc1467b4e3

https://github.com/Lifelong-Robot-Learning/LIBERO/assets/49355711/931908b9-7ec6-4914-bfbc-1d79cb8b024d

Here is my code to replay the trajectory:

import os

import h5py

from libero.libero import benchmark, get_libero_path
from libero.libero.envs import DemoRenderEnv
from robosuite.wrappers import VisualizationWrapper

benchmark_dict = benchmark.get_benchmark_dict()
task_suite_name = "libero_object"  # can also choose libero_spatial, libero_goal, etc.
task_suite = benchmark_dict[task_suite_name]()

# retrieve a specific task
task_id = 2
task = task_suite.get_task(task_id)
task_name = task.name
print("the task name is:", task_name)
task_description = task.language
task_bddl_file = os.path.join(get_libero_path("bddl_files"), task.problem_folder, task.bddl_file)
print(f"[info] retrieving task {task_id} from suite {task_suite_name}, the " + \
      f"language instruction is {task_description}, and the bddl file is {task_bddl_file}")

# set up the environment
env_args = {
    "bddl_file_name": task_bddl_file,
    "camera_heights": 128,
    "camera_widths": 128,
    "has_renderer": True,
    "has_offscreen_renderer": False,
}
env = DemoRenderEnv(**env_args)
env = VisualizationWrapper(env.env)  # wrap the underlying robosuite env

env.seed(0)
env.reset()
# for benchmarking purposes, the suite fixes a set of initial states
init_states = task_suite.get_task_init_states(task_id)
init_state_id = 40
print("init states shape is:", init_states.shape)
# env.set_init_state(init_states[init_state_id])
env.sim.set_state_from_flattened(init_states[init_state_id])
print("the task_bddl_file is", task_bddl_file)

dataset_path = "libero/datasets/libero_object/pick_up_the_salad_dressing_and_place_it_in_the_basket_demo.hdf5"

# replay the recorded actions of demo_40 step by step
actions = h5py.File(dataset_path, "r")["data"]["demo_40"]["actions"]
print("actions shape is:", actions.shape)

for action in actions:
    obs, reward, done, info = env.step(action)
    env.render()

env.close()

If this is due to an initial-state mismatch, how can I get the initial state corresponding to each demonstration, so that I can replay it and record more information?

xwinks commented 8 months ago

I found that the initial state can be obtained from the states stored with each demonstration (demonstrations["states"]). Sorry to bother you!
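
For future readers, here is a minimal replay sketch along these lines. It assumes the HDF5 layout used above (per-step flattened simulator states stored under data/demo_<i>/states, with index 0 being the initial state; the exact key name may differ in your file) and uses the set_init_state helper that appears in the commented-out line of the original script:

import os

import h5py

from libero.libero import benchmark, get_libero_path
from libero.libero.envs import OffScreenRenderEnv

# look up the task exactly as in the original script
task_suite = benchmark.get_benchmark_dict()["libero_object"]()
task = task_suite.get_task(2)
task_bddl_file = os.path.join(get_libero_path("bddl_files"), task.problem_folder, task.bddl_file)

env = OffScreenRenderEnv(bddl_file_name=task_bddl_file, camera_heights=128, camera_widths=128)
env.seed(0)
env.reset()

dataset_path = "libero/datasets/libero_object/pick_up_the_salad_dressing_and_place_it_in_the_basket_demo.hdf5"
with h5py.File(dataset_path, "r") as f:
    demo = f["data"]["demo_40"]
    states = demo["states"][()]    # per-step simulator states (assumed key name)
    actions = demo["actions"][()]

# restore the demo's own initial state instead of the benchmark init file
env.set_init_state(states[0])

for action in actions:
    obs, reward, done, info = env.step(action)

env.close()

The key difference from the script above is that the environment is initialized from the demonstration's own first state rather than from task_suite.get_task_init_states, so the replayed actions should line up with the recorded trajectory.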