Genesis-Embodied-AI / RoboGen

A generative and self-guided robotic agent that endlessly proposes and masters new skills.
Apache License 2.0

visualization #15

Closed Guanbin-Huang closed 8 months ago

Guanbin-Huang commented 8 months ago

I ran `python run.py`, but it seems like there's nothing to visualize?

I only get the following image.

How do I get the visualization like what you did in the paper?

image

yufeiwang63 commented 8 months ago

The photo-realistic rendering shown in the paper is only supported by Genesis. This repo contains a re-implementation in pybullet, which uses pybullet's default renderer, so the rendering quality will be lower.

Running `python run.py` first randomly generates some tasks (this is the output you see in your terminal), then tries to solve them, which produces GIFs of the robot performing each task.

If you want to quickly see the robot working, you can skip the task generation and directly learn some pre-generated tasks by running `python execute_locomotion.py --task_config_path example_tasks/task_Turn_right/Turn_right.yaml` or `python execute.py --task_config_path example_tasks/Change_Lamp_Direction/Change_Lamp_Direction_The_robotic_arm_will_alter_the_lamps_light_direction_by_manipulating_the_lamps_head.yaml`. It should take roughly 10-20 minutes to solve the locomotion task with CEM, and about 3 hours to solve the manipulation task with RL.

See https://github.com/Genesis-Embodied-AI/RoboGen/issues/7#issuecomment-1804924329 about where the visuals of the learning results will be stored.
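If you are unsure where the result GIFs ended up after a run, a quick way is to search the output directory recursively. This is just a generic sketch: `results_root` is a placeholder for whatever output directory your run used (see the linked issue for the exact location), not a path defined by RoboGen itself.

```python
from pathlib import Path

def find_gifs(results_root):
    """Recursively collect all .gif files saved under a results directory."""
    return sorted(Path(results_root).rglob("*.gif"))

# Example: print every GIF produced under the (hypothetical) output folder.
for gif in find_gifs("results_root"):
    print(gif)
```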