google-research / ravens

Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020.
https://transporternets.github.io
Apache License 2.0

Simulation #23

Open anjugopinath opened 2 years ago

anjugopinath commented 2 years ago

Hi,

Is it possible to run a simulator to view the gripper in action?

ZhouYFeng commented 1 year ago

Hi,

Have you solved your problem? @anjugopinath

I am running the training and testing process over SSH via VS Code as well, so there is no GUI. Is there some way I can get a dynamic view of the simulation?

Thanks.

SantiDiazC commented 11 months ago

Hi,

I'm not sure whether you are still interested in this or have already solved it. In case you are, just look at the demos.py file: when you set the flag --disp=True while running the Python script and the environment is instantiated, the PyBullet client is started there, and if disp==True the client uses the GUI interface. Then they use the debug visualizer camera and set its configuration so the camera is positioned to look at the scene appropriately (ravens/environments/environment.py):

  if disp:
    target = p.getDebugVisualizerCamera()[11]
    p.resetDebugVisualizerCamera(
        cameraDistance=1.1,
        cameraYaw=90,
        cameraPitch=-25,
        cameraTargetPosition=target)
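For completeness, this is roughly the path demos.py takes when you pass --disp=True. Below is a minimal sketch along those lines; the task name and assets path are just example values, and I'm assuming the constructor arguments and helper calls match what demos.py uses:

  from ravens import tasks
  from ravens.environments.environment import Environment

  # disp=True opens the PyBullet GUI (assumed to mirror what demos.py passes in).
  env = Environment('./ravens/environments/assets/', disp=True, hz=480)
  task = tasks.names['block-insertion']()  # any registered task name works here
  task.mode = 'train'
  env.set_task(task)

  agent = task.oracle(env)  # scripted oracle agent, as used by demos.py
  obs, info = env.reset(), None
  for _ in range(task.max_steps):
    act = agent.act(obs, info)
    obs, reward, done, info = env.step(act)  # the GUI view updates on every step
    if done:
      break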

Now that the visualizer is set up, the image in the debug window is updated every time you call env.step(). There is an implementation of env.render() as well, but it returns the color image (a numpy array) from the same camera as the debug visualizer, so you can display it with OpenCV (cv2.imshow()), for example. The only problem is that the render method only returns the last frame of the step, so I think it's more useful and easier to use the debug visualizer the way it is implemented.
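If you do want to go the render() route, a rough sketch (assuming env and act are set up as in the snippet above, and that env.render() returns an RGB numpy array as described) could look like:

  import cv2

  obs, reward, done, info = env.step(act)
  frame = env.render()  # color image from the same camera as the debug visualizer
  # OpenCV expects BGR channel order, so convert before displaying.
  cv2.imshow('ravens', cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
  cv2.waitKey(1)  # short, non-blocking refresh so the window actually redraws

Keep in mind this only shows one frame per call to step(), which is exactly the limitation mentioned above.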

I hope it may be useful!