kevin-thankyou-lin opened this issue 2 years ago
The cause of the issue seems to be that I need to call env.step(action) at least once before rendering.

Specifically, rendering after env.reset() but before any env.step() gives:

(table and robot both collapsed to the center of the scene)

whereas rendering after both env.reset() and env.step() gives:
A simple demo of this behavior can be reproduced in demo_renderer.py:

import cv2
import numpy as np

env.reset()
for i in range(10000):
    # Uncommenting the next two lines makes the scene render correctly:
    # action = np.random.uniform(low, high)
    # obs, reward, done, _ = env.step(action)
    env.viewer.renderer.set_camera(np.ones(3), np.zeros(3), np.array([0, 0, 1]))
    frame = cv2.cvtColor(
        np.concatenate(env.viewer.renderer.render(modes=("rgb",)), axis=1),
        cv2.COLOR_RGB2BGR,
    )
    cv2.imwrite("test{}.png".format(i), (frame * 255).astype(np.uint8))
I think the 'correct' behavior would be for env.reset() to reset the arm, table, etc. to their proper starting positions, per the OpenAI Gym convention (though I could be wrong!)?
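If that is the case, a workaround consistent with the behavior above is to step the environment once with a zero action immediately after reset, so the simulator propagates the reset state before the first render. A minimal sketch (the `env` handle and its `action_dim` attribute are assumptions based on robosuite's API, not something confirmed here):

```python
import numpy as np

def warmup(env, n_steps=1):
    """Reset the env, then step it with zero actions so the simulator's
    forward dynamics run at least once before the first render call."""
    obs = env.reset()
    # action_dim: assumed robosuite attribute giving the action-space size
    zero_action = np.zeros(env.action_dim)
    for _ in range(n_steps):
        obs, reward, done, info = env.step(zero_action)
    return obs
```

After `warmup(env)`, the renderer should show the arm and table at their reset poses rather than collapsed at the origin, since the zero action leaves the robot (approximately) in place while still advancing the simulation.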
Hi!
When I try to manually set an iGibson renderer's extrinsics via renderer.set_camera(camera_pos, poi, up), I seem to need to negate the poi for the rendered RGB to give the correct image. On the other hand, when I do the same thing in pybullet (directly in iGibson), I don't need to negate the poi. Here are some (poi, -poi) pairs:

Pair 1
Pair 2
Pair 3
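For what it's worth, this kind of sign flip usually comes down to the look-at convention: whether the second argument is treated as a point the camera looks *at*, or effectively enters the view matrix with the opposite sign. Below is a standard right-handed look-at sketched in plain numpy; this is not iGibson's actual implementation, just an illustration of how the target's sign changes the resulting view:

```python
import numpy as np

def look_at(eye, target, up):
    """Standard OpenGL-style look-at: the camera at `eye` looks toward
    `target` along its local -z axis; returns a 4x4 view matrix."""
    f = target - eye
    f = f / np.linalg.norm(f)            # forward direction
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)            # camera right
    u = np.cross(s, f)                   # camera up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = view[:3, :3] @ (-eye)  # translate world into camera frame
    return view

eye = np.array([2.0, 1.0, 1.5])
poi = np.array([0.5, 0.0, 0.0])
up = np.array([0.0, 0.0, 1.0])

# The matrices built from poi and -poi differ, so if set_camera's second
# argument follows a different convention than pybullet's
# computeViewMatrix target, negating poi would compensate for it.
v_poi = look_at(eye, poi, up)
v_neg = look_at(eye, -poi, up)
```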
And for reference, the poi is the cube, and the camera positions are sampled from the following hemisphere of green spheres:

Also, would you know why the rendered cube doesn't have a texture?
I'm using the BoxObject directly inside lift.py: