Closed — slerman12 closed this issue 2 years ago
Is this for environments under the `suite` or `locomotion` submodules? If the latter, the walker base class has an egocentric camera property that you should be able to render from -- I think simply by passing its name to the `camera_id` argument of the `render` method.
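To make the suggestion above concrete, here is a minimal sketch of rendering from a named camera via `physics.render`. It assumes `dm_control` and MuJoCo are installed, and the camera name `'walker/egocentric'` is an assumption -- check your walker's MJCF model for the exact name:

```python
# Hedged sketch: render from a walker's egocentric camera by name.
# Assumes dm_control + MuJoCo are installed. The camera name
# 'walker/egocentric' is an assumption; inspect the walker's MJCF
# model for the exact name in your setup.
from dm_control.locomotion.examples import basic_cmu_2019

env = basic_cmu_2019.cmu_humanoid_run_walls()
env.reset()

# camera_id accepts an integer index or the camera's full name.
pixels = env.physics.render(height=64, width=64, camera_id='walker/egocentric')
```

`pixels` comes back as an RGB `uint8` array of shape `(height, width, 3)`.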
We can probably use `locomotion`. It seems like the method you linked is `NotImplemented`?
@slerman12 That's the base class; concrete classes like `cmu_humanoid` and `ant` have it implemented.
@slerman12 Here's a code example with a randomly chosen task from the locomotion suite:
```python
from dm_control import composer
from dm_control.locomotion import arenas
from dm_control.locomotion.mocap import cmu_mocap_data
from dm_control.locomotion.tasks.reference_pose import tracking
from dm_control.locomotion.walkers import cmu_humanoid
from PIL import Image

if __name__ == "__main__":
    # Use a position-controlled CMU humanoid walker.
    walker_type = cmu_humanoid.CMUHumanoidPositionControlledV2020

    # Build an empty arena.
    arena = arenas.Floor()

    task = tracking.MultiClipMocapTracking(
        walker=walker_type,
        arena=arena,
        ref_path=cmu_mocap_data.get_path_for_cmu(version="2020"),
        dataset="walk_tiny",
        ref_steps=(1, 2, 3, 4, 5),
        min_steps=10,
        reward_type="comic",
    )

    # Enable the egocentric camera observable.
    # Defined: https://github.com/deepmind/dm_control/blob/main/dm_control/locomotion/walkers/cmu_humanoid.py#L448
    task._walker.observables.egocentric_camera.enabled = True

    env = composer.Environment(
        time_limit=30, task=task, random_state=0, strip_singleton_obs_buffer_dim=True
    )
    timestep = env.reset()
    Image.fromarray(timestep.observation["walker/egocentric_camera"]).show()
```
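The `walker/egocentric_camera` observable is an RGB `uint8` array, so it slots straight into standard image tooling. A small sketch of post-processing such a frame, using a synthetic random array as a stand-in for the actual observation (the 64x64 shape here is illustrative, not guaranteed by the API):

```python
# Sketch of post-processing an egocentric frame. A synthetic array stands
# in for timestep.observation["walker/egocentric_camera"], which is an
# RGB uint8 array; the 64x64 shape is illustrative.
import numpy as np
from PIL import Image

frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

img = Image.fromarray(frame)   # wrap the array as a PIL image
img = img.resize((84, 84))     # e.g. resize for a vision pipeline
arr = np.asarray(img)          # back to a numpy array
print(arr.shape, arr.dtype)    # (84, 84, 3) uint8
```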
I get this:
Wow, thanks!
I'm in a computer vision lab, and we'd be interested in running some experiments using egocentric visual inputs. Right now, we can render the environment with the `dm_control.suite.wrappers.pixels.Wrapper` wrapper, but we're not sure how to position the camera so that it tracks from the quadruped/walker/humanoid's head. Is this possible?