google-deepmind / dm_control

Google DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo.
Apache License 2.0

How to test/run OpenGL visualization? #12

Closed vastsoun closed 5 years ago

vastsoun commented 6 years ago

Hi,

I installed dm_control yesterday on an Ubuntu 16.04 LTS system with an NVIDIA graphics card. I have already installed, and work with, mjpro150 and mjpro131 (both via mujoco-py and a custom C++ interface), so I know those work fine.

I created the following "quickstart" script, based on the one provided in the README:

import numpy as np
from dm_control import suite
# Load one task:
env = suite.load(domain_name="humanoid", task_name="stand", visualize_reward=True)
# Iterate over a task set:
for domain_name, task_name in suite.BENCHMARKING:
    env = suite.load(domain_name, task_name)
# Reset data
action_spec = env.action_spec()
time_step = env.reset()
# Step through an episode and print out reward, discount and observation.
while not time_step.last():
    action = np.random.uniform(action_spec.minimum,
                               action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)
    pixels = env.physics.render()
    print(time_step.reward, time_step.discount, time_step.observation)

It "seems" to run since the print() call in the loop does indeed output such as:

0.9902381173079999 1.0 OrderedDict([('orientations', array([ 0.92190494, -0.38741616,  0.75086908, -0.66045108,  0.34782824,
        0.93755827,  0.01050299,  0.99994484, -0.34440243, -0.93882211,
       -0.02782136, -0.99961291,  0.52972116, -0.84817185])), ('height', 1.2930647420729058), ('velocity', array([ -0.49068857,   0.54707421,   2.52160751,   4.02860209,
        -1.37774209,  -4.43418408,   9.91898924, -14.24762728,
         0.23987449]))])

However, the MuJoCo OpenGL visualization never starts.

To get set up, I created a virtualenv environment from scratch with Python 3.5.2 as the default interpreter (via PyCharm).

Also, I have to report that there is no Ubuntu package for libglew2.0 via apt-get on 16.04 LTS. I had to download it from here, and I'm not sure if this was the proper solution. I would appreciate it if someone who got it to work on Ubuntu 16.04 could report their steps.

Cheers,

Vassilios

efagerho commented 6 years ago

On Ubuntu 16.04 LTS the correct apt-get command is actually:

sudo apt-get install libglew-dev libglfw3-dev

I'm not sure which Ubuntu version the apt-get command in the updated installation instructions is meant for.

saran-t commented 6 years ago

Apparently libglew2.0 doesn't exist until zesty (i.e. Ubuntu 17). For xenial (i.e. 16.04 LTS) you can use libglew1.13 instead; see https://packages.ubuntu.com/search?keywords=libglew

The libglfw3 package should work on all recent versions of Ubuntu. Alternatively, as @efagerho suggested, you can install the -dev packages, which will also pull in the headers, but this shouldn't be necessary.

hanyas commented 6 years ago

Could this be related to #4? It seems like the render function is only used internally.

vastsoun commented 6 years ago

I tried #4, but it is definitely NOT the desired solution. It renders each frame as an image using matplotlib. Why do this if MuJoCo's own OpenGL window and visualization tools work? The question is how to actually get the latter working.

hanyas commented 6 years ago

I agree. I just found it weird that this solution was "recognized" by the dev and the issue was closed without clarification.

ethanluoyc commented 6 years ago

If I understand correctly, they have not released the viewer (that's what they say in their tech report).

Sent from my iPhone, sorry for the brevity.

alimuldal commented 6 years ago

We are working on an interactive viewer, but for the time being we only support offscreen rendering to numpy arrays.
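
For example, a single offscreen frame can be grabbed like this (the size and camera id here are just illustrative):

pixels = env.physics.render(height=240, width=320, camera_id=0)
# `pixels` is an RGB numpy array of shape (height, width, 3).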

ghost commented 6 years ago

Any news?

ChristianLemke commented 6 years ago

Any other workarounds?

NPC-Wang commented 6 years ago

import matplotlib.pyplot as plt

# ... step the environment ...

plt.imshow(env.physics.render(height, width, camera_id=0))
plt.show(block=False)
plt.pause(.01)
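
A fuller version of that loop, as a rough sketch along the lines of the quickstart script (the humanoid task and frame size are arbitrary), would be:

import matplotlib.pyplot as plt
import numpy as np
from dm_control import suite

env = suite.load(domain_name="humanoid", task_name="stand")
action_spec = env.action_spec()
time_step = env.reset()

plt.ion()
img = plt.imshow(env.physics.render(480, 480, camera_id=0))
while not time_step.last():
    action = np.random.uniform(action_spec.minimum, action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)
    # Update the existing image instead of calling imshow() every step.
    img.set_data(env.physics.render(480, 480, camera_id=0))
    plt.pause(.01)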

any news?

JinQiangWang2021 commented 6 years ago

Hi vastsoun, today I also ran into this issue. I used the same demo code provided by DeepMind, but unfortunately it gives me the following error:

Exception AttributeError: "'NoneType' object has no attribute 'destroy_window'" in <bound method GLFWContext.__del__ of <dm_control.render.glfw_renderer.GLFWContext object at 0x7ffba109c110>> ignored

Have you solved this, and what method did you use? I have to ask for your help.

Thanks

alimuldal commented 6 years ago

I don't have any updates on the interactive viewer yet. We're all very busy with other projects, so we haven't been able to dedicate much time to this.

adriansahlman commented 6 years ago

I threw together a class to capture frames from dm_control environments.

import collections.abc
import numpy as np
import matplotlib
import matplotlib.animation
import matplotlib.pyplot as plt

class DmControlRenderer:
    """Creates an object that captures environment states as pixel frames with the
        ability to create and display an animation of the frames.
    Arguments:
        width (int, optional): width of the captured frames.
        height (int, optional): height of the captured frames.
        camera_ids (sequence of ints, optional): The camera id's to render.
    """
    CACHED_FRAMES_COUNT = 200
    RENDER_METHODS = ['html', 'jshtml', 'matplotlib']
    def __init__(self, width=480, height=480, camera_ids=[0]):
        self.width = width
        self.height = height
        self.camera_ids = camera_ids

        self.env = None
        self.frames = None
        self.frame_count = 0

    def reset(self, env):
        """Remove old frames and set up to be able to capture new frames.
        Arguments:
            env (dm_control suite environment): Environment to capture frames from.
        """
        self.env = env
        self.frames = {camera_id: [] for camera_id in self.camera_ids}
        self.frame_count = 0

    def capture_state(self):
        """Captures the current state of the environment as a frame.
        """
        assert self.env is not None and self.frames is not None, \
            'Need to call reset() before capturing frames.'

        for camera_id in self.camera_ids:
            frame = self.env.physics.render(self.height, self.width, camera_id=camera_id)
            self.frames[camera_id].append(frame)

        self.frame_count += 1

    def render(self, camera_id=0, dt=10, method = 'html'):
        """Renders the frames captured from the environment.
        Arguments:
            camera_id (int or sequence of ints): The camera id or id's to render.
            dt (int, optional): The interval between frames in milliseconds.
            method (str, optional): The method used to render the video. Any html
                methods will attempt an import of `HTML` from `IPython.display`.
        """
        method = method.lower()
        assert method in self.RENDER_METHODS, \
            'Render method (not case sensitive) has to be one of {}.'.format(', '.join(
                ["'{}'".format(render_method) for render_method in self.RENDER_METHODS]))

        if isinstance(camera_id, collections.abc.Sequence):
            for ci in camera_id:
                assert isinstance(ci, (int, np.integer)), \
                    'Camera ids can only be integers'
                assert ci in self.frames, \
                    'Camera id {} not tracked'.format(ci)

            video = [np.hstack([self.frames[ci][frame_i] for ci in camera_id]) for frame_i in range(self.frame_count)]
        else:
            assert isinstance(camera_id, (int, np.integer)), \
                'Camera ids can only be integers'
            assert camera_id in self.frames, \
                'Camera id {} not tracked'.format(camera_id)

            video = self.frames[camera_id]

        if method != 'matplotlib':
            plt.ioff()

        fig = plt.figure()

        im = plt.imshow(video[0])

        def next_frame(idx):
            im.set_data(video[idx])
            return [im]

        animation = matplotlib.animation.FuncAnimation(fig, next_frame, interval=dt,
                                                       frames=len(video), save_count=self.CACHED_FRAMES_COUNT, blit=True)

        if method in ['html', 'jshtml']:
            from IPython.display import display, HTML

        if method == 'html':
            h = HTML(animation.to_html5_video())
            display(h)
        elif method == 'jshtml':
            h = HTML(animation.to_jshtml())
            display(h)
        elif method == 'matplotlib':
            return animation
        else:
            raise RuntimeError('No handler implemented for specified method \'{}\'.'.format(method))

        plt.ion()

Just call capture_state() after every step in the environment. I have only tested this in a Jupyter notebook.

Call render() to create an animation (it can be displayed as an HTML5 video, among other formats).
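
Usage might look roughly like this (a sketch with an arbitrary suite task):

from dm_control import suite
import numpy as np

env = suite.load(domain_name="cartpole", task_name="swingup")
renderer = DmControlRenderer(width=320, height=240, camera_ids=[0])
renderer.reset(env)

action_spec = env.action_spec()
time_step = env.reset()
while not time_step.last():
    action = np.random.uniform(action_spec.minimum, action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)
    # Capture a frame for each tracked camera after every step.
    renderer.capture_state()

# In a notebook this displays the episode as an HTML5 video.
renderer.render(camera_id=0, dt=10, method='html')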

Hope this helps anyone who hasn't figured out how to animate the frames.

superxingzheng commented 6 years ago

Hi @alimuldal, Any updates on the interactive viewer?

JAEarly commented 6 years ago

I couldn't get the above matplotlib methods to work on my laptop (they would render really slowly), so I wrote an alternative implementation using OpenCV. On the first pass it captures the frames as the environment is stepped through and saves them to a video file (mp4). On the second pass the video file is played back without needing to actually step through the environment. The second pass is not required if you merely want to save the episode to a file. The environment render call returns RGB, but OpenCV works in BGR, so a conversion is required before writing frames to the video file.

from dm_control import suite
import numpy as np
import cv2

def grabFrame(env):
    # Get RGB rendering of env
    rgbArr = env.physics.render(480, 600, camera_id=0)
    # Convert from RGB to BGR for use with OpenCV
    return cv2.cvtColor(rgbArr, cv2.COLOR_RGB2BGR)

# Load task:
env = suite.load(domain_name="cartpole", task_name="swingup")

# Setup video writer - mp4 at 30 fps
video_name = 'video.mp4'
frame = grabFrame(env)
height, width, layers = frame.shape
video = cv2.VideoWriter(video_name, cv2.VideoWriter_fourcc(*'mp4v'), 30.0, (width, height))

# First pass - Step through an episode and capture each frame
action_spec = env.action_spec()
time_step = env.reset()
while not time_step.last():
    action = np.random.uniform(action_spec.minimum,
                               action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)
    frame = grabFrame(env)
    # Write the rendered frame to the video file
    video.write(frame)

# End render to video file
video.release()

# Second pass - Playback
cap = cv2.VideoCapture(video_name)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Playback', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()

# Exit
cv2.destroyAllWindows()

wpumacay commented 6 years ago

Hi.

I just opened a pull request for this feature ( #45 ). It would be great if someone else could test it (in the lab we only have one student license :( ) to see if there are any issues that I haven't fixed yet.

alimuldal commented 5 years ago

We added an interactive environment viewer in d3ee413834db2c62e7378b739e4fbc4d7738f1f3. Please see here for usage instructions.
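
Usage is roughly along these lines (see the linked instructions for the exact API):

from dm_control import suite, viewer

env = suite.load(domain_name="humanoid", task_name="stand")
# Launches the interactive viewer window for the given environment.
viewer.launch(env)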