richardrl opened this issue 6 years ago
Are there any updates on this?
I hit the same issue and traced it into ./mujoco_py/gl/eglshim.c: calling eglGetError() at the point where the failure occurs (right after eglMakeCurrent(eglDpy, EGL_NO_SURFACE, EGL_NO_SURFACE, eglCtx)) returns 0x3002, EGL_BAD_ACCESS. No further leads from there...
The mjpro150/sample/record example, which uses the same logic, works fine.
Did anyone solve this problem?
I've found that if I want to call env.render(), then I need to set LD_PRELOAD to /usr/lib/x86_64-linux-gnu/libGLEW.so:/usr/lib/nvidia-384/libGL.so. However, if I want to call env.sim.render(w, h), then I need LD_PRELOAD to be unset (e.g. run unset LD_PRELOAD).
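To make the dichotomy concrete, here is a minimal sketch; the env name is illustrative, and which call succeeds depends on how the process was launched:

```python
import gym

env = gym.make("HalfCheetah-v2")  # illustrative env name
env.reset()

# Succeeds only when the process was started with LD_PRELOAD pointing
# at libGLEW/libGL, as in the comment above:
env.render()

# Succeeds only when the process was started with LD_PRELOAD unset;
# comment out whichever call does not match your launch environment:
rgb = env.sim.render(640, 480)
```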
@vitchyr Thx a lot!
I got the same problem, too. How can I fix it?
@chenyiwen97 have you tried the solution posted above?
> @chenyiwen97 have you tried the solution posted above?
Yes. I added export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so:/usr/lib/nvidia-415/libGL.so to my .bashrc, but it still doesn't work.
> @chenyiwen97 have you tried the solution posted above?
Sorry, I misunderstood what you said. Now it works. Thanks a lot.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia-384
This works for me.
When training, the following line has to be commented out; only enable it for testing (rendering):
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so
Is there a solution to this (can't use env.render and env.sim.render in the same environment) yet?
I'm experiencing the same issue. My current workaround is to set and unset the environment variable LD_PRELOAD every time in my Python scripts.
Basically there is no way to call env.render() and env.sim.render(w, h) in the same environment, because the first one requires export LD_PRELOAD=/usr/lib/...../libGLEW.so but the second one requires that line to be commented out.
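Since the dynamic loader only reads LD_PRELOAD once at process startup, "setting and unsetting it in Python scripts" effectively means restarting the interpreter with a different environment. A minimal sketch of that idea, assuming the libGLEW path from earlier in the thread (the helper name is hypothetical):

```python
import os
import sys

# Path taken from this thread; adjust for your system.
GLEW = "/usr/lib/x86_64-linux-gnu/libGLEW.so"

def reexec_with_preload(enabled: bool) -> None:
    """Hypothetical helper: restart this script with LD_PRELOAD set or unset."""
    env = dict(os.environ)
    if enabled:
        env["LD_PRELOAD"] = GLEW
    else:
        env.pop("LD_PRELOAD", None)
    # Replace the current process so the loader re-reads LD_PRELOAD.
    os.execve(sys.executable, [sys.executable] + sys.argv, env)
```

Guard the call (e.g. by checking os.environ.get("LD_PRELOAD") first) so the script does not re-exec itself in a loop.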
After lots of trying and lots of reading, with no success, I tried (for no reason at all) something that worked! Using mode='rgb_array' in env.sim.render(), I received a warning that mode should be 'offscreen' or 'window'. Offscreen was exactly the mode in which the error was happening, so I changed to 'window'. At first, the only change was that, immediately after loading the env, a render window showed up, without the error from offscreen mode. Then, without closing the render window, I ran env.sim.render() with offscreen mode in the Python shell, and it worked! I received the image without any error! So I included in my code a line calling env.sim.render() with mode 'window' before the line with mode 'offscreen', which gets the camera image.
My code lines (inside my gym environment):
self.sim.render(mode='window', camera_name='first-person', width=16, height=16, depth=False)
img = self.sim.render(mode='offscreen', camera_name='first-person', width=16, height=16, depth=False)
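Filling in the surrounding boilerplate, a self-contained version of this workaround might look like the following; the model path is a placeholder, and camera_name='first-person' comes from the comment above:

```python
import mujoco_py

model = mujoco_py.load_model_from_path("model.xml")  # placeholder path
sim = mujoco_py.MjSim(model)

# Workaround from this thread: render once in 'window' mode first so a GL
# context exists, then the 'offscreen' render returns pixels without error.
sim.render(mode='window', camera_name='first-person', width=16, height=16, depth=False)
img = sim.render(mode='offscreen', camera_name='first-person', width=16, height=16, depth=False)
```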
The most important thing is to check which call is used inside env.render():
viewer = mujoco_py.MjRenderContextOffscreen(sim, device_id=...)
or
viewer = mujoco_py.MjViewer(sim)
The second one needs LD_PRELOAD set to /usr/lib/x86_64-linux-gnu/libGLEW.so:/usr/lib/nvidia-384/libGL.so, as mentioned above.
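To make the two code paths explicit, here is a minimal sketch; the model path is a placeholder, and the exact MjRenderContextOffscreen signature varies across mujoco-py versions (elsewhere in this thread it is called with an extra positional argument), so check your installed version:

```python
import mujoco_py

model = mujoco_py.load_model_from_path("model.xml")  # placeholder path
sim = mujoco_py.MjSim(model)

# Offscreen path: run the process with LD_PRELOAD unset.
offscreen = mujoco_py.MjRenderContextOffscreen(sim, device_id=0)

# Windowed path: run the process with LD_PRELOAD set as above.
# In practice use one or the other per process, per the discussion above.
viewer = mujoco_py.MjViewer(sim)
```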
Unsetting LD_PRELOAD doesn't work for me; is there a workaround that doesn't involve it? Edit: this is an issue related to OpenGL and the Ubuntu NVIDIA drivers; move to a non-Ubuntu distro to fix it.
I also had this problem, but it turned out to be an Ubuntu version issue. I got this error message using env.sim.render with Ubuntu 20.04, but once I reinstalled my machine with Ubuntu 18.04, the program ran perfectly.
Here is a better solution I found: https://github.com/openai/mujoco-py/issues/390#issuecomment-525385434
Here is a full write-up targeting our HPC cluster: https://github.com/geyang/jaynes-starter-kit/tree/master/07_supercloud_setup
I don't even have a /usr/lib/nvidia-384 folder, only an nvidia folder without any .so files inside, yet CUDA is installed and working properly. Normal rendering in a window works, but the offscreen context does not; it fails on:
data = self.sim.render(width=width, height=height, camera_name=camera_name)
Running unset LD_PRELOAD helped.
> I don't even have a /usr/lib/nvidia-384 folder, only an nvidia folder without any .so files inside, yet CUDA is installed and working properly. Normal rendering in a window works, but the offscreen context does not; it fails on:
> data = self.sim.render(width=width, height=height, camera_name=camera_name)
> Running unset LD_PRELOAD helped.
I have the same issue as you, but unset LD_PRELOAD is useless for me.
@QUIlToT As far as I understand, you cannot use Gym for rendering in a window and in the background at the same time on Ubuntu. You need to switch between setting LD_PRELOAD and unsetting it.
> @QUIlToT As far as I understand, you cannot use Gym for rendering in a window and in the background at the same time on Ubuntu. You need to switch between setting LD_PRELOAD and unsetting it.
Thx! You made my day!
> I don't even have a /usr/lib/nvidia-384 folder, only an nvidia folder without any .so files inside, yet CUDA is installed and working properly. Normal rendering in a window works, but the offscreen context does not; it fails on:
> data = self.sim.render(width=width, height=height, camera_name=camera_name)
> Running unset LD_PRELOAD helped.
Hi, I'm running the code below, which throws a RuntimeError at env.render(mode='rgb_array'), as in the figure below:
Could you give any advice?
Thanks, Mango
I was rendering on a remote server and met the same problem. I started a VNC server on display :9, ran export DISPLAY=:9 in the terminal, and then the problem was fixed.
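If you prefer to do this from Python, setting DISPLAY before any rendering context is created should have the same effect (a sketch, assuming the VNC server above is serving display :9):

```python
import os

# Must run before importing mujoco_py or creating any GL context,
# since the X display is read when the first window/context is opened.
os.environ["DISPLAY"] = ":9"
```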
@huangjiancong1's and @vitchyr's answer guided me to the right solution on an ARM M1 Mac running an Ubuntu 20.04 VM. The NVIDIA drivers don't exist on this system (which has an Apple GPU), but making libGLX_mesa available for linking fixed things for me.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libGLEW.so:/usr/lib/aarch64-linux-gnu/libGLX_mesa.so.0
For rgb_array mode, this code works for me:
self.viewer = mujoco_py.MjRenderContextOffscreen(self.sim, None, device_id=-1)
img = self.viewer.read_pixels(512, 512, depth=False)[::-1,:,:]
By default my CUDA visible device is 0.
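Filling in the boilerplate around that snippet, a self-contained version might look like this; the model path is a placeholder, the extra None positional argument follows the comment above (signatures differ across mujoco-py versions), and the [::-1] row flip compensates for OpenGL's bottom-up image origin:

```python
import mujoco_py

model = mujoco_py.load_model_from_path("model.xml")  # placeholder path
sim = mujoco_py.MjSim(model)

viewer = mujoco_py.MjRenderContextOffscreen(sim, None, device_id=-1)
viewer.render(512, 512)  # draw the scene offscreen at the requested size
img = viewer.read_pixels(512, 512, depth=False)[::-1, :, :]  # flip rows to top-down
```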
I want to use raw image observations, so I am trying to call _read_pixels_as_in_window(self) in mjviewer.py. However, I am getting this error:
My setup works for training and for rendering video of the non-pixel MuJoCo environments on my screen, but this _read_pixels_as_in_window(self) call, which I am using to access raw image data, fails. In fact, even this method was working until I tried installing CUDA, which overwrote my nvidia-384 driver with nvidia-390 and broke everything. I think I have now rolled everything back to nvidia-384 properly, yet read_pixels is still not working.