avisingh599 / reward-learning-rl

[RSS 2019] End-to-End Robotic Reinforcement Learning without Reward Engineering
https://sites.google.com/view/reward-learning-rl/

ray.exceptions.RayActorError when running examples #20

Closed Jendker closed 4 years ago

Jendker commented 4 years ago

I am using the Anaconda setup on Ubuntu 18.04.

I created the conda environment with a slightly modified requirements.txt, using:

git+https://github.com/hartikainen/mujoco-py.git

instead of:

git+https://github.com/hartikainen/mujoco-py.git@29fcd26290c9417aef0f82d0628d29fa0dbf0fab

Then, when running the example command from the README, I get the following errors for every algorithm:

(softlearning) ➜  reward-learning-rl git:(master) ✗ softlearning run_example_local examples.classifier_rl \
--n_goal_examples 10 \
--task=Image48SawyerDoorPullHookEnv-v0 \
--algorithm RAQ \    
--num-samples 5 \
--n_epochs 300 \
--active_query_frequency 10

/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.1) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.

WARNING: Logging before flag parsing goes to stderr.
I1120 15:10:47.455576 140499140376000 __init__.py:34] MuJoCo library version is: 200
Warning: robosuite package not found. Run `pip install robosuite` to use robosuite environments.
I1120 15:10:47.474585 140499140376000 __init__.py:333] Registering multiworld mujoco gym environments
I1120 15:10:48.355936 140499140376000 __init__.py:14] Registering goal example multiworld mujoco gym environments
2019-11-20 15:10:48,428 INFO node.py:469 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2019-11-20_15-10-48_4328/logs.
2019-11-20 15:10:48,537 INFO services.py:407 -- Waiting for redis server at 127.0.0.1:49660 to respond...
2019-11-20 15:10:48,655 INFO services.py:407 -- Waiting for redis server at 127.0.0.1:39623 to respond...
2019-11-20 15:10:48,657 INFO services.py:804 -- Starting Redis shard with 3.35 GB max memory.

======================================================================
View the dashboard at http://10.152.10.45:8080/?token=dd68d1fa6ceee3a8e9315d6ec51c1c286dcc6c099b9ba066
======================================================================

2019-11-20 15:10:48,757 INFO node.py:483 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2019-11-20_15-10-48_4328/logs.
2019-11-20 15:10:48,758 INFO services.py:1427 -- Starting the Plasma object store with 5.03 GB memory using /dev/shm.
2019-11-20 15:10:48,923 INFO tune.py:64 -- Did not find checkpoint file in /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48.
2019-11-20 15:10:48,923 INFO tune.py:211 -- Starting a new experiment.
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 0/8 CPUs, 0/1 GPUs
Memory usage on this node: 6.4/16.8 GB

== Status ==
Using FIFO scheduling algorithm.
Resources requested: 8/8 CPUs, 0/1 GPUs
Memory usage on this node: 6.4/16.8 GB
Result logdir: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48
Number of trials: 5 ({'RUNNING': 1, 'PENDING': 4})
PENDING trials:
 - 9fb14214-algorithm=RAQ-seed=7383:    PENDING
 - 8decac7c-algorithm=RAQ-seed=2376:    PENDING
 - 53f276a8-algorithm=RAQ-seed=6849:    PENDING
 - cd59f5d5-algorithm=RAQ-seed=867: PENDING
RUNNING trials:
 - 384203e9-algorithm=RAQ-seed=3478:    RUNNING

(pid=4407) /home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.1) or chardet (3.0.4) doesn't match a supported version!
(pid=4407)   RequestsDependencyWarning)
(pid=4407) Warning: robosuite package not found. Run `pip install robosuite` to use robosuite environments.
(pid=4407) 
(pid=4407) WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
(pid=4407) For more information, please see:
(pid=4407)   * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
(pid=4407)   * https://github.com/tensorflow/addons
(pid=4407) If you depend on functionality not listed there, please file an issue.
(pid=4407) 
(pid=4407) 2019-11-20 15:10:51.510070: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
(pid=4407) 2019-11-20 15:10:51.516083: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
(pid=4407) 2019-11-20 15:10:51.516111: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:161] retrieving CUDA diagnostic information for host: desktop-in-the-corner
(pid=4407) 2019-11-20 15:10:51.516117: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:168] hostname: desktop-in-the-corner
(pid=4407) 2019-11-20 15:10:51.516152: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:192] libcuda reported version is: 410.48.0
(pid=4407) 2019-11-20 15:10:51.516171: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:196] kernel reported version is: 410.48.0
(pid=4407) 2019-11-20 15:10:51.516176: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:303] kernel version seems to match DSO: 410.48.0
(pid=4407) 2019-11-20 15:10:51.535151: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3998440000 Hz
(pid=4407) 2019-11-20 15:10:51.535673: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5593ac2cfe80 executing computations on platform Host. Devices:
(pid=4407) 2019-11-20 15:10:51.535695: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
(pid=4407) Using seed 3478
(pid=4407) WARNING: Logging before flag parsing goes to stderr.
(pid=4407) F1120 15:10:53.162240 140695482914240 core.py:90] GLEW initalization error: Missing GL version
(pid=4407) Fatal Python error: Aborted
(pid=4407) 
(pid=4407) Stack (most recent call first):
(pid=4407)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 841 in emit
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/logging/__init__.py", line 863 in handle
(pid=4407)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 891 in handle
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/logging/__init__.py", line 1514 in callHandlers
(pid=4407)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 1055 in handle
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/logging/__init__.py", line 1442 in _log
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/logging/__init__.py", line 1372 in log
(pid=4407)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 1038 in log
(pid=4407)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 476 in log
(pid=4407)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 309 in fatal
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/dm_control/mujoco/wrapper/core.py", line 90 in _error_callback
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/multiworld/envs/mujoco/mujoco_env.py", line 152 in initialize_camera
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/multiworld/core/image_env.py", line 75 in __init__
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/multiworld/envs/mujoco/__init__.py", line 324 in create_image_48_sawyer_door_pull_hook_v0
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/gym/envs/registration.py", line 86 in make
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/gym/envs/registration.py", line 125 in make
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/gym/envs/registration.py", line 183 in make
(pid=4407)   File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/environments/utils.py", line 48 in get_goal_example_environment_from_variant
(pid=4407)   File "/home/jedrzej/GitHub/reward-learning-rl/examples/classifier_rl/main.py", line 30 in _build
(pid=4407)   File "/home/jedrzej/GitHub/reward-learning-rl/examples/development/main.py", line 77 in _train
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/tune/trainable.py", line 151 in train
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/function_manager.py", line 783 in actor_method_executor
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/worker.py", line 887 in _process_task
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/worker.py", line 990 in _wait_for_and_process_task
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/worker.py", line 1039 in main_loop
(pid=4407)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/workers/default_worker.py", line 98 in <module>
2019-11-20 15:10:53,287 ERROR trial_runner.py:494 -- Error processing event.
Traceback (most recent call last):
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 443, in _process_trial
    result = self.trial_executor.fetch_result(trial)
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 315, in fetch_result
    result = ray.get(trial_future[0])
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/worker.py", line 2193, in get
    raise value
ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
2019-11-20 15:10:53,288 ERROR worker.py:1672 -- A worker died or was killed while executing task 00000000c1c94bc03ef5d18e0aaa637e52c77030.
2019-11-20 15:10:53,289 INFO ray_trial_executor.py:179 -- Destroying actor for trial 384203e9-algorithm=RAQ-seed=3478. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads.
[The same sequence then repeats verbatim for the remaining trials (pid=4500, seed 7383; pid=4403, seed 2376; pid=4409, seed 6849): the identical dependency and TensorFlow warnings, the fatal "GLEW initalization error: Missing GL version" raised from dm_control's core.py via multiworld's initialize_camera, the same stack trace, and the resulting ray.exceptions.RayActorError, with each actor destroyed in turn.]
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 0/8 CPUs, 0/1 GPUs
Memory usage on this node: 6.5/16.8 GB
Result logdir: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48
Number of trials: 5 ({'ERROR': 4, 'PENDING': 1})
ERROR trials:
 - 384203e9-algorithm=RAQ-seed=3478:    ERROR, 1 failures: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48/384203e9-algorithm=RAQ-seed=3478_2019-11-20_15-10-48haz3d1t1/error_2019-11-20_15-10-53.txt
 - 9fb14214-algorithm=RAQ-seed=7383:    ERROR, 1 failures: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48/9fb14214-algorithm=RAQ-seed=7383_2019-11-20_15-10-53bov79p2k/error_2019-11-20_15-10-57.txt
 - 8decac7c-algorithm=RAQ-seed=2376:    ERROR, 1 failures: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48/8decac7c-algorithm=RAQ-seed=2376_2019-11-20_15-10-57fbcmzv1m/error_2019-11-20_15-11-01.txt
 - 53f276a8-algorithm=RAQ-seed=6849:    ERROR, 1 failures: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48/53f276a8-algorithm=RAQ-seed=6849_2019-11-20_15-11-013svgatwh/error_2019-11-20_15-11-05.txt
PENDING trials:
 - cd59f5d5-algorithm=RAQ-seed=867: PENDING

(pid=4411) /home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.1) or chardet (3.0.4) doesn't match a supported version!
(pid=4411)   RequestsDependencyWarning)
(pid=4411) Warning: robosuite package not found. Run `pip install robosuite` to use robosuite environments.
(pid=4411) 
(pid=4411) WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
(pid=4411) For more information, please see:
(pid=4411)   * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
(pid=4411)   * https://github.com/tensorflow/addons
(pid=4411) If you depend on functionality not listed there, please file an issue.
(pid=4411) 
(pid=4411) 2019-11-20 15:11:07.190590: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
(pid=4411) 2019-11-20 15:11:07.197197: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
(pid=4411) 2019-11-20 15:11:07.197251: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:161] retrieving CUDA diagnostic information for host: desktop-in-the-corner
(pid=4411) 2019-11-20 15:11:07.197259: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:168] hostname: desktop-in-the-corner
(pid=4411) 2019-11-20 15:11:07.197314: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:192] libcuda reported version is: 410.48.0
(pid=4411) 2019-11-20 15:11:07.197344: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:196] kernel reported version is: 410.48.0
(pid=4411) 2019-11-20 15:11:07.197351: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:303] kernel version seems to match DSO: 410.48.0
(pid=4411) Using seed 867
(pid=4411) 2019-11-20 15:11:07.219099: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3998440000 Hz
(pid=4411) 2019-11-20 15:11:07.219622: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5558ada2ee80 executing computations on platform Host. Devices:
(pid=4411) 2019-11-20 15:11:07.219647: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
(pid=4411) WARNING: Logging before flag parsing goes to stderr.
(pid=4411) F1120 15:11:08.866851 140064546551232 core.py:90] GLEW initalization error: Missing GL version
(pid=4411) Fatal Python error: Aborted
(pid=4411) 
(pid=4411) Stack (most recent call first):
(pid=4411)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 841 in emit
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/logging/__init__.py", line 863 in handle
(pid=4411)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 891 in handle
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/logging/__init__.py", line 1514 in callHandlers
(pid=4411)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 1055 in handle
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/logging/__init__.py", line 1442 in _log
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/logging/__init__.py", line 1372 in log
(pid=4411)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 1038 in log
(pid=4411)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 476 in log
(pid=4411)   File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 309 in fatal
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/dm_control/mujoco/wrapper/core.py", line 90 in _error_callback
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/multiworld/envs/mujoco/mujoco_env.py", line 152 in initialize_camera
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/multiworld/core/image_env.py", line 75 in __init__
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/multiworld/envs/mujoco/__init__.py", line 324 in create_image_48_sawyer_door_pull_hook_v0
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/gym/envs/registration.py", line 86 in make
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/gym/envs/registration.py", line 125 in make
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/gym/envs/registration.py", line 183 in make
(pid=4411)   File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/environments/utils.py", line 48 in get_goal_example_environment_from_variant
(pid=4411)   File "/home/jedrzej/GitHub/reward-learning-rl/examples/classifier_rl/main.py", line 30 in _build
(pid=4411)   File "/home/jedrzej/GitHub/reward-learning-rl/examples/development/main.py", line 77 in _train
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/tune/trainable.py", line 151 in train
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/function_manager.py", line 783 in actor_method_executor
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/worker.py", line 887 in _process_task
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/worker.py", line 990 in _wait_for_and_process_task
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/worker.py", line 1039 in main_loop
(pid=4411)   File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/workers/default_worker.py", line 98 in <module>
2019-11-20 15:11:08,994 ERROR trial_runner.py:494 -- Error processing event.
Traceback (most recent call last):
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 443, in _process_trial
    result = self.trial_executor.fetch_result(trial)
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 315, in fetch_result
    result = ray.get(trial_future[0])
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/worker.py", line 2193, in get
    raise value
ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
2019-11-20 15:11:08,995 ERROR worker.py:1672 -- A worker died or was killed while executing task 000000007bf1b17a5387bcd84e9b0cb85e2fd64e.
2019-11-20 15:11:08,996 INFO ray_trial_executor.py:179 -- Destroying actor for trial cd59f5d5-algorithm=RAQ-seed=867. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads.
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 0/8 CPUs, 0/1 GPUs
Memory usage on this node: 6.5/16.8 GB
Result logdir: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48
Number of trials: 5 ({'ERROR': 5})
ERROR trials:
 - 384203e9-algorithm=RAQ-seed=3478:    ERROR, 1 failures: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48/384203e9-algorithm=RAQ-seed=3478_2019-11-20_15-10-48haz3d1t1/error_2019-11-20_15-10-53.txt
 - 9fb14214-algorithm=RAQ-seed=7383:    ERROR, 1 failures: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48/9fb14214-algorithm=RAQ-seed=7383_2019-11-20_15-10-53bov79p2k/error_2019-11-20_15-10-57.txt
 - 8decac7c-algorithm=RAQ-seed=2376:    ERROR, 1 failures: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48/8decac7c-algorithm=RAQ-seed=2376_2019-11-20_15-10-57fbcmzv1m/error_2019-11-20_15-11-01.txt
 - 53f276a8-algorithm=RAQ-seed=6849:    ERROR, 1 failures: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48/53f276a8-algorithm=RAQ-seed=6849_2019-11-20_15-11-013svgatwh/error_2019-11-20_15-11-05.txt
 - cd59f5d5-algorithm=RAQ-seed=867: ERROR, 1 failures: /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-20T15-10-48-2019-11-20T15-10-48/cd59f5d5-algorithm=RAQ-seed=867_2019-11-20_15-11-05k6so04ax/error_2019-11-20_15-11-08.txt

Traceback (most recent call last):
  File "/home/jedrzej/anaconda3/envs/softlearning/bin/softlearning", line 11, in <module>
    load_entry_point('softlearning', 'console_scripts', 'softlearning')()
  File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/scripts/console_scripts.py", line 202, in main
    return cli()
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/scripts/console_scripts.py", line 71, in run_example_local_cmd
    return run_example_local(example_module_name, example_argv)
  File "/home/jedrzej/GitHub/reward-learning-rl/examples/instrument.py", line 228, in run_example_local
    reuse_actors=True)
  File "/home/jedrzej/anaconda3/envs/softlearning/lib/python3.6/site-packages/ray/tune/tune.py", line 253, in run
    raise TuneError("Trials did not complete", errored_trials)
ray.tune.error.TuneError: ('Trials did not complete', [384203e9-algorithm=RAQ-seed=3478, 9fb14214-algorithm=RAQ-seed=7383, 8decac7c-algorithm=RAQ-seed=2376, 53f276a8-algorithm=RAQ-seed=6849, cd59f5d5-algorithm=RAQ-seed=867])

Any thoughts?

hartikainen commented 4 years ago

Just to confirm, does it work with git+https://github.com/hartikainen/mujoco-py.git@29fcd26290c9417aef0f82d0628d29fa0dbf0fab?

Jendker commented 4 years ago

As mentioned in this issue: https://github.com/avisingh599/reward-learning-rl/issues/19, the installer fails when it tries to check out 29fcd26290c9417aef0f82d0628d29fa0dbf0fab, so I have to install the newest mujoco-py from pip.

Following the recommendation from the other issue (and merging the two threads here): after running `pip install -U mujoco-py gym` I still get the same errors.

The errors mention:

(pid=29180) F1120 22:20:14.298504 140157146002880 core.py:90] GLEW initalization error: Missing GL version
(pid=29180) Fatal Python error: Aborted

so maybe GLEW is the problem here? But I was using MuJoCo with GLEW before without any issues; I have this line in my .bashrc file:

export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libGLEW.so
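For reference, here is a small stdlib-only sketch (my own, not part of softlearning) that I use to confirm whether variables such as `LD_PRELOAD` actually reach child processes, since Ray runs the trainable in separate worker processes:

```python
import os
import subprocess
import sys

# GL-related variables whose presence in child processes we want to verify.
GL_VARS = ("LD_PRELOAD", "MUJOCO_GL", "DISPLAY")

def inherited_gl_vars(extra_env=None):
    """Spawn a child Python process and return the GL vars it sees."""
    env = dict(os.environ)
    if extra_env:
        env.update(extra_env)
    out = subprocess.check_output(
        [sys.executable, "-c",
         "import os; print(','.join(v for v in %r if v in os.environ))"
         % (GL_VARS,)],
        env=env,
    ).decode().strip()
    return out.split(",") if out else []

print(inherited_gl_vars({"LD_PRELOAD": "/usr/lib/x86_64-linux-gnu/libGLEW.so"}))
```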
avisingh599 commented 4 years ago

Can you also try `unset LD_PRELOAD` and give it a shot, as pointed out by Vitchyr in this issue: https://github.com/openai/mujoco-py/issues/187

Jendker commented 4 years ago

Same issue. Trials did not complete.

avisingh599 commented 4 years ago

"Trials did not complete" is a standard error that we get from ray when the program fails for any reason. To get slightly more informative error message, could you try any of the following commands:

softlearning run_example_debug examples.classifier_rl \
--n_goal_examples 10 \
--task=Image48SawyerDoorPullHookEnv-v0 \
--algorithm VICERAQ \
--n_epochs 300 \
--active_query_frequency 10

Note that I have replaced run_example_local with run_example_debug, and removed --num-samples.
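If it helps to see why: when the trainable runs inside a remote Ray actor, the driver only learns that the worker died; running it inline surfaces the real traceback. A generic sketch of the effect (plain subprocess, not the actual Ray code):

```python
import subprocess
import sys

# Stand-in for the environment construction that fails inside the worker.
SNIPPET = "raise RuntimeError('GLEW initalization error: Missing GL version')"

def run_remote():
    # Like a Ray actor: the driver only sees an exit code, no traceback.
    return subprocess.run(
        [sys.executable, "-c", SNIPPET],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode

def run_inline():
    # Like run_example_debug: the real exception is raised in-process.
    try:
        exec(SNIPPET)
    except RuntimeError as exc:
        return str(exc)

print(run_remote())   # nonzero exit code, cause hidden
print(run_inline())   # the actual error message
```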

Jendker commented 4 years ago

Thank you for your time! Now I get:

softlearning run_example_debug examples.classifier_rl \
--n_goal_examples 10 \
--task=Image48SawyerDoorPullHookEnv-v0 \
--algorithm VICERAQ \
--n_epochs 300 \
--active_query_frequency 10
/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.1) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.

WARNING: Logging before flag parsing goes to stderr.
I1121 08:06:22.704531 140455096169920 acceleratesupport.py:13] OpenGL_accelerate module loaded
I1121 08:06:22.737692 140455096169920 arraydatatype.py:270] Using accelerated ArrayDatatype
I1121 08:06:22.997218 140455096169920 __init__.py:34] MuJoCo library version is: 200
Warning: robosuite package not found. Run `pip install robosuite` to use robosuite environments.
I1121 08:06:23.037145 140455096169920 __init__.py:333] Registering multiworld mujoco gym environments
I1121 08:06:24.895683 140455096169920 __init__.py:14] Registering goal example multiworld mujoco gym environments
2019-11-21 08:06:25,012 INFO tune.py:64 -- Did not find checkpoint file in /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-21T08-06-24-2019-11-21T08-06-24.
2019-11-21 08:06:25,012 INFO tune.py:211 -- Starting a new experiment.
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 0/8 CPUs, 0/1 GPUs
Memory usage on this node: 6.3/16.8 GB

Using seed 2637
2019-11-21 08:06:25.021778: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-21 08:06:25.134831: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-11-21 08:06:25.135436: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x562d63b5a2f0 executing computations on platform CUDA. Devices:
2019-11-21 08:06:25.135452: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): GeForce GTX 1060 6GB, Compute Capability 6.1
2019-11-21 08:06:25.137166: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3998440000 Hz
2019-11-21 08:06:25.138465: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x562d65141e00 executing computations on platform Host. Devices:
2019-11-21 08:06:25.138482: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
2019-11-21 08:06:25.138656: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.759
pciBusID: 0000:01:00.0
totalMemory: 5.93GiB freeMemory: 5.32GiB
2019-11-21 08:06:25.138674: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-11-21 08:06:25.139460: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-11-21 08:06:25.139470: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-11-21 08:06:25.139476: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-11-21 08:06:25.139607: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5151 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1)
F1121 08:06:26.303420 140455096169920 core.py:90] GLEW initalization error: Missing GL version
Fatal Python error: Aborted

Stack (most recent call first):
  File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 841 in emit
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/logging/__init__.py", line 863 in handle
  File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 891 in handle
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/logging/__init__.py", line 1514 in callHandlers
  File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 1055 in handle
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/logging/__init__.py", line 1442 in _log
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/logging/__init__.py", line 1372 in log
  File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 1038 in log
  File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 476 in log
  File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/__init__.py", line 309 in fatal
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/dm_control/mujoco/wrapper/core.py", line 90 in _error_callback
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/multiworld/envs/mujoco/mujoco_env.py", line 152 in initialize_camera
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/multiworld/core/image_env.py", line 75 in __init__
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/multiworld/envs/mujoco/__init__.py", line 324 in create_image_48_sawyer_door_pull_hook_v0
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/gym/envs/registration.py", line 70 in make
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/gym/envs/registration.py", line 101 in make
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/gym/envs/registration.py", line 156 in make
  File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/environments/utils.py", line 48 in get_goal_example_environment_from_variant
  File "/home/jedrzej/GitHub/reward-learning-rl/examples/classifier_rl/main.py", line 30 in _build
  File "/home/jedrzej/GitHub/reward-learning-rl/examples/development/main.py", line 77 in _train
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/trainable.py", line 151 in train
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/actor.py", line 479 in _actor_method_call
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/actor.py", line 138 in _remote
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/actor.py", line 124 in remote
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 111 in _train
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 143 in _start_trial
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 201 in start_trial
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 271 in step
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/tune.py", line 235 in run
  File "/home/jedrzej/GitHub/reward-learning-rl/examples/instrument.py", line 228 in run_example_local
  File "/home/jedrzej/GitHub/reward-learning-rl/examples/instrument.py", line 254 in run_example_debug
  File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/scripts/console_scripts.py", line 81 in run_example_debug_cmd
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 555 in invoke
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 956 in invoke
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 1137 in invoke
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 717 in main
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 764 in __call__
  File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/scripts/console_scripts.py", line 202 in main
  File "/home/jedrzej/anaconda3/envs/reward-learning-rl/bin/softlearning", line 11 in <module>
[1]    7269 abort (core dumped)  softlearning run_example_debug examples.classifier_rl --n_goal_examples 10

And the output is exactly the same either after unset LD_PRELOAD or without it.

avisingh599 commented 4 years ago

Looks like it's the GLEW/OpenGL error that people often encounter when using mujoco-py. Have you also tried the Docker instructions? I would suggest giving them a shot (the GPU version) if you haven't already.

On Wed, Nov 20, 2019 at 11:12 PM Jędrzej Beniamin Orbik < notifications@github.com> wrote:

Thank you for your time! Now I get:

softlearning run_example_debug examples.classifier_rl \ --n_goal_examples 10 \ --task=Image48SawyerDoorPullHookEnv-v0 \ --algorithm VICERAQ \ --n_epochs 300 \ --active_query_frequency 10 /home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/requests/init.py:91: RequestsDependencyWarning: urllib3 (1.25.1) or chardet (3.0.4) doesn't match a supported version! RequestsDependencyWarning)

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

WARNING: Logging before flag parsing goes to stderr. I1121 08:06:22.704531 140455096169920 acceleratesupport.py:13] OpenGL_accelerate module loaded I1121 08:06:22.737692 140455096169920 arraydatatype.py:270] Using accelerated ArrayDatatype I1121 08:06:22.997218 140455096169920 init.py:34] MuJoCo library version is: 200 Warning: robosuite package not found. Run pip install robosuite to use robosuite environments. I1121 08:06:23.037145 140455096169920 init.py:333] Registering multiworld mujoco gym environments I1121 08:06:24.895683 140455096169920 init.py:14] Registering goal example multiworld mujoco gym environments 2019-11-21 08:06:25,012 INFO tune.py:64 -- Did not find checkpoint file in /home/jedrzej/ray_results/multiworld/mujoco/Image48SawyerDoorPullHookEnv-v0/2019-11-21T08-06-24-2019-11-21T08-06-24. 2019-11-21 08:06:25,012 INFO tune.py:211 -- Starting a new experiment. == Status == Using FIFO scheduling algorithm. Resources requested: 0/8 CPUs, 0/1 GPUs Memory usage on this node: 6.3/16.8 GB

Using seed 2637 2019-11-21 08:06:25.021778: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-11-21 08:06:25.134831: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-11-21 08:06:25.135436: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x562d63b5a2f0 executing computations on platform CUDA. Devices: 2019-11-21 08:06:25.135452: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce GTX 1060 6GB, Compute Capability 6.1 2019-11-21 08:06:25.137166: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3998440000 Hz 2019-11-21 08:06:25.138465: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x562d65141e00 executing computations on platform Host. Devices: 2019-11-21 08:06:25.138482: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): , 2019-11-21 08:06:25.138656: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.759 pciBusID: 0000:01:00.0 totalMemory: 5.93GiB freeMemory: 5.32GiB 2019-11-21 08:06:25.138674: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-11-21 08:06:25.139460: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-11-21 08:06:25.139470: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-11-21 08:06:25.139476: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-11-21 08:06:25.139607: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5151 MB memory) -> physical GPU (device: 0, name: 
GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1) F1121 08:06:26.303420 140455096169920 core.py:90] GLEW initalization error: Missing GL version Fatal Python error: Aborted

Stack (most recent call first): File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/init.py", line 841 in emit File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/logging/init.py", line 863 in handle File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/init.py", line 891 in handle File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/logging/init.py", line 1514 in callHandlers File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/init.py", line 1055 in handle File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/logging/init.py", line 1442 in _log File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/logging/init.py", line 1372 in log File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/init.py", line 1038 in log File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/init.py", line 476 in log File "/home/jedrzej/.local/lib/python3.6/site-packages/absl/logging/init.py", line 309 in fatal File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/dm_control/mujoco/wrapper/core.py", line 90 in _error_callback File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/multiworld/envs/mujoco/mujoco_env.py", line 152 in initialize_camera File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/multiworld/core/image_env.py", line 75 in init File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/multiworld/envs/mujoco/init.py", line 324 in create_image_48_sawyer_door_pull_hook_v0 File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/gym/envs/registration.py", line 70 in make File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/gym/envs/registration.py", line 101 in make File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/gym/envs/registration.py", line 156 in 
make File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/environments/utils.py", line 48 in get_goal_example_environment_from_variant File "/home/jedrzej/GitHub/reward-learning-rl/examples/classifier_rl/main.py", line 30 in _build File "/home/jedrzej/GitHub/reward-learning-rl/examples/development/main.py", line 77 in _train File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/trainable.py", line 151 in train File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/actor.py", line 479 in _actor_method_call File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/actor.py", line 138 in _remote File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/actor.py", line 124 in remote File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 111 in _train File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 143 in _start_trial File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 201 in start_trial File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 271 in step File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/ray/tune/tune.py", line 235 in run File "/home/jedrzej/GitHub/reward-learning-rl/examples/instrument.py", line 228 in run_example_local File "/home/jedrzej/GitHub/reward-learning-rl/examples/instrument.py", line 254 in run_example_debug File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/scripts/console_scripts.py", line 81 in run_example_debug_cmd File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 555 in invoke File 
"/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 956 in invoke File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 1137 in invoke File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 717 in main File "/home/jedrzej/anaconda3/envs/reward-learning-rl/lib/python3.6/site-packages/click/core.py", line 764 in call File "/home/jedrzej/GitHub/reward-learning-rl/softlearning/scripts/console_scripts.py", line 202 in main File "/home/jedrzej/anaconda3/envs/reward-learning-rl/bin/softlearning", line 11 in [1] 7269 abort (core dumped) softlearning run_example_debug examples.classifier_rl --n_goal_examples 10

And the output is exactly the same either after unset LD_PRELOAD or without it.


Jendker commented 4 years ago

I tried to use Docker (the CPU version), but on macOS it unfortunately does not work because of some issues with OpenGL (probably Apple lagging behind with drivers again)...

But I was able to install and run the examples correctly after installing glfw:

brew install glfw

I didn't expect it to be necessary, because I never needed to have it installed when using mujoco-py before. Besides that, I had to remove PyOpenGL-accelerate from the requirements; everything works fine now, problem solved.
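In case it helps others on macOS, the requirements edit can be scripted; a sketch, demonstrated on a throwaway copy so it is safe to run anywhere (point the sed line at the real requirements.txt in your checkout):

```shell
# Sketch of the macOS fix: strip PyOpenGL-accelerate from the
# requirements before reinstalling. The file below is a stand-in;
# the version number is illustrative.
printf 'numpy\nPyOpenGL-accelerate==3.1.0\ngym\n' > /tmp/req_macos.txt
sed -i.bak '/PyOpenGL-accelerate/d' /tmp/req_macos.txt
cat /tmp/req_macos.txt   # PyOpenGL-accelerate line is gone
```

After that, brew install glfw and a fresh pip install -r requirements.txt got the examples running.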

fromWRF commented 4 years ago

Looks like it's the GLEW/OpenGL error that people often encounter when using mujoco-py. Have you also tried the Docker instructions? I would suggest giving them a shot (the GPU version) if you haven't already.

I used Docker to install, but I ran into the same problem: fatal: reference is not a tree: 29fcd26290c9417aef0f82d0628d29fa0dbf0fab
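That git error suggests the pinned mujoco-py commit is no longer reachable upstream. The workaround described at the top of this issue, dropping the pin from requirements.txt, can be sketched like this (shown on a throwaway copy; apply it to the requirements.txt used by the Docker build):

```shell
# Sketch: remove the stale commit pin so pip clones the default branch
# instead of failing with "reference is not a tree".
printf 'git+https://github.com/hartikainen/mujoco-py.git@29fcd26290c9417aef0f82d0628d29fa0dbf0fab\n' > /tmp/req_pin.txt
sed -i.bak 's|@29fcd26290c9417aef0f82d0628d29fa0dbf0fab||' /tmp/req_pin.txt
cat /tmp/req_pin.txt   # now: git+https://github.com/hartikainen/mujoco-py.git
```

Note that an unpinned dependency may pull in a newer mujoco-py than the one the authors tested against.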