Unity-Technologies / ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
https://unity.com/products/machine-learning-agents

No module named 'mlagents_envs' #1967

Closed · BhaskarTrivedi closed 5 years ago

BhaskarTrivedi commented 5 years ago

I am trying to run the Obstacle Tower Jupyter notebook and am hitting a `No module named 'mlagents_envs'` error inside my Anaconda3 virtual environment.

```
ModuleNotFoundError                       Traceback (most recent call last)
in
----> 1 from obstacle_tower_env import ObstacleTowerEnv
      2 get_ipython().run_line_magic('matplotlib', 'inline')
      3 from matplotlib import pyplot as plt

~/obstacle-tower-env/obstacle_tower_env.py in
      4 import gym
      5 import numpy as np
----> 6 from mlagents_envs import UnityEnvironment
      7 from gym import error, spaces
      8 import os

ModuleNotFoundError: No module named 'mlagents_envs'
```

Installed versions:

```
mlagents       0.6.2   /home/bhaskartrivedi/ml-agents/ml-agents
mlagents-envs  0.6.2   /home/bhaskartrivedi/ml-agents/ml-agents-envs
```

Installation steps:

- Modified `setup.py` to version 0.6.2 (obstacle-tower supports version 0.6.2) in the `ml-agents-envs` and `ml-agents` subfolders.
- Used `pip install -e ./` to install `ml-agents` and `ml-agents-envs`.

`pip list` output (excerpt):

```
mlagents            0.6.2   /home/user/ml-agents/ml-agents
mlagents-envs       0.6.2   /home/user/ml-agents/ml-agents-envs
more-itertools      7.0.0
nbconvert           5.4.1
nbformat            4.4.0
notebook            5.7.8
numpy               1.14.5
obstacle-tower-env  1.3     /home/user/obstacle-tower-env
```

I am able to run the `mlagents-learn` command:

```
mlagents-learn --help

[ML-Agents ASCII art banner]

Usage:
  mlagents-learn [options]
  mlagents-learn --help

Options:
  --env=                  Name of the Unity executable [default: None].
  --curriculum=           Curriculum json directory for environment [default: None].
  --keep-checkpoints=     How many model checkpoints to keep [default: 5].
  --lesson=               Start learning from this lesson [default: 0].
  --load                  Whether to load the model or randomly initialize [default: False].
  --run-id=               The directory name for model and summary statistics [default: ppo].
  --num-runs=             Number of concurrent training sessions [default: 1].
  --save-freq=            Frequency at which to save model [default: 50000].
  --seed=                 Random seed used for training [default: -1].
  --slow                  Whether to run the game at training speed [default: False].
  --train                 Whether to train model, or only run inference [default: False].
  --base-port=            Base port for environment communication [default: 5005].
  --num-envs=             Number of parallel environments to use for training [default: 1].
  --docker-target-name=   Docker volume to store training-specific files [default: None].
  --no-graphics           Whether to run the environment in no-graphics mode [default: False].
  --debug                 Whether to run ML-Agents in debug mode with detailed logging [default: False].
```

I am also able to start training with the 3DBall example:

```
mlagents-learn config/trainer_config.yaml --run-id=firstRun --train

[ML-Agents ASCII art banner]

INFO:mlagents.trainers:{'--base-port': '5005', '--curriculum': 'None', '--debug': False, '--docker-target-name': 'None', '--env': 'None', '--help': False, '--keep-checkpoints': '5', '--lesson': '0', '--load': False, '--no-graphics': False, '--num-envs': '1', '--num-runs': '1', '--run-id': 'firstRun', '--save-freq': '50000', '--seed': '-1', '--slow': False, '--train': True, '': 'config/trainer_config.yaml'}
/home/bhaskartrivedi/ml-agents/ml-agents/mlagents/trainers/learn.py:141: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  trainer_config = yaml.load(data_file)
INFO:mlagents.envs:Start training by pressing the Play button in the Unity Editor.
INFO:mlagents.envs:
'Ball3DAcademy' started successfully!
Unity Academy name: Ball3DAcademy
        Number of Brains: 1
        Number of Training Brains : 1
        Reset Parameters :

Unity brain name: 3DBallLearning
        Number of Visual Observations (per agent): 0
        Vector Observation space size (per agent): 8
        Number of stacked Vector Observation: 1
        Vector Action space type: continuous
        Vector Action space size (per agent): [2]
        Vector Action descriptions: ,
2019-04-20 17:26:22.543753: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
INFO:mlagents.envs:Hyperparameters for the PPO Trainer of brain 3DBallLearning:
        batch_size: 64
        beta: 0.001
        buffer_size: 12000
        epsilon: 0.2
        gamma: 0.995
        hidden_units: 128
        lambd: 0.99
        learning_rate: 0.0003
        max_steps: 5.0e4
        normalize: True
        num_epoch: 3
        num_layers: 2
        time_horizon: 1000
        sequence_length: 64
        summary_freq: 1000
        use_recurrent: False
        summary_path: ./summaries/firstRun-0_3DBallLearning
        memory_size: 256
        use_curiosity: True
        curiosity_strength: 0.01
        curiosity_enc_size: 128
        model_path: ./models/firstRun-0/3DBallLearning
INFO:mlagents.trainers: firstRun-0: 3DBallLearning: Step: 1000. Time Elapsed: 21.638 s Mean Reward: 1.203. Std of Reward: 0.668. Training.
```
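To see which interpreter the notebook is actually using and where (if anywhere) the module resolves, a short standard-library check helps. A minimal sketch: the name `mlagents_envs` comes from the traceback above, while `mlagents.envs` is the dotted import path that ml-agents 0.6.x uses, included here as an assumption for comparison.

```python
# Minimal environment check, standard library only. Prints which Python
# interpreter is running and where each candidate module resolves from.
import importlib.util
import sys

print("Python executable:", sys.executable)

for name in ("mlagents_envs", "mlagents.envs"):
    try:
        spec = importlib.util.find_spec(name)
    except ModuleNotFoundError:  # raised when the parent package is missing
        spec = None
    if spec is None:
        print(name, "-> not importable from this environment")
    else:
        print(name, "->", spec.origin)
```

If `mlagents_envs` does not resolve while `mlagents.envs` does, the failure above is expected with these packages installed. Separately, the `YAMLLoadWarning` in the training log comes from PyYAML 5.1 deprecating bare `yaml.load()`; it is a warning, not an error, and the conventional fix looks like the sketch below (a hypothetical stand-in for the call in `learn.py`, not the repository's actual code):

```python
import yaml

# Passing an explicit loader silences the YAMLLoadWarning and avoids the
# unsafe default loader. The file path here is just the example config.
with open("config/trainer_config.yaml") as data_file:
    trainer_config = yaml.load(data_file, Loader=yaml.SafeLoader)
```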
vincentpierre commented 5 years ago

I think this issue is better suited for this repository: https://github.com/Unity-Technologies/obstacle-tower-env. I think you do not have the `mlagents_envs` that Obstacle Tower is expecting. You should try installing into a separate virtual environment and then do:

```sh
$ git clone git@github.com:Unity-Technologies/obstacle-tower-env.git
$ cd obstacle-tower-env
$ pip install -e .
```
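For the "separate virtual environment" step, one way to set it up is sketched below, assuming Python 3's built-in `venv` module; the environment name `otc-env` is illustrative, and a fresh conda environment works just as well:

```sh
# Create and activate a clean environment first, so the mlagents_envs that
# obstacle-tower-env expects does not clash with the copy installed from the
# ml-agents repo. The environment name "otc-env" is illustrative.
python3 -m venv otc-env
source otc-env/bin/activate

# Then run the clone/install commands above inside this environment.
```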
xiaomaogy commented 5 years ago

Thanks for reaching out to us. Hopefully you were able to resolve your issue. We are closing this due to inactivity, but if you need additional assistance, feel free to reopen the issue.

lock[bot] commented 4 years ago

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.