ray-project / ray

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
https://ray.io
Apache License 2.0

[rllib] "AttributeError: 'numpy.ndarray' object has no attribute 'items'" on certain turn-based MultiAgentEnvs with Dict obs space. #17706

Closed akshaygh0sh closed 3 years ago

akshaygh0sh commented 3 years ago

What is the problem?

When trying to train with DQN (or really any other algorithm; I even tried PPO) on the latest nightly installation of Ray, I get a strange error thrown by the DictFlatteningPreprocessor in its write() function. This error is NOT thrown on Ray versions 1.4.0 and below (I tested 1.4.0 and 1.3.0, and the training loop executed fine).
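The stack trace further down shows the failure occurs when write() calls .items() on an observation that is already a flattened numpy array rather than a dict. A minimal, self-contained illustration of that failure mode (a standalone sketch with made-up values, not RLlib code; the shape is taken from the logs below):

from collections import OrderedDict
import numpy as np

# An observation that has already been flattened into a 1-D float array
# (11776 = 8 * 8 * 111 + 4672, the flattened Dict obs size seen in the logs below).
flat_obs = np.zeros(11776, dtype=np.float32)

# The dict-flattening step assumes a dict-like observation and calls .items() on it,
# which on an ndarray raises:
# AttributeError: 'numpy.ndarray' object has no attribute 'items'
OrderedDict(sorted(flat_obs.items()))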

As for reproduction: since I am using a custom environment that I coded entirely from scratch and that has a fair number of dependencies, I tried to reproduce the error with an environment similar to mine. To run the reproduction code you will need PettingZoo (pip install pettingzoo) along with the classic environments (pip install pettingzoo[classic]), the latest nightly version of Ray, and some version of PyTorch.

I suspect this error can be reproduced with any multi-agent environment that has a Dict observation space (i.e. one with "action_mask" and "observation" keys per agent), since the error is thrown inside the DictFlatteningPreprocessor; a sketch of such a space follows.
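For reference, the kind of per-agent Dict observation space described above looks roughly like this (a sketch assuming gym is installed; the shapes are taken from the chess_v4 sampler logs below, the dtypes are approximated):

from gym import spaces
import numpy as np

# Per-agent Dict observation space with "action_mask" and "observation" keys,
# matching the shapes reported in the sampler output further down.
per_agent_obs_space = spaces.Dict({
    "action_mask": spaces.Box(low=0, high=1, shape=(4672,), dtype=np.int32),
    "observation": spaces.Box(low=0, high=1, shape=(8, 8, 111), dtype=np.float32),
})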

Ray version and other system information (Python version, TensorFlow version, OS):
Ray - latest nightly (as of 8/10/2021 @ 9:40 a.m. PST)
TensorFlow - 1.15.0
PyTorch - 1.9.0+cu111
PettingZoo - 1.11.0
Python - 3.7.11
OS - Windows 10

Reproduction (REQUIRED)

Please provide a short code snippet (less than 50 lines if possible) that can be copy-pasted to reproduce the issue. The snippet should have no external library dependencies (i.e., use fake or mock data / environments):

from pettingzoo.classic import chess_v4
from pettingzoo.test import api_test
from pettingzoo.test import performance_benchmark
from ray.rllib.env import PettingZooEnv

from gym import spaces
import torch

import ray
from ray import tune
from ray.rllib.agents.dqn.dqn_torch_model import DQNTorchModel
from ray.rllib.models.torch.fcnet import FullyConnectedNetwork
from ray.rllib.utils.torch_ops import FLOAT_MIN, FLOAT_MAX
from ray.rllib.agents import dqn

from ray.rllib.models import ModelCatalog
from ray.tune.registry import register_env

import os

def env_creator():
    env = chess_v4.env()
    return env

# Test the environment to make sure it's compatible with PettingZoo and therefore RLlib
env = chess_v4.env()
# api_test(env, num_cycles = 50, verbose_progress = True)
# performance_benchmark(env)

class ChessNetwork(DQNTorchModel):
    def __init__(self, obs_space, action_space, num_outputs, model_config, name, **kwargs):

        DQNTorchModel.__init__(self, obs_space, action_space, num_outputs, model_config, name, **kwargs)

        action_embed_size = 4672  # size of chess_v4's discrete action space
        self.action_embed_model = FullyConnectedNetwork(
            spaces.Box(low=-1, high=500, shape=(8, 8, 111)), action_space, action_embed_size,
            model_config, name + "action_embed")

    def forward(self, input_dict, state, seq_lens):
        action_mask = input_dict["obs"]["action_mask"]

        action_logits, _ = self.action_embed_model({
            "obs": input_dict["obs"]['observation']
        })

        # Masks out invalid actions
        inf_mask = torch.clamp(torch.log(action_mask), FLOAT_MIN, FLOAT_MAX)

        return action_logits + inf_mask, state

    def value_function(self):
        return self.action_embed_model.value_function()

# Register custom environment and custom network
ModelCatalog.register_custom_model("ChessNetwork", ChessNetwork)
register_env("ChessEnv", lambda config : PettingZooEnv(env_creator()))

config = dqn.DEFAULT_CONFIG.copy()
test_env = PettingZooEnv(env_creator())
obs_space = test_env.observation_space
act_space = test_env.action_space

config["multiagent"] = {
    "policies": {
        "player_0": (None, obs_space, act_space, {}),
        "player_1": (None, obs_space, act_space, {}),
    },
    "policy_mapping_fn": lambda agent_id: agent_id
}

config["num_gpus"] = 1
config["framework"] = "torch"
config["model"] = {
    "custom_model" : "ChessNetwork",
}
config["env"] = "ChessEnv"
config["horizon"] = 150
config["log_level"] = "INFO"
config["dueling"] = False
config["hiddens"] = []

config["train_batch_size"] = 200
config["rollout_fragment_length"] = 40

ray.init(ignore_reinit_error=True, object_store_memory=4294967296)  # Limit object store to 4 GB

tune.run(
    "DQN",
    name="Chess_Policy_PZ",
    stop={"episodes_total": 1000},
    checkpoint_freq=250,
    checkpoint_at_end=True,
    config=config,
    local_dir=os.getcwd()
)

ray.shutdown()
Errors Thrown When Running This Code:

2021-08-10 10:05:20,894 ERROR syncer.py:72 -- Log sync requires rsync to be installed.
(pid=15552) 2021-08-10 10:05:26,052 INFO dqn.py:188 -- In multi-agent mode, policies will be optimized sequentially by the multi-GPU optimizer. Consider setting simple_optimizer=True if this doesn't work for you.
(pid=15552) 2021-08-10 10:05:27,241 INFO catalog.py:412 -- Wrapping <class '__main__.ChessNetwork'> as <class 'ray.rllib.agents.dqn.dqn_torch_model.DQNTorchModel'>
(pid=15552) 2021-08-10 10:05:27,296 INFO catalog.py:412 -- Wrapping <class '__main__.ChessNetwork'> as <class 'ray.rllib.agents.dqn.dqn_torch_model.DQNTorchModel'>
(pid=15552) 2021-08-10 10:05:27,345 INFO torch_policy.py:170 -- TorchPolicy (worker=local) running on 1 GPU(s).
(pid=15552) 2021-08-10 10:05:29,698 INFO catalog.py:412 -- Wrapping <class '__main__.ChessNetwork'> as <class 'ray.rllib.agents.dqn.dqn_torch_model.DQNTorchModel'>
(pid=15552) 2021-08-10 10:05:29,746 INFO catalog.py:412 -- Wrapping <class '__main__.ChessNetwork'> as <class 'ray.rllib.agents.dqn.dqn_torch_model.DQNTorchModel'>
(pid=15552) 2021-08-10 10:05:29,784 INFO torch_policy.py:170 -- TorchPolicy (worker=local) running on 1 GPU(s).
(pid=15552) 2021-08-10 10:05:29,892 INFO rollout_worker.py:1379 -- Built policy map: {}
(pid=15552) 2021-08-10 10:05:29,892 INFO rollout_worker.py:1380 -- Built preprocessor map: {'player_0': <ray.rllib.models.preprocessors.DictFlatteningPreprocessor object at 0x00000234B0818588>, 'player_1': <ray.rllib.models.preprocessors.DictFlatteningPreprocessor object at 0x00000234B0835A08>}
(pid=15552) 2021-08-10 10:05:29,892 INFO rollout_worker.py:611 -- Built filter map: {'player_0': <ray.rllib.utils.filter.NoFilter object at 0x00000234B0818548>, 'player_1': <ray.rllib.utils.filter.NoFilter object at 0x00000234B0916608>}
(pid=15552) 2021-08-10 10:05:29,892 WARNING util.py:55 -- Install gputil for GPU system monitoring.
(pid=15552) 2021-08-10 10:05:29,892 INFO rollout_worker.py:742 -- Generating sample batch of size 40
(pid=15552) 2021-08-10 10:05:29,907 INFO sampler.py:592 -- Raw obs from env: { 0: { 'player_0': { 'action_mask': np.ndarray((4672,), dtype=int32, min=0.0, max=1.0, mean=0.004),
(pid=15552)                      'observation': np.ndarray((8, 8, 111), dtype=bool, min=0.0, max=1.0, mean=0.045)}}}
(pid=15552) 2021-08-10 10:05:29,907 INFO sampler.py:593 -- Info return from env: {0: {}}
(pid=15552) 2021-08-10 10:05:29,907 WARNING deprecation.py:39 -- DeprecationWarning: `policy_mapping_fn(agent_id)` has been deprecated. Use `policy_mapping_fn(agent_id, episode, **kwargs)` instead. This will raise an error in the future!
(pid=15552) 2021-08-10 10:05:29,907 INFO sampler.py:821 -- Preprocessed obs: np.ndarray((11776,), dtype=float32, min=0.0, max=1.0, mean=0.029)
(pid=15552) 2021-08-10 10:05:29,907 INFO sampler.py:825 -- Filtered obs: np.ndarray((11776,), dtype=float32, min=0.0, max=1.0, mean=0.029)
(pid=15552) 2021-08-10 10:05:29,907 INFO sampler.py:1012 -- Inputs to compute_actions():
(pid=15552) 
(pid=15552) { 'player_0': [ { 'data': { 'agent_id': 'player_0',
(pid=15552)                             'env_id': 0,
(pid=15552)                             'info': {},
(pid=15552)                             'obs': np.ndarray((11776,), dtype=float32, min=0.0, max=1.0, mean=0.029),
(pid=15552)                             'prev_action': None,
(pid=15552)                             'prev_reward': 0.0,
(pid=15552)                             'rnn_state': None},
(pid=15552)                   'type': 'PolicyEvalData'}]}
(pid=15552) 
(pid=15552) C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\numpy\core\_methods.py:179: RuntimeWarning: overflow encountered in reduce
(pid=15552)   ret = umr_sum(arr, axis, dtype, out, keepdims, where=where)
(pid=15552) 2021-08-10 10:05:30,011 INFO sampler.py:1033 -- Outputs of compute_actions():
(pid=15552) 
(pid=15552) { 'player_0': ( np.ndarray((1,), dtype=int64, min=1245.0, max=1245.0, mean=1245.0),
(pid=15552)                 [],
(pid=15552)                 { 'action_dist_inputs': np.ndarray((1, 4672), dtype=float32, min=-3.3999999521443642e+38, max=0.004, mean=-inf),
(pid=15552)                   'action_logp': np.ndarray((1,), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                   'action_prob': np.ndarray((1,), dtype=float32, min=1.0, max=1.0, mean=1.0),
(pid=15552)                   'q_values': np.ndarray((1, 4672), dtype=float32, min=-3.3999999521443642e+38, max=0.004, mean=-inf)})}
(pid=15552) 
(pid=15552) 2021-08-10 10:05:30,166 INFO simple_list_collector.py:659 -- Trajectory fragment after postprocess_trajectory():
(pid=15552) 
(pid=15552) { 'player_0': { 'actions': np.ndarray((20,), dtype=int64, min=77.0, max=4311.0, mean=1898.75),
(pid=15552)                 'agent_index': np.ndarray((20,), dtype=int32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                 'dones': np.ndarray((20,), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                 'eps_id': np.ndarray((20,), dtype=int32, min=77635403.0, max=77635403.0, mean=77635403.0),
(pid=15552)                 'infos': np.ndarray((20,), dtype=object, head={}),
(pid=15552)                 'new_obs': np.ndarray((20, 11776), dtype=float32, min=0.0, max=1.0, mean=0.042),
(pid=15552)                 'obs': np.ndarray((20, 11776), dtype=float32, min=0.0, max=1.0, mean=0.042),
(pid=15552)                 'rewards': np.ndarray((20,), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                 'unroll_id': np.ndarray((20,), dtype=int32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                 'weights': np.ndarray((20,), dtype=float32, min=1.0, max=1.0, mean=1.0)},
(pid=15552)   'player_1': { 'actions': np.ndarray((19,), dtype=int64, min=77.0, max=4290.0, mean=2398.579),
(pid=15552)                 'agent_index': np.ndarray((19,), dtype=int32, min=1.0, max=1.0, mean=1.0),
(pid=15552)                 'dones': np.ndarray((19,), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                 'eps_id': np.ndarray((19,), dtype=int32, min=77635403.0, max=77635403.0, mean=77635403.0),
(pid=15552)                 'infos': np.ndarray((19,), dtype=object, head={}),
(pid=15552)                 'new_obs': np.ndarray((19, 11776), dtype=float32, min=0.0, max=1.0, mean=0.048),
(pid=15552)                 'obs': np.ndarray((19, 11776), dtype=float32, min=0.0, max=1.0, mean=0.048),
(pid=15552)                 'rewards': np.ndarray((19,), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                 'unroll_id': np.ndarray((19,), dtype=int32, min=1.0, max=1.0, mean=1.0),
(pid=15552)                 'weights': np.ndarray((19,), dtype=float32, min=1.0, max=1.0, mean=1.0)}}
(pid=15552) 
(pid=15552) 2021-08-10 10:05:30,166 INFO rollout_worker.py:780 -- Completed sample batch:
(pid=15552) 
(pid=15552) { 'count': 40,
(pid=15552)   'policy_batches': { 'player_0': { 'actions': np.ndarray((20,), dtype=int64, min=77.0, max=4311.0, mean=1898.75),
(pid=15552)                                     'agent_index': np.ndarray((20,), dtype=int32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                                     'dones': np.ndarray((20,), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                                     'eps_id': np.ndarray((20,), dtype=int32, min=77635403.0, max=77635403.0, mean=77635403.0),
(pid=15552)                                     'infos': np.ndarray((20,), dtype=object, head={}),
(pid=15552)                                     'new_obs': np.ndarray((20, 11776), dtype=float32, min=0.0, max=1.0, mean=0.042),
(pid=15552)                                     'obs': np.ndarray((20, 11776), dtype=float32, min=0.0, max=1.0, mean=0.042),
(pid=15552)                                     'rewards': np.ndarray((20,), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                                     'unroll_id': np.ndarray((20,), dtype=int32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                                     'weights': np.ndarray((20,), dtype=float32, min=1.0, max=1.0, mean=1.0)},
(pid=15552)                       'player_1': { 'actions': np.ndarray((19,), dtype=int64, min=77.0, max=4290.0, mean=2398.579),
(pid=15552)                                     'agent_index': np.ndarray((19,), dtype=int32, min=1.0, max=1.0, mean=1.0),
(pid=15552)                                     'dones': np.ndarray((19,), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                                     'eps_id': np.ndarray((19,), dtype=int32, min=77635403.0, max=77635403.0, mean=77635403.0),
(pid=15552)                                     'infos': np.ndarray((19,), dtype=object, head={}),
(pid=15552)                                     'new_obs': np.ndarray((19, 11776), dtype=float32, min=0.0, max=1.0, mean=0.048),
(pid=15552)                                     'obs': np.ndarray((19, 11776), dtype=float32, min=0.0, max=1.0, mean=0.048),
(pid=15552)                                     'rewards': np.ndarray((19,), dtype=float32, min=0.0, max=0.0, mean=0.0),
(pid=15552)                                     'unroll_id': np.ndarray((19,), dtype=int32, min=1.0, max=1.0, mean=1.0),
(pid=15552)                                     'weights': np.ndarray((19,), dtype=float32, min=1.0, max=1.0, mean=1.0)}},
(pid=15552)   'type': 'MultiAgentBatch'}
(pid=15552) 
(pid=15552) 2021-08-10 10:05:30,193 WARNING replay_buffer.py:44 -- Estimated max memory usage for replay buffer is 4.7124 GB (50000.0 batches of size 1, 94248 bytes each), available system memory is 16.55656448 GB
2021-08-10 10:05:31,139 ERROR trial_runner.py:773 -- Trial DQN_ChessEnv_24cdc_00000: Error processing event.
Traceback (most recent call last):
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\tune\trial_runner.py", line 739, in _process_trial
    results = self.trial_executor.fetch_result(trial)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\tune\ray_trial_executor.py", line 746, in fetch_result
    result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\_private\client_mode_hook.py", line 82, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\worker.py", line 1621, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(AttributeError): ray::DQN.train() (pid=15552, ip=10.0.0.37, repr=DQN)
  File "python\ray\_raylet.pyx", line 536, in ray._raylet.execute_task
  File "python\ray\_raylet.pyx", line 486, in ray._raylet.execute_task.function_executor
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\_private\function_manager.py", line 563, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\agents\trainer.py", line 651, in train
    raise e
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\agents\trainer.py", line 637, in train
    result = Trainable.train(self)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\tune\trainable.py", line 237, in train
    result = self.step()
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\agents\trainer_template.py", line 193, in step
    res = next(self.train_exec_impl)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 756, in __next__
    return next(self.built_iterator)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 783, in apply_foreach
    for item in it:
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 843, in apply_filter
    for item in it:
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 843, in apply_filter
    for item in it:
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 783, in apply_foreach
    for item in it:
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 843, in apply_filter
    for item in it:
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 1075, in build_union
    item = next(it)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 756, in __next__
    return next(self.built_iterator)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 783, in apply_foreach
    for item in it:
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 783, in apply_foreach
    for item in it:
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\util\iter.py", line 783, in apply_foreach
    for item in it:
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\execution\rollout_ops.py", line 75, in sampler
    yield workers.local_worker().sample()
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 744, in sample
    batches = [self.input_reader.next()]
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 100, in next
    batches = [self.get_data()]
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 230, in get_data
    item = next(self.rollout_provider)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 614, in _env_runner
    sample_collector=sample_collector,
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\evaluation\sampler.py", line 819, in _process_observations
    policy_id).transform(raw_obs)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\models\preprocessors.py", line 259, in transform
    self.write(observation, array, 0)
  File "C:\Users\408aa\Anaconda3\envs\rl_env\lib\site-packages\ray\rllib\models\preprocessors.py", line 266, in write
    observation = OrderedDict(sorted(observation.items()))
AttributeError: 'numpy.ndarray' object has no attribute 'items'
Result for DQN_ChessEnv_24cdc_00000:
  {}

If the code snippet cannot be run by itself, the issue will be closed with "needs-repro-script".

akshaygh0sh commented 3 years ago

Mentioned in #17568.

sven1977 commented 3 years ago

Thanks for the bug report. The above PR should fix it (confirmed it works on my Mac with your script above). We'll get this fix into Ray 1.6.
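For context, the crash comes from the write() step calling .items() on an observation that already arrives as a flattened array; a guard of roughly the following shape avoids it (a hypothetical sketch of the general idea only, not the actual PR diff):

import numpy as np
from collections import OrderedDict

def flatten_write(observation, array, offset):
    # Hypothetical sketch: only call .items() on dict-like observations and
    # copy through observations that are already flat numpy arrays.
    if isinstance(observation, np.ndarray):
        array[offset:offset + observation.size] = observation.ravel()
        return
    observation = OrderedDict(sorted(observation.items()))
    # ... recurse into each child space here, as the real preprocessor does ...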