ray-project / ray

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
https://ray.io
Apache License 2.0

[RLlib] Error in executing demo example with attention net. #39769

Open AvisP opened 12 months ago

AvisP commented 12 months ago

What happened + What you expected to happen

I tried to run a demo example that uses an attention net with the PPO algorithm on RepeatAfterMeEnv, and it raises an error on the first training iteration. The log output I got is:

Failure # 1 (occurred at 2023-09-20_14-15-10)
The actor died because of an error raised in its creation task, ray::PPO.__init__() (pid=46330, ip=127.0.0.1, actor_id=e38188a850a22e98ba75747301000000, repr=PPO)
  File "....../lib/python3.11/site-packages/ray/rllib/evaluation/worker_set.py", line 227, in _setup
    self.add_workers(
  File "...../lib/python3.11/site-packages/ray/rllib/evaluation/worker_set.py", line 593, in add_workers
    raise result.get()
  File "...../lib/python3.11/site-packages/ray/rllib/utils/actor_manager.py", line 481, in __fetch_result
    result = ray.get(r)
             ^^^^^^^^^^
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=46333, ip=127.0.0.1, actor_id=8e0ace3b74f2a1b3809a668601000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x104703350>)
  File "...../lib/python3.11/site-packages/ray/rllib/evaluation/rollout_worker.py", line 525, in __init__
    self._update_policy_map(policy_dict=self.policy_dict)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1727, in _update_policy_map
    self._build_policy_map(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1838, in _build_policy_map
    new_policy = create_policy_for_framework(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/utils/policy.py", line 142, in create_policy_for_framework
    return policy_class(observation_space, action_space, merged_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/algorithms/ppo/ppo_torch_policy.py", line 49, in __init__
    TorchPolicyV2.__init__(
  File "...../lib/python3.11/site-packages/ray/rllib/policy/torch_policy_v2.py", line 92, in __init__
    model = self.make_rl_module()
            ^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/policy/policy.py", line 424, in make_rl_module
    marl_module = marl_spec.build()
                  ^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/marl_module.py", line 462, in build
    module = self.marl_module_class(module_config)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 315, in new_init
    previous_init(self, *args, **kwargs)
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/marl_module.py", line 58, in __init__
    super().__init__(config or MultiAgentRLModuleConfig())
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 307, in __init__
    self.setup()
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/marl_module.py", line 65, in setup
    self._rl_modules[module_id] = module_spec.build()
                                  ^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 104, in build
    module = self.module_class(module_config)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 315, in new_init
    previous_init(self, *args, **kwargs)
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 315, in new_init
    previous_init(self, *args, **kwargs)
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/torch/torch_rl_module.py", line 82, in __init__
    RLModule.__init__(self, *args, **kwargs)
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 307, in __init__
    self.setup()
  File "...../lib/python3.11/site-packages/ray/rllib/algorithms/ppo/ppo_rl_module.py", line 20, in setup
    catalog = self.config.get_catalog()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 189, in get_catalog
    return self.catalog_class(
           ^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/algorithms/ppo/ppo_catalog.py", line 69, in __init__
    super().__init__(
  File "...../lib/python3.11/site-packages/ray/rllib/core/models/catalog.py", line 111, in __init__
    self._determine_components_hook()
  File "...../lib/python3.11/site-packages/ray/rllib/core/models/catalog.py", line 131, in _determine_components_hook
    self._encoder_config = self._get_encoder_config(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/core/models/catalog.py", line 283, in _get_encoder_config
    raise NotImplementedError
NotImplementedError

During handling of the above exception, another exception occurred:

ray::PPO.__init__() (pid=46330, ip=127.0.0.1, actor_id=e38188a850a22e98ba75747301000000, repr=PPO)
  File "...../lib/python3.11/site-packages/ray/rllib/algorithms/algorithm.py", line 517, in __init__
    super().__init__(
  File "...../lib/python3.11/site-packages/ray/tune/trainable/trainable.py", line 169, in __init__
    self.setup(copy.deepcopy(self.config))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/algorithms/algorithm.py", line 639, in setup
    self.workers = WorkerSet(
                   ^^^^^^^^^^
  File "...../lib/python3.11/site-packages/ray/rllib/evaluation/worker_set.py", line 179, in __init__
    raise e.args[0].args[2]
NotImplementedError

Versions / Dependencies

RLlib: 2.6.3 (Python 3.11)

Reproduction script

I ran the script below with `--framework torch` and manually set `use_attention` to `True` in the model config:

https://github.com/ray-project/ray/blob/b31343a8afcafef0fbcf7e81f102aa947870265f/rllib/examples/attention_net.py

Issue Severity

High: It blocks me from completing my task.

sven1977 commented 12 months ago

Hey @AvisP , can you try the same but with the following slight config changes?

config._enable_rl_module_api = False
config._enable_learner_api = False

The reason is that this example only works on the "old API stack", whereas PPO already uses the new stack by default (which is why you need to switch it off via the above config changes). We are currently moving all examples to the new stack as well, but bear with us, as this might take a while.

sven1977 commented 12 months ago

Basically, do this on your config object:

config = ...
config.training(_enable_learner_api=False)
config.rl_module(_enable_rl_module_api=False)
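
Put together, a minimal end-to-end sketch for the attention example on the old API stack might look like this (an untested sketch, assuming Ray 2.6.x, the torch framework, and the example's RepeatAfterMeEnv):

from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.examples.env.repeat_after_me_env import RepeatAfterMeEnv

config = (
    PPOConfig()
    .environment(RepeatAfterMeEnv)
    .framework("torch")
    # Enable the attention net on the old ModelV2 stack.
    .training(model={"use_attention": True}, _enable_learner_api=False)
    .rl_module(_enable_rl_module_api=False)
)

algo = config.build()
print(algo.train())
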
AvisP commented 12 months ago

Thanks for your prompt response! Your suggestion solved the problem.

On a separate note, I have a custom environment (which is quite complicated and depends on multiple other packages) that I am struggling to run with RLlib's LSTM and attention networks, although it runs successfully with plain feedforward networks.

I posted an issue about it here, but I was wondering whether there is a private forum where I could discuss this rather than posting it as an issue. Thanks!

AvisP commented 11 months ago

So I managed to replicate the issue with an example environment, StatelessCartPole. When I run the following script:

from ray.rllib.examples.env.stateless_cartpole import StatelessCartPole
from ray.rllib.algorithms.ppo import PPO

model_dict = {
    "use_attention": True,
    "max_seq_len": 10,
    "attention_num_transformer_units": 1,
    "attention_dim": 32,
    "attention_memory_inference": 10,
    "attention_memory_training": 10,
    "attention_num_heads": 1,
    "attention_head_dim": 32,
    "attention_position_wise_mlp_dim": 32,
}

nn_config = {
    # config to pass to the env class
    # "env_config": env_config,
    # neural network config
    "lr": 0.003,
    "model": model_dict,
    "gamma": 0.95,
    "train_batch_size": 20_000,
    "num_rollout_worker": 1,
    "training": {"_enable_learner_api": False},
    "rl_module": {"_enable_rl_module_api": False},
}

nn_kwargs = {
    "env": StatelessCartPole,
    "config": nn_config,
}

a = PPO(**nn_kwargs)

print(a)

I get the following error message:

2023-09-26 13:01:32,694 ERROR actor_manager.py:500 -- Ray error, taking actor 1 out of service. The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=49587, ip=127.0.0.1, actor_id=aa728f30adf8893bd810948a01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x103033390>)
  File "/...../lib/python3.11/site-packages/ray/rllib/evaluation/rollout_worker.py", line 525, in __init__
    self._update_policy_map(policy_dict=self.policy_dict)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1727, in _update_policy_map
    self._build_policy_map(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/evaluation/rollout_worker.py", line 1838, in _build_policy_map
    new_policy = create_policy_for_framework(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/utils/policy.py", line 142, in create_policy_for_framework
    return policy_class(observation_space, action_space, merged_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/algorithms/ppo/ppo_torch_policy.py", line 49, in __init__
    TorchPolicyV2.__init__(
  File "/...../lib/python3.11/site-packages/ray/rllib/policy/torch_policy_v2.py", line 92, in __init__
    model = self.make_rl_module()
            ^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/policy/policy.py", line 424, in make_rl_module
    marl_module = marl_spec.build()
                  ^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/marl_module.py", line 462, in build
    module = self.marl_module_class(module_config)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 315, in new_init
    previous_init(self, *args, **kwargs)
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/marl_module.py", line 58, in __init__
    super().__init__(config or MultiAgentRLModuleConfig())
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 307, in __init__
    self.setup()
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/marl_module.py", line 65, in setup
    self._rl_modules[module_id] = module_spec.build()
                                  ^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 104, in build
    module = self.module_class(module_config)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 315, in new_init
    previous_init(self, *args, **kwargs)
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 315, in new_init
    previous_init(self, *args, **kwargs)
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/torch/torch_rl_module.py", line 82, in __init__
    RLModule.__init__(self, *args, **kwargs)
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 307, in __init__
    self.setup()
  File "/...../lib/python3.11/site-packages/ray/rllib/algorithms/ppo/ppo_rl_module.py", line 20, in setup
    catalog = self.config.get_catalog()
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/core/rl_module/rl_module.py", line 189, in get_catalog
    return self.catalog_class(
           ^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/algorithms/ppo/ppo_catalog.py", line 69, in __init__
    super().__init__(
  File "/...../lib/python3.11/site-packages/ray/rllib/core/models/catalog.py", line 111, in __init__
    self._determine_components_hook()
  File "/...../lib/python3.11/site-packages/ray/rllib/core/models/catalog.py", line 131, in _determine_components_hook
    self._encoder_config = self._get_encoder_config(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/core/models/catalog.py", line 283, in _get_encoder_config
    raise NotImplementedError
NotImplementedError
2023-09-26 13:01:32,697 ERROR actor_manager.py:500 -- Ray error, taking actor 2 out of service. The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=49588, ip=127.0.0.1, actor_id=68806610df392e6e5d041b2e01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x106e76010>)
  [identical traceback to actor 1 above, ending in NotImplementedError]
Traceback (most recent call last):
  File "/...../lib/python3.11/site-packages/ray/rllib/evaluation/worker_set.py", line 157, in __init__
    self._setup(
  File "/...../lib/python3.11/site-packages/ray/rllib/evaluation/worker_set.py", line 227, in _setup
    self.add_workers(
  File "/...../lib/python3.11/site-packages/ray/rllib/evaluation/worker_set.py", line 593, in add_workers
    raise result.get()
  File "/...../lib/python3.11/site-packages/ray/rllib/utils/actor_manager.py", line 481, in __fetch_result
    result = ray.get(r)
             ^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/_private/auto_init_hook.py", line 24, in auto_init_wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/_private/worker.py", line 2526, in get
    raise value
ray.exceptions.RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=49587, ip=127.0.0.1, actor_id=aa728f30adf8893bd810948a01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x103033390>)
  [identical RolloutWorker traceback to the one above]
NotImplementedError
 
During handling of the above exception, another exception occurred:
 
Traceback (most recent call last):
  File "/Users/paula/Desktop/Projects/RL Practice/RLLIB_Practice4/stateless_cartpole_attention.py", line 40, in <module>
    a = PPO(**nn_kwargs)
        ^^^^^^^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/algorithms/algorithm.py", line 517, in __init__
    super().__init__(
  File "/...../lib/python3.11/site-packages/ray/tune/trainable/trainable.py", line 169, in __init__
    self.setup(copy.deepcopy(self.config))
  File "/...../lib/python3.11/site-packages/ray/rllib/algorithms/algorithm.py", line 639, in setup
    self.workers = WorkerSet(
                   ^^^^^^^^^^
  File "/...../lib/python3.11/site-packages/ray/rllib/evaluation/worker_set.py", line 179, in __init__
    raise e.args[0].args[2]
NotImplementedError
(RolloutWorker pid=49588) 2023-09-26 13:01:32,674       WARNING algorithm_config.py:2558 -- Setting `exploration_config={}` because you set `_enable_rl_module_api=True`. When RLModule API are enabled, exploration_config can not be set. If you want to implement custom exploration behaviour, please modify the `forward_exploration` method of the RLModule at hand. On configs that have a default exploration config, this must be done with `config.exploration_config={}`.
(RolloutWorker pid=49588) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=49588, ip=127.0.0.1, actor_id=68806610df392e6e5d041b2e01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x106e76010>)
(RolloutWorker pid=49588)   [identical traceback to the one above, ending in NotImplementedError]
(RolloutWorker pid=49587) 2023-09-26 13:01:32,673       WARNING env.py:162 -- Your env doesn't have a .spec.max_episode_steps attribute. Your horizon will default to infinity, and your environment will not be reset.
(pid=49587) DeprecationWarning: `DirectStepOptimizer` has been deprecated. This will raise an error in the future!
(RolloutWorker pid=49587) 2023-09-26 13:01:32,674       WARNING algorithm_config.py:2558 -- Setting `exploration_config={}` because you set `_enable_rl_module_api=True`. When RLModule API are enabled, exploration_config can not be set. If you want to implement custom exploration behaviour, please modify the `forward_exploration` method of the RLModule at hand. On configs that have a default exploration config, this must be done with `config.exploration_config={}`.
(RolloutWorker pid=49587) Exception raised in creation task: The actor died because of an error raised in its creation task, ray::RolloutWorker.__init__() (pid=49587, ip=127.0.0.1, actor_id=aa728f30adf8893bd810948a01000000, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x103033390>)
(RolloutWorker pid=49587)   [identical traceback, deduplicated by Ray across the cluster; set RAY_DEDUP_LOGS=0 to disable log deduplication]

Is this related to the issue you mentioned, and how can I resolve it? Thanks!
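
For reference, here is the same setup expressed through the PPOConfig builder, applying the two flags exactly as suggested above (a minimal, untested sketch; note that the warning near the end of the log still shows `_enable_rl_module_api=True`, which suggests the nested "training"/"rl_module" dict keys above may not have been picked up by the old-style config dict):

from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.examples.env.stateless_cartpole import StatelessCartPole

# Attention-net model config, carried over from the dict config above.
model_dict = {
    "use_attention": True,
    "max_seq_len": 10,
    "attention_num_transformer_units": 1,
    "attention_dim": 32,
    "attention_memory_inference": 10,
    "attention_memory_training": 10,
    "attention_num_heads": 1,
    "attention_head_dim": 32,
    "attention_position_wise_mlp_dim": 32,
}

config = (
    PPOConfig()
    .environment(StatelessCartPole)
    .framework("torch")
    .rollouts(num_rollout_workers=1)
    .training(
        lr=0.003,
        gamma=0.95,
        train_batch_size=20_000,
        model=model_dict,
        # Disable the new API stack, per the suggestion above.
        _enable_learner_api=False,
    )
    .rl_module(_enable_rl_module_api=False)
)

algo = config.build()
print(algo)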