
RLlib: self_play_league_based_with_open_spiel.py #45626


destin-v commented 1 month ago

What happened + What you expected to happen

The example script self_play_league_based_with_open_spiel.py found here does not work. I was running it with the RLModule API enabled (set to true), and it fails with the following error:

.../lib/python3.10/site-packages/ray/rllib/utils/metrics/metrics_logger.py", line 772, in _get_key
    _dict = _dict[key]
KeyError: 'mean_kl_loss'

The MetricsLogger being used is still in the "alpha" stage:

@PublicAPI(stability="alpha")
class MetricsLogger:
    ...

This was run with a plain PPO trainer, without Tune. I looked through the documentation for a way to turn off specific metrics but didn't find one. Either the MetricsLogger needs to be fixed, or I need a way to turn off the metric it is trying to log.
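For reference, "without Tune" here means building the algorithm from the config and stepping it manually; a minimal sketch of that driver loop (the environment and iteration count below are placeholders, not the actual setup from the example script):

from ray.rllib.algorithms.ppo import PPOConfig

# Build the algorithm directly from the config and call train() in a loop,
# instead of handing the config to Ray Tune.
config = PPOConfig().environment("CartPole-v1")
algo = config.build()
for _ in range(3):
    result = algo.train()  # one training iteration; returns a result/metrics dict
    print(result["training_iteration"])
algo.stop()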

Versions / Dependencies

ray 2.23.0

Reproduction script

Running self_play_league_based_with_open_spiel.py found here without Tune and with the new API stack enabled generates the error.

Issue Severity

High: It blocks me from completing my task.

destin-v commented 1 month ago

Upon further investigation I discovered what triggers the error. Below is a simple multi-agent CartPole example where the error shows up. If you pass policies_to_train=["p0"] to .multi_agent() in the PPOConfig, it errors out; if you leave out policies_to_train=["p0"], it runs.

from ray.rllib.algorithms.ppo import PPOConfig
# NOTE: the import path for the example env may differ between Ray versions.
from ray.rllib.examples.envs.classes.multi_agent import MultiAgentCartPole

ppo_config = (
    PPOConfig()
    .environment(MultiAgentCartPole, env_config={"num_agents": 2})
    # Switch both new API stack flags to True (both False by default).
    # This enables:
    # a) RLModule (replaces ModelV2) and Learner (replaces Policy),
    # b) the correct EnvRunner (single-agent vs. multi-agent) and ConnectorV2 support.
    .api_stack(
        enable_rl_module_and_learner=True,
        enable_env_runner_and_connector_v2=True,
    )
    .resources(
        num_cpus_for_main_process=16,
    )
    # Supports arbitrary scaling on the learner axis; set `num_learners`
    # to the number of available GPUs for multi-GPU training
    # (and `num_gpus_per_learner=1`).
    .learners(
        num_learners=0,  # <- set this to the number of GPUs
        num_gpus_per_learner=0,  # <- set this to 1 if you have a GPU
    )
    .training(train_batch_size_per_learner=5000)
    # In a multi-agent env, the usual multi-agent parameters must be set up:
    .multi_agent(
        policies={"p0", "p1"},
        # Map agent 0 to p0 and agent 1 to p1.
        policy_mapping_fn=lambda agent_id, episode, **kwargs: f"p{agent_id}",
        # Removing this line avoids the error described above.
        policies_to_train=["p0"],
    )
)
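To actually hit the failure, the config above only needs to be built and trained for one iteration; a short driver sketch (assuming PPOConfig and MultiAgentCartPole were imported as shown above):

algo = ppo_config.build()
# With policies_to_train=["p0"] present, this call fails inside
# MetricsLogger._get_key with KeyError: 'mean_kl_loss'.
# With that argument removed, the same call completes normally.
result = algo.train()
algo.stop()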