CILAB-CT-GAME / RaidEnv

RaidEnv: Exploring New Challenges in Automated Content Balancing for Boss Raid Games
MIT License

PCGRL/run_experiment.sh errors #2

Closed: jiabinfan closed this issue 2 weeks ago

jiabinfan commented 2 months ago

If I run your PCGRL/run_experiment.sh, I get errors like this:

Version information: ml-agents: 0.29.0, ml-agents-envs: 0.29.0, Communicator API: 1.5.0, PyTorch: 1.7.1
[INFO] Learning was interrupted. Please wait while the graph is generated.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/subprocess_env_manager.py", line 92, in send
    self.conn.send(req)
  File "/opt/conda/lib/python3.7/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/opt/conda/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/opt/conda/lib/python3.7/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/bin/mlagents-learn", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 260, in main
    run_cli(parse_command_line())
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 256, in run_cli
    run_training(run_seed, options, num_areas)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 132, in run_training
    tc.start_learning(env_manager)
  File "/opt/mlagents_envs/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 198, in start_learning
    raise ex
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 173, in start_learning
    self._reset_env(env_manager)
  File "/opt/mlagents_envs/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 105, in _reset_env
    env_manager.reset(config=new_config)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/env_manager.py", line 68, in reset
    self.first_step_infos = self._reset_env(config)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/subprocess_env_manager.py", line 440, in _reset_env
    self.set_env_parameters(config)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/subprocess_env_manager.py", line 457, in set_env_parameters
    ew.send(EnvironmentCommand.ENVIRONMENT_PARAMETERS, config)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/subprocess_env_manager.py", line 94, in send
    raise UnityCommunicationException("UnityEnvironment worker: send failed.")
mlagents_envs.exception.UnityCommunicationException: UnityEnvironment worker: send failed.

Besides, I am very confused about your PCGRL implementation. It seems all you have here is run_experiment.sh, and you pass your training parameters to mlagents in run_experiment.sh via a .yaml file. Where is the code? For example, where do you define actions, state representations, and rewards, and how do you introduce more skills into the state features? If I want to introduce more skills or use different RL settings for actions and rewards, which part of the code should I look at?

bic4907 commented 2 months ago

Sorry for the late response; here are the responses to your questions.

Issue 1: 'UnityEnvironment worker: send failed.' error

(mlagents) D:\RaidEnv-v1\RaidEnv\Build\Win>mlagents-learn pcg_winRate-0.5-1.0.yaml --env MMORPG.exe

            ┐  ╖
        ╓╖╬│╡  ││╬╖╖
    ╓╖╬│││││┘  ╬│││││╬╖
 ╖╬│││││╬╜        ╙╬│││││╖╖                               ╗╗╗
 ╬╬╬╬╖││╦╖        ╖╬││╗╣╣╣╬      ╟╣╣╬    ╟╣╣╣             ╜╜╜  ╟╣╣
 ╬╬╬╬╬╬╬╬╖│╬╖╖╓╬╪│╓╣╣╣╣╣╣╣╬      ╟╣╣╬    ╟╣╣╣ ╒╣╣╖╗╣╣╣╗   ╣╣╣ ╣╣╣╣╣╣ ╟╣╣╖   ╣╣╣
 ╬╬╬╬┐  ╙╬╬╬╬│╓╣╣╣╝╜  ╫╣╣╣╬      ╟╣╣╬    ╟╣╣╣ ╟╣╣╣╙ ╙╣╣╣  ╣╣╣ ╙╟╣╣╜╙  ╫╣╣  ╟╣╣
 ╬╬╬╬┐     ╙╬╬╣╣      ╫╣╣╣╬      ╟╣╣╬    ╟╣╣╣ ╟╣╣╬   ╣╣╣  ╣╣╣  ╟╣╣     ╣╣╣┌╣╣╜
 ╬╬╬╜       ╬╬╣╣      ╙╝╣╣╬      ╙╣╣╣╗╖╓╗╣╣╣╜ ╟╣╣╬   ╣╣╣  ╣╣╣  ╟╣╣╦╓    ╣╣╣╣╣
 ╙   ╓╦╖    ╬╬╣╣   ╓╗╗╖            ╙╝╣╣╣╣╝╜   ╘╝╝╜   ╝╝╝  ╝╝╝   ╙╣╣╣    ╟╣╣╣
   ╩╬╬╬╬╬╬╦╦╬╬╣╣╗╣╣╣╣╣╣╣╝                                             ╫╣╣╣╣
      ╙╬╬╬╬╬╬╬╣╣╣╣╣╣╝╜
          ╙╬╬╬╣╣╣╜
             ╙

 Version information:
  ml-agents: 0.29.0,
  ml-agents-envs: 0.29.0,
  Communicator API: 1.5.0,
  PyTorch: 1.11.0+cu113
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0

The 'Broken pipe' error occurs when the Python client fails to connect to the Unity game client. If the connection succeeds, you will find the following message in the console: [INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0

Please make sure you have mounted the correct build path ($(pwd)/../../../RaidEnv/Build/Linux:/game) to the internal /game path. In my case, the game build was located at RaidEnv/Build/Linux/MMORPG.x86_64.

Otherwise, edit the $(pwd)/../../../RaidEnv/Build/Linux:/game part of the run_experiment.sh file:

docker run --rm -t --gpus all --name $exp_name$exp_seed \
    -v $(pwd):/config \
    -v $(pwd)/../../../RaidEnv/Build/Linux:/game \
    -v /mnt/nas/MMORPG/PCG/PCGRL:/workspace/results \
    inchang/ct_game /bin/bash -c \
    "chmod -R 755 /game && CUDA_VISIBLE_DEVICES=$allocated_gpu mlagents-learn /config/$_file --env /game/MMORPG.x86_64 --num-envs 16 --no-graphics --run-id $exp_name$exp_seed" \
    > ./exp_logs/$exp_name$exp_seed.log 2>&1 &
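Before launching, a quick host-side sanity check (a hypothetical one-liner, assuming the relative layout used in the command above) can confirm the binary actually exists at the path being mounted:

ls "$(pwd)/../../../RaidEnv/Build/Linux/MMORPG.x86_64" \
    || echo "Build not found; fix the -v .../Build/Linux:/game mount or the build output path"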

Please try my suggestion and leave further feedback.

Issue 2: PCGRL Implementation

The PCGRL implementation lives in the following C# file (PCGAgent.cs): RaidEnv\Assets\ML-Agents\Examples\MMORPG\Scripts\Agent\PCGAgent\PCGAgent.cs. In this version of RaidEnv, we use the range, cool time, cast time, and damage parameters, at indices 7, 8, 9, and 15, respectively.

State

The CollectObservations function defines the observation of the PCG agent. This version of RaidEnv uses only four values (because of the homogeneous agent settings). Append lines like the ones below to add more skill parameters.

sensor.AddObservation(GetMinMaxScaledValue(skill[7], thresHold.range[0], thresHold.range[1]));
sensor.AddObservation(GetMinMaxScaledValue(skill[8], thresHold.cooltime[0], thresHold.cooltime[1]));
sensor.AddObservation(GetMinMaxScaledValue(skill[9], thresHold.casttime[0], thresHold.casttime[1]));
sensor.AddObservation(GetMinMaxScaledValue(skill[15], thresHold.value[0], thresHold.value[1]));
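For example, to also observe the projectile speed (index 4, i.e., '5 - 1' in the index listing shown in the Action section below), one more line could be appended. This is only a sketch: the projectilespeed field on MinMaxThreshold is hypothetical and would need to exist (or be added) alongside range, cooltime, casttime, and value.

// Hypothetical: observe projectile speed (skill index 4), min-max scaled like the others.
// Assumes SkillGenerator.MinMaxThreshold defines a projectilespeed {min, max} pair.
sensor.AddObservation(GetMinMaxScaledValue(skill[4], thresHold.projectilespeed[0], thresHold.projectilespeed[1]));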

Action

Each action decreases or increases a skill parameter; the value moves in steps of tickScale (1/6 of each parameter's range). For example, action value 4 for 'cool time' increases it by one full tick, i.e., 1/6 of its range ((max - min) / 6). Each index of _lastGeneratedSkill corresponds to one skill parameter (e.g., range, cool time, cast time, and damage).

public void GenerateSkill(ActionSegment<int> _act)
{

  <omitted>

    // Convert action to float
    float[] act = new float[_act.Length];
    for(int i = 0; i < _act.Length; i++)
    {
        switch(_act[i])
        {
            case 0:
                act[i] = -1.0f;
                break;
            case 1:
                act[i] = -0.5f;
                break;
            case 2:
                act[i] = 0;
                break;
            case 3:
                act[i] = +0.5f;
                break;
            case 4:
                act[i] = +1.0f;
                break;

        }
    }

    SkillGenerator.MinMaxThreshold thresHold = new SkillGenerator.MinMaxThreshold();

    if(changeParameterDirectly == false)
    {
        float tickScale = 6;

        // Get the min/max ranges from MinMaxThreshold in SkillGenerator
        float rangeTick = (thresHold.range[1] - thresHold.range[0]) / tickScale;
        float coolTick = (thresHold.cooltime[1] - thresHold.cooltime[0]) / tickScale;
        float castTimeTick = (thresHold.casttime[1] - thresHold.casttime[0]) / tickScale;
        float valueTick = (thresHold.value[1] - thresHold.value[0]) / tickScale;

        _lastGeneratedSkill[7] += rangeTick * act[0];
        _lastGeneratedSkill[8] += coolTick * act[1];
        _lastGeneratedSkill[9] += castTimeTick * act[2];
        _lastGeneratedSkill[15] += valueTick * act[3];

        _lastGeneratedSkill[7] = ActionLimiting(_lastGeneratedSkill[7], thresHold.range[0], thresHold.range[1]);
        _lastGeneratedSkill[8] = ActionLimiting(_lastGeneratedSkill[8], thresHold.cooltime[0], thresHold.cooltime[1]);
        _lastGeneratedSkill[9] = ActionLimiting(_lastGeneratedSkill[9], thresHold.casttime[0], thresHold.casttime[1]);
        _lastGeneratedSkill[15] = ActionLimiting(_lastGeneratedSkill[15], thresHold.value[0], thresHold.value[1]);
    }
}

You can extend the generated parameters of the PCG agent using the index-value pairs below, taken from L#173 of RaidEnv\Assets\ML-Agents\Examples\MMORPG\Scripts\Generator\SkillGenerator.cs. A parameter's index is its annotated number minus one (e.g., '8 - 1' gives the index of 'range'). To add a parameter:

1. Add the code to the GenerateSkill function with the desired index of the skill parameter. You may adjust the threshold values to reach an appropriate problem complexity.
2. Append to (increase) the action space of 'PCGAgent' in the Unity inspector.

A concrete sketch follows the listing below.

  generatedSkill.Add((float)(int)triggerType); // 1
  generatedSkill.Add((float)(int)magicSchool);  // 2
  generatedSkill.Add((float)(int)hitType);  // 3
  generatedSkill.Add((float)(int)targetType);  // 4
  generatedSkill.Add((float)projectileSpeed);  // 5
  generatedSkill.Add((canUseToAlly == true ? 1.0f : 0.0f));   // 6
  generatedSkill.Add((canUseToEnemy == true ? 1.0f : 0.0f));  // 7
  generatedSkill.Add(range);  // 8
  generatedSkill.Add(cooltime);  // 9
  generatedSkill.Add(casttime);   // 10
  generatedSkill.Add((float)cost);    // 11
  generatedSkill.Add((float)maximumCharge); // Do not touch this  // 12
  generatedSkill.Add((float)maximumCharge);  // 13
  generatedSkill.Add((canCastWhileCasting == true ? 1.0f : 0.0f));  // 14
  generatedSkill.Add((canCastWhileChanneling == true ? 1.0f : 0.0f));  // 15
  generatedSkill.Add(value);  // 16
  generatedSkill.Add(0f); // projectileFX.type // 17
  generatedSkill.Add(0f); // projectileFX.size  // 18
  generatedSkill.Add(1f); // hitCount  // 19
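As a sketch of the two steps above, the agent could additionally tune 'cost' (index 10, i.e., '11 - 1' in the listing). Both the thresHold.cost range and the fifth action branch (act[4]) are hypothetical here: the range would be added to SkillGenerator.MinMaxThreshold, and the extra discrete branch to the PCGAgent in the inspector.

    // Hypothetical extension inside GenerateSkill: a fifth action branch tunes 'cost' (index 10).
    // Assumes a cost {min, max} pair was added to SkillGenerator.MinMaxThreshold
    // and the PCGAgent's discrete action branches were increased from 4 to 5 in the inspector.
    float costTick = (thresHold.cost[1] - thresHold.cost[0]) / tickScale;
    _lastGeneratedSkill[10] += costTick * act[4];
    _lastGeneratedSkill[10] = ActionLimiting(_lastGeneratedSkill[10], thresHold.cost[0], thresHold.cost[1]);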

Reward

Note the CalculateReward function. It takes the previous and current win rates and returns a reward equal to the decrease in the L1 error toward the target.

  protected float CalculateReward(float newValue, float oldValue, float targetValue)
  {
      float oldError = Mathf.Abs(targetValue - oldValue);
      float newError = Mathf.Abs(targetValue - newValue);

      // Return the difference between the new error and the old error
      return oldError - newError;
  }
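For example, with targetValue = 0.5, oldValue = 0.8, and newValue = 0.6, the old error is 0.3 and the new error is 0.1, so the reward is +0.2; moving the win rate away from the target would instead yield a negative reward.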

Best wishes!

jiabinfan commented 2 months ago

Thank you very much for your response. I have a much better understanding of this project now. However, I still have trouble running PCGRL/run_experiment.sh.

[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
[INFO] Connected new brain: PlayerBehavior?team=0
Traceback (most recent call last):
  File "/opt/conda/bin/mlagents-learn", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 260, in main
    run_cli(parse_command_line())
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 256, in run_cli
    run_training(run_seed, options, num_areas)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 132, in run_training
    tc.start_learning(env_manager)
  File "/opt/mlagents_envs/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 173, in start_learning
    self._reset_env(env_manager)
  File "/opt/mlagents_envs/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 107, in _reset_env
    self._register_new_behaviors(env_manager, env_manager.first_step_infos)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 268, in _register_new_behaviors
    self._create_trainers_and_managers(env_manager, new_behavior_ids)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 166, in _create_trainers_and_managers
    self._create_trainer_and_manager(env_manager, behavior_id)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 125, in _create_trainer_and_manager
    trainer = self.trainer_factory.generate(brain_name)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer/trainer_factory.py", line 59, in generate
    trainer_settings = self.trainer_config[behavior_name]
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/settings.py", line 754, in missing
    f"The behavior name {key} has not been specified in the trainer configuration. "
mlagents.trainers.exception.TrainerConfigError: The behavior name PlayerBehavior has not been specified in the trainer configuration. Please add an entry in the configuration file for PlayerBehavior, or set default_settings.

Do you have any suggestions for solving this error? Did I build it incorrectly in the Unity Editor? Or is something missing in some of the config files?

This is how I built the project.


bic4907 commented 2 months ago

Sorry for the confusion about building the game client. Here's the solution.

You only need to check the MMORPG_trainingPCG scene in the 'Build Settings' window. The MA scene uses 'PlayerBehavior' as the agent name and the PCG scene uses 'PCGAgent', respectively. If you see the 'PlayerBehavior' agent name when running mlagents-learn, the scene was built the wrong way. So, build only the 'trainingPCG' scene to run the PCG experiment.

mlagents.trainers.exception.TrainerConfigError: The behavior name PlayerBehavior has not been specified in the trainer configuration. Please add an entry in the configuration file for PlayerBehavior, or set default_settings.

This error may occur when default_settings: null is set in the YAML file (note this yaml file). Just leave default_settings as null, and check that the behavior name of the "PCGAgent" GameObject is "PCGBehavior". Here is the example:

[screenshot: photo_2024-04-23 11 01 28]
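On the YAML side, the structure looks like the following minimal sketch; the key under behaviors must match the Behavior Name set in the inspector, and the hyperparameter values here are illustrative rather than the repository's actual settings:

# Illustrative sketch, not the repository's actual config
default_settings: null
behaviors:
  PCGBehavior:            # must match the Behavior Name in the Unity inspector
    trainer_type: ppo
    max_steps: 500000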

If you set the name properly, you will see the behavior name in the command line when you run the experiment:

[INFO] Connected new brain: PCGBehavior?team=0
[INFO] Connected new brain: PCGBehavior?team=0
[INFO] Connected new brain: PCGBehavior?team=0
...

Please try this solution and let me know if there are additional issues.

jiabinfan commented 2 months ago

Thank you very much for your previous suggestions and the clear explanation. I built only the trainingPCG scene, and the previous error did not occur this time. However, I never see "[INFO] Connected new brain: PCGBehavior?team=0". I think the Behavior Parameters I see are very similar to yours.

[screenshot: Behavior Parameters]

Now I am getting errors like this. Is it still a build error?

What I am doing is:

1. git clone your code.
2. I added this RaidEnv directory to Unity projects.
   [screenshot]
3. I installed the two packages provided by your code.
   [screenshot]
4. Then I built it in this way:
   [screenshot]
5. I uploaded everything to a remote Linux server and ran PCGRL/run_experiment.sh.

[INFO] Connected to Unity environment with package version 2.2.1-exp.1 and communication version 1.5.0
[ERROR] SubprocessEnvManager had workers that didn't signal shutdown
[ERROR] A SubprocessEnvManager worker did not shut down correctly so it was forcefully terminated.
.....
[ERROR] A SubprocessEnvManager worker did not shut down correctly so it was forcefully terminated.
Traceback (most recent call last):
  File "/opt/conda/bin/mlagents-learn", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 260, in main
    run_cli(parse_command_line())
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 256, in run_cli
    run_training(run_seed, options, num_areas)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/learn.py", line 132, in run_training
    tc.start_learning(env_manager)
  File "/opt/mlagents_envs/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 173, in start_learning
    self._reset_env(env_manager)
  File "/opt/mlagents_envs/mlagents_envs/timers.py", line 305, in wrapped
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/trainer_controller.py", line 105, in _reset_env
    env_manager.reset(config=new_config)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/env_manager.py", line 68, in reset
    self.first_step_infos = self._reset_env(config)
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/subprocess_env_manager.py", line 446, in _reset_env
    ew.previous_step = EnvironmentStep(ew.recv().payload, ew.worker_id, {}, {})
  File "/opt/conda/lib/python3.7/site-packages/mlagents/trainers/subprocess_env_manager.py", line 101, in recv
    raise env_exception
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
    The environment does not need user interaction to launch
    The Agents' Behavior Parameters > Behavior Type is set to "Default"
    The environment and the Python interface have compatible versions.
If you're running on a headless server without graphics support, turn off display by either passing --no-graphics option or build your Unity executable as server build.

bic4907 commented 2 months ago

Could you leave a comment after trying these methods?

My team members are also investigating whether there is any error at run time. I'll let you know when the debugging is finished in a few days. If the issue is related to environment bugs, I'll check whether I can give you the next version of RaidEnv.

Regards

jiabinfan commented 2 months ago

Thank you very much for your help. For the first method, raidenv:v2 is not public, so I cannot pull it. Maybe you can make this image public so I can try it.

For the second method, I do not see a file similar to "Player-0.log". I guess I have not reached this step.

bic4907 commented 2 months ago

We have fixed the environment bugs in the main branch. Please pull the branch and re-build the Unity project.
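From your local clone, that would look something like this (assuming the remote is named origin and you are on the main branch):

git pull origin main   # then re-build the MMORPG_trainingPCG scene in the Unity Editor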

For the first method, I have uploaded raidenv:v2 as the latest version, so you can ignore the comment.

For the second method, you can find the run_logs directory in the experiment directory that is mounted to the Docker container. In my case, I mount the /mnt/nas/MMORPG_test/PCG/PCGRL directory to the workspace with the argument -v /mnt/nas/MMORPG_test/PCG/PCGRL:/workspace/results. You can find the Player-*.log files, one per process.
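For instance, on the host the player logs could be located under the mounted results directory like this (paths as in the example above):

find /mnt/nas/MMORPG_test/PCG/PCGRL -name 'Player-*.log'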

The content should be like this:

Mono path[0] = '/game/MMORPG_Data/Managed'
Mono config path = '/game/MMORPG_Data/MonoBleedingEdge/etc'
Preloaded 'lib_burst_generated.so'
Preloaded 'libgrpc_csharp_ext.x64.so'
Unable to load player prefs
Initialize engine version: 2020.3.25f1 (9b9180224418)
[Subsystems] Discovering subsystems at path /game/MMORPG_Data/UnitySubsystems
Forcing GfxDevice: Null
GfxDevice: creating device client; threaded=0
NullGfxDevice:
    Version:  NULL 1.0 [1.0]
    Renderer: Null Device
    Vendor:   Unity Technologies
Begin MonoManager ReloadAssembly
...
Fallback handler could not load library /game/MMORPG_Data/Mono/libSystem.dylib
Fallback handler could not load library /game/MMORPG_Data/Mono/libcoreclr.so
Fallback handler could not load library /game/MMORPG_Data/Mono/libcoreclr.so
Fallback handler could not load library /game/MMORPG_Data/Mono/libcoreclr.so
Fallback handler could not load library /game/MMORPG_Data/Mono/libSystem.dylib
Fallback handler could not load library /game/MMORPG_Data/Mono/libSystem.dylib.so
Fallback handler could not load library /game/MMORPG_Data/Mono/libSystem.dylib
[ParameterManagerSingleton]
pcgHeuristic : False
pcgSaveEpisodeLimit : 0
pcgRandom : False
pcgSaveCreatedSkill : True
runId : pcg_winRate-0.4-1.0
logPath : /workspace/results/pcg_winRate-0.4-1.0/
ERROR: Shader TextMeshPro/Sprite shader is not supported on this GPU (none of subshaders/fallbacks are suitable)
ERROR: Shader UI/Default shader is not supported on this GPU (none of subshaders/fallbacks are suitable)

Please re-build the game with the new source code and try the second method. If the log grows over 10 KB with repeated errors, please leave a comment with the log.

Wishes!