ray-project / ray

Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
https://ray.io
Apache License 2.0

Issue on page /rllib/index.html #25297

Closed ChristosPeridis closed 2 years ago

ChristosPeridis commented 2 years ago

Hello dear members of the Ray team,

I am an MSc student studying AI. I am trying to compare the performance metrics of several well-known DRL algorithms (PPO, SAC, A2C, etc.) in a custom gym environment. I discovered the RLlib API, and it seems to be the perfect fit for the work I want to do. However, when I try to run an experiment using tune.run with PPO as the algorithm of choice, I get the following error:

TuneError: ('Trials did not complete', [PPO_CustomEnv-v0_a2286_00000])

This happens with any of the algorithms I want to conduct experiments with.
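For reference, here is a minimal sketch of roughly what my setup looks like (the environment class, its spaces, and the config values are simplified stand-ins for my actual custom environment and settings):

```python
import gym
import ray
from ray import tune
from ray.tune.registry import register_env


# Hypothetical minimal environment standing in for my real custom gym env.
class CustomEnv(gym.Env):
    def __init__(self, env_config=None):
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,))
        self.action_space = gym.spaces.Discrete(2)

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        # Dummy transition: random observation, zero reward, episode ends.
        return self.observation_space.sample(), 0.0, True, {}


# Register the env under a string name so RLlib can construct it on each worker.
register_env("CustomEnv-v0", lambda env_config: CustomEnv(env_config))

ray.init()
analysis = tune.run(
    "PPO",
    config={
        "env": "CustomEnv-v0",
        "framework": "torch",
        "num_workers": 1,
    },
    stop={"training_iteration": 10},
)
```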

I also tried registering the environment with RLlib using the register_env() function and then using PPOTrainer directly to run some experiments, but this did not work either.
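That attempt looks roughly like the following (again a simplified sketch, reusing the hypothetical CustomEnv class and registration from the snippet above):

```python
from ray.rllib.agents.ppo import PPOTrainer
from ray.tune.registry import register_env

# Assumes the CustomEnv class from the previous sketch is defined in scope.
register_env("CustomEnv-v0", lambda env_config: CustomEnv(env_config))

trainer = PPOTrainer(config={"env": "CustomEnv-v0", "num_workers": 1})
result = trainer.train()
print(result["episode_reward_mean"])
```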

Could you help me find out what the problem might be? What is the best way to integrate custom envs with Ray RLlib so that I can run the experiments smoothly?

Thank you very much in advance!

Kind regards,

Christos Peridis

tupui commented 2 years ago

Hi @ChristosPeridis, thank you for reporting. I will redirect you to either the Slack channel or the Discuss forum, which are more appropriate places for usage-related questions.

I will close the issue for now, but if, after investigating and asking on those forums, there does turn out to be a bug, I would be happy to reopen this issue with further details.