Closed by Royalvice 11 months ago
Thank you again. I currently have no more questions.
Thank you very much for your recognition of our work. OmniSafe currently supports custom environments. Specifically:
- We added support for the custom environment MetaDrive in "feat: support MetaDrive interface" #263, which can serve as a valuable reference for you.
- We have provided relevant instructions in the project's main README on our homepage.
- We successfully resolved a similar issue in "How to apply omisafe framework to a customized environment?" #255, and we hope you find it helpful.
However, if you still have questions about the customization of OmniSafe's environment after reading the above information, or if you encounter unexpected errors during the process, please feel free to provide more detailed information so that we can better assist you in resolving your issues.
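To make the interface shape concrete, here is a minimal, dependency-free sketch of a custom environment. The class and method names are hypothetical (this is not OmniSafe's actual registration API; see the issues linked above for that). It follows the Gymnasium-style `reset`/`step` convention that OmniSafe's environment layer builds on, where `step` additionally returns a `cost` term, the SafeRL-specific safety signal.

```python
import random


class UAVCommEnv:
    """Hypothetical custom-environment sketch (not OmniSafe's real API).

    step() returns (obs, reward, cost, terminated, truncated, info),
    where `cost` is the SafeRL-specific safety-violation signal.
    """

    def __init__(self, max_steps=100):
        self.max_steps = max_steps
        self._t = 0
        self._state = 0.0

    def reset(self, seed=None):
        if seed is not None:
            random.seed(seed)
        self._t = 0
        self._state = 0.0
        return self._state, {}  # (observation, info)

    def step(self, action):
        self._t += 1
        self._state += action
        reward = -abs(self._state)                      # e.g. stay near the origin
        cost = 1.0 if abs(self._state) > 5.0 else 0.0   # 1.0 when a safety bound is violated
        terminated = False
        truncated = self._t >= self.max_steps
        return self._state, reward, cost, terminated, truncated, {}
```

The key design point is the extra `cost` entry in the `step` return: safe-RL algorithms constrain the expected cumulative cost, so your custom reward function and cost signal should be kept separate rather than folded into one reward.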
How should I proceed if the action space of the environment I want to design is discrete, but OmniSafe only accepts continuous action spaces of the Box class?
Currently, OmniSafe does not support environments with discrete action spaces. We plan to add support in a future version, since discrete environments also matter a great deal in the SafeRL area. We are sorry that OmniSafe does not meet your requirements at the moment.
I would like to quickly run the safe-rl algorithm in my personal environment. Is it feasible to discretize the actions directly in the 'step' function?
I look forward to your prompt reply.
If you discretize actions directly within the step function, you need to consider whether the current algorithm supports discrete inputs. The algorithms we have implemented so far cannot handle discrete action inputs directly, mainly because most of the algorithms' original authors did not specify their performance in discrete environments. Supporting discrete action inputs is on our roadmap, but it will be implemented in a future version.
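For reference, the ad-hoc discretization discussed here usually amounts to binning the continuous Box action into one of N discrete choices inside `step`. A minimal, library-free sketch of that mapping (the function name and defaults are illustrative, not part of OmniSafe):

```python
def box_to_discrete(action, low=-1.0, high=1.0, n_bins=4):
    """Map a continuous scalar action in [low, high] to a bin index in [0, n_bins - 1].

    This lets a continuous-action algorithm drive a discrete environment,
    but the actor's gradient signal is still taken with respect to the
    continuous action, which is one reason results can be unsatisfactory.
    """
    # Clamp into the Box bounds, normalize to [0, 1], then scale to a bin index.
    clamped = min(max(action, low), high)
    frac = (clamped - low) / (high - low)
    return min(int(frac * n_bins), n_bins - 1)
```

Inside a custom environment, `step` would call this on the incoming Box action and then execute the resulting discrete action. The binning is non-differentiable and piecewise constant, which is consistent with the caveat above that such a workaround may not yield good results.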
If this feature is crucial for you, we would greatly appreciate it if you could take the lead in adding it to OmniSafe, and we would also welcome your participation in its development.
According to our original roadmap, we will have this feature updated by early October.
Thank you for your response. As you mentioned, discretizing the actions predicted by the actor and feeding them directly to the environment did not yield satisfactory results. This feature is indeed important to me, and although I am a newcomer to RL, I will do my best to contribute to OmniSafe if possible. Thank you again.
Feel free to reopen if you have further issues.
Required prerequisites
Questions
Thank you very much for your contribution to this valuable repository.
I would like to quickly apply the efficient safe-RL algorithms implemented in this repository to my own environment. Specifically, I have created a custom Unmanned Aerial Vehicle communication environment from scratch, including custom state and action spaces as well as a custom reward function. I would like to convert my custom environment into the API format accepted by this repository, but I haven't found many tutorials on creating custom environments. Could you please advise me on how to proceed, and recommend any resources for me to refer to?
Once again, thank you for your efforts, and I look forward to your response. Thank you!