Gor-Ren / gym-jsbsim

A reinforcement learning environment for aircraft control using the JSBSim flight dynamics model
MIT License
178 stars · 87 forks

Make JSBSim directory configurable #4

Open Gor-Ren opened 5 years ago

Gor-Ren commented 5 years ago

The code contains a magic string hard-coding my personal JSBSim install directory from development:

https://github.com/Gor-Ren/gym-jsbsim/blob/02b1882c0c1950d288b6a38d0b4f6e679c754132/gym_jsbsim/simulation.py#L15

Make this configurable, e.g. through some kind of ini file that gets read.
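One way to do this might look like the sketch below: check an environment variable first, then an ini file, then fall back to the current hard-coded default. The env-var name (`JSBSIM_ROOT_DIR`), config filename (`gym_jsbsim.ini`), and section/key names are illustrative, not part of the actual gym-jsbsim API.

```python
import configparser
import os

# Current hard-coded default from simulation.py, kept as the last resort.
DEFAULT_ROOT = os.path.expanduser("~/apps/jsbsim")


def get_jsbsim_root(config_path: str = "gym_jsbsim.ini") -> str:
    """Resolve the JSBSim root dir: env var, then ini file, then default."""
    # 1. Environment variable takes priority (hypothetical name).
    env_root = os.environ.get("JSBSIM_ROOT_DIR")
    if env_root:
        return env_root
    # 2. Optional ini file with a [jsbsim] section (hypothetical layout).
    parser = configparser.ConfigParser()
    if parser.read(config_path):
        root = parser.get("jsbsim", "root_dir", fallback=None)
        if root:
            return root
    # 3. Fall back to the existing default.
    return DEFAULT_ROOT
```

An env-var override is often enough on its own; the ini file mainly helps users who can't easily modify their shell environment.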

GermanInfinity commented 3 years ago

I'm having issues making the environment with `env = gym.make('JSBSim-TurnHeadingControl-Cessna172P-SHAPING.STANDARD-NoFG-v0')`; I get an error saying there are no registered environments with that ID. Could this issue be a possible cause?

Gor-Ren commented 3 years ago

@GermanInfinity nah, it shouldn't be; it's more likely a problem with the ID string.

Try the solution here to list all environment IDs and copy-paste the one you want: https://stackoverflow.com/questions/48980368/list-all-environment-id-in-openai-gym

GermanInfinity commented 3 years ago

Oh wow, thank you so much for your quick response; that actually worked. Now the code attempts to look for your own root directory and fails with: `OSError: Can't find root directory: /home/gordon/apps/jsbsim`

I have opened gym-jsbsim/gym_jsbsim/simulation.py and can change this ROOT_DIRECTORY, but I am not sure what to change it to. I changed it to the location of the site-packages directory, but the code cannot find the aircraft models there, as it is looking for this file: /aircraft/A320/A320.xml

`JSBSim failed to open the configuration file: Path "/Users/Chioma_N/Desktop/ML/ML/lib/python3.7/site-packages/gymjsbsim/aircraft/A320/A320.xml"`

I am not exactly sure where that XML file is located.

Please could you clarify, thank you!

GermanInfinity commented 3 years ago

Oh I see, it's from the actual JSBSIM package. Let me try that!

GermanInfinity commented 3 years ago

Thanks, I got it to work! I'm fairly new to RL, so could you kindly confirm one more thing: what is the size of the np.array for the action input? And would it be possible to provide an optimal working agent for this task, just so I can observe the outputs of the RL system and the inputs from the agent?

GermanInfinity commented 3 years ago

Thank you for your work, it's awesome. I am also working on my MSc dissertation, and my plan is to build a verified neural network that models the behaviour of a trained agent in this RL use case.

Gor-Ren commented 3 years ago

Action inputs are of size 3; please see the README.
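A minimal sketch of building such an action, assuming (per the reply above) a 3-element np.array. The control names here are illustrative; see the README for the actual mapping.

```python
import numpy as np


def make_action(aileron: float, elevator: float, rudder: float) -> np.ndarray:
    """Pack three control commands into a shape-(3,) action array."""
    return np.asarray([aileron, elevator, rudder], dtype=np.float32)


# Example: neutral ailerons and rudder, slight nose-down elevator.
action = make_action(0.0, -0.1, 0.0)
```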

Sorry, I don't have any trained agents - I deleted my old results. Also, the environment is too complex to have a formally optimal agent.

Good luck with your dissertation! :)

GermanInfinity commented 3 years ago

Thanks a lot!

GermanInfinity commented 3 years ago

Hey Gordon, I was able to train an agent, yay! However, the rendering is not so great. Are there any other visualization options? Could we get that plane video you have on Google Drive?