Open ammohamedds opened 2 years ago
Hello @ammohamedds, thanks for the feedback.
To get started, use the scripts in /examples.
The scripts in /experiments are not maintained. Most of them work, but for others you need to update hard-coded file paths etc. In particular, the scripts in /experiments/simulations rely on hard-coded paths: that directory contains scripts for simulating the game with already-trained policies, so it uses paths to pre-trained policies that must be loaded before the simulation. An example training script that you can use is: https://github.com/Limmen/gym-idsgame/blob/master/experiments/training/v21/minimal_defense/ppo_openai/run.py. Note: after you run the script, the configuration is cached in a file called "config.json"; if you want to change the configuration, delete this file and it will be re-generated.
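As a minimal sketch of clearing that cache, assuming "config.json" is written to the script's working directory (the exact location is an assumption, not confirmed by the repo):

```python
import os

# Remove the cached configuration so it is re-generated on the next run.
# Path is an assumption: config.json created in the current working directory.
cache_file = "config.json"
if os.path.exists(cache_file):
    os.remove(cache_file)
```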
Again, I do not recommend using the experiments folder directly. It can serve as inspiration for writing your own experiments, but you then have to change the file paths etc. to make it work. To get started, use the scripts in /examples, e.g.: https://github.com/Limmen/gym-idsgame/blob/master/examples/ppo.py. I just tested this script and it works. I will also help you fix any errors you encounter if you use the scripts in /examples; I will not do the same for /experiments.
See https://github.com/Limmen/gym-idsgame/blob/master/examples/ppo.py for example. /Kim
Hello, thanks for your interesting tool. I have a few points that need clarification:
First: I got the error below while trying to run experiments in simulations:

    Traceback (most recent call last):
      File "/idsgame/gym-idsgame/experiments/simulations/v0/attack_maximal_vs_defend_minimal/run.py", line 81, in <module>
        util.create_artefact_dirs(config.output_dir)
    TypeError: create_artefact_dirs() missing 1 required positional argument: 'random_seed'

    Process finished with exit code 1
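For illustration, here is a minimal reproduction of that error and its likely fix. The function below is a stand-in with a signature inferred from the error message (which names a required positional argument 'random_seed'); it is not the library's actual code.

```python
# Hypothetical stand-in for util.create_artefact_dirs, signature inferred
# from the TypeError above.
def create_artefact_dirs(output_dir: str, random_seed: int) -> None:
    print(f"creating artefact dirs under {output_dir} for seed {random_seed}")

try:
    # The failing call: only one argument is passed.
    create_artefact_dirs("/tmp/out")
except TypeError as e:
    print(e)  # missing 1 required positional argument: 'random_seed'

# The fixed call supplies the seed explicitly.
create_artefact_dirs("/tmp/out", random_seed=0)
```

So the caller in run.py needs a second argument (a seed) added to match the current function signature.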
Second: I also got the error below while trying to run experiments in training:

    (idsgame) Workstation-PC:~/idsgame/gym-idsgame/experiments/training/v7/two_agents/actor_critic$ sudo ./run.sh
      File "run.py", line 21
        def default_output_dir() -> str:
                                 ^
    SyntaxError: invalid syntax
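A likely cause of this second error (an assumption, since the environment is not shown in full): return-type annotations such as `-> str` are Python 3 syntax, and `sudo` often drops the conda environment and invokes the system Python 2, which fails with exactly this SyntaxError. A quick sketch of the syntax in question plus a guard you could add at the top of a script:

```python
import sys

# Return-type annotations ("-> str") are only valid in Python 3;
# Python 2 raises SyntaxError at the annotation, matching the error above.
def default_output_dir() -> str:
    return "output"

# Fail loudly instead of with a confusing SyntaxError when the wrong
# interpreter is used.
if sys.version_info < (3, 0):
    raise RuntimeError("This script requires Python 3")

print(default_output_dir())
```

Running the script with an explicit Python 3 interpreter (and without `sudo`, so the conda environment stays active) should avoid the error.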
Third: Where are the experiences through which agents can be trained by PPO (mentioned in the paper)? In /examples, PPO is not working, but tabular_q_learning.py works.