Closed Lilyoung2000 closed 1 year ago
The visualization code is available in this file: https://github.com/mit-acl/gym-collision-avoidance/blob/release/gym_collision_avoidance/envs/visualize.py
The training code is available here: https://github.com/mit-acl/rl_collision_avoidance
Thank you for your reply. With your help I was able to run some cases, but I found that with 10 agents my results differ from the ones in your paper. I have observed carefully; is this difference caused by the agent radius, or have the goal locations changed? Below are my results for GA3C-CADRL with 10 agents.
I think there's a "small test suite" experiment configuration that contains the settings for the scenarios in the paper. Those should specify the (start, goal, radius, pref speed) of each agent.
For instance, this config has the settings for the small test suite: https://github.com/mit-acl/gym-collision-avoidance/blob/903564097509e3fbdbbb850a3a89729a28377b81/gym_collision_avoidance/envs/config.py#L227
and this experiment script could be changed to use that small config: https://github.com/mit-acl/gym-collision-avoidance/blob/release/gym_collision_avoidance/experiments/src/run_full_test_suite.py
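If it helps, here is a minimal sketch of how one might point the experiment script at a different config class. This assumes the repo's convention of selecting the active `Config` class via an environment variable before importing the package; the variable name `GYM_CONFIG_CLASS` and the class name `SmallTestSuite` are assumptions, so check the repo's README and `config.py` for the exact names used in your version.

```python
import os

# Assumed mechanism: gym-collision-avoidance reads this environment variable
# at import time to decide which Config class in config.py to use.
# "SmallTestSuite" is a placeholder; use the actual class name defined
# near the linked line of config.py.
os.environ["GYM_CONFIG_CLASS"] = "SmallTestSuite"

# Import the package only AFTER setting the variable, then run the
# experiment script as usual, e.g.:
#   python -m gym_collision_avoidance.experiments.src.run_full_test_suite
```

That config class is where the per-agent (start, goal, radius, pref speed) settings from the paper's scenarios should live, so editing it (or subclassing it) is likely the cleanest way to reproduce the 10-agent cases.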
Your suggestion is very helpful, thank you from the bottom of my heart. What do you think of adding a multi-head attention mechanism to GA3C-CADRL; would that be a good idea? May I also ask whether you plan to continue research in this area (multi-agent collision avoidance), or do you consider the work here essentially complete? Good luck with your work.
There is still a lot of research to be done in this area.
Hello, Mr. Everett. I would like to know where the visualization script is in the current project, so that I can change colors, etc. Also, is there a Python file in the project that I can use directly for training? I saw the instructions for training a new policy in your description, but I just started college and don't have the ability to program it myself. Could you give me some advice on how to train a new policy? Looking forward to your reply, thanks.