SJWang2015 / AEMCARL

Reinforcement Learning Based Collision Avoidance with Adaptive Environment Modeling for Crowded Scenes
MIT License

Code for AEMCARL #7

Open Sahil177 opened 1 year ago

Sahil177 commented 1 year ago

Dear owners,

As far as I can tell, the code for your proposed AEMCARL solution doesn't seem to be included in this repo. Would it be possible to add it?

Many thanks,

Sahil

SJWang2015 commented 1 year ago

The AEMCARL code is located at line 90 of `AEMCARL/crowd_nav/common/qnetwork_factory.py`.
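For readers skimming the thread, here is a minimal sketch of the flag-based factory pattern that answer appears to describe; the function name `build_q_network` and both network bodies are illustrative assumptions, not the repository's actual code:

```python
# Hypothetical sketch of a flag-based network factory, mirroring the
# pattern implied for crowd_nav/common/qnetwork_factory.py. The names
# and architectures below are assumptions for illustration only.
import torch.nn as nn

def build_q_network(test_policy_flag: int, input_dim: int, hidden_dim: int) -> nn.Module:
    """Select a value-network variant by flag; flag 5 would pick AEMCARL."""
    if test_policy_flag == 5:
        # Recurrent (GRU-based) variant, standing in for the adaptive
        # environment-modeling network selected by flag 5.
        return nn.GRU(input_dim, hidden_dim, batch_first=True)
    # Fallback: a plain MLP baseline.
    return nn.Sequential(
        nn.Linear(input_dim, hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, 1),
    )
```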

Sahil177 commented 1 year ago

Ah, I see, thanks. So the train command in the Getting Started section does in fact train your AEMCARL network, since the test policy flag is set to 5?

Sahil177 commented 1 year ago

Another question: what test case/simulation setup did you use to produce the video in the repo's README with many agents in `crowd_sim`? Also, how did you generate the extra graphic that shows how many GRU units are in use? Thanks!

SJWang2015 commented 1 year ago

The test case is configured with the command `python test.py --policy actenvcarl --test_policy_flag 5 --multi_process self_attention --agent_timestep 0.4 --human_timestep 0.5 --reward_increment 2.0 --position_variance 2.0 --direction_variance 2.0 --model_dir data/output --phase test --visualize --test_case 0`. The GRU visualization code is at line 777 of https://github.com/SJWang2015/AEMCARL/blob/main/crowd_sim/envs/crowd_sim_video.py
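For anyone reimplementing that overlay rather than reading the linked renderer, here is a minimal sketch of how a "GRU units in use" counter can be drawn on a matplotlib animation; the fake data, variable names, and layout are assumptions, not AEMCARL's actual rendering code:

```python
# Hypothetical sketch: overlay a per-step "GRU units in use" counter on
# an animated scatter of agent positions. All data here is synthetic.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)

# Fake per-step data: (agent positions, number of active GRU units).
rng = np.random.default_rng(0)
steps = [(rng.uniform(-5, 5, size=(17, 2)), int(rng.integers(1, 9)))
         for _ in range(50)]

scat = ax.scatter([], [])
label = ax.text(0.02, 0.95, "", transform=ax.transAxes)

def update(i):
    positions, active_units = steps[i]
    scat.set_offsets(positions)                      # move the agents
    label.set_text(f"GRU units in use: {active_units}")  # the overlay
    return scat, label

anim = animation.FuncAnimation(fig, update, frames=len(steps), interval=100)
plt.show()
```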

Sahil177 commented 1 year ago

Thanks. This test command only produces 5 agents; the sim in the video has 17, I think, and the motion patterns are different from test case 0. I tried the same command with `--human_num 17` added, but the motion pattern is still different from the one shown in the video.

SJWang2015 commented 1 year ago

You can change the `human_num` setting in https://github.com/SJWang2015/AEMCARL/blob/202f3f1ce67f24a7b29276496ee0ed4f91683f96/crowd_nav/configs/env.config
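For completeness, a minimal sketch of reading and overriding that value with Python's `configparser`, assuming a CrowdNav-style `[sim]` section (the section and key names are not verified against this repo):

```python
# Hypothetical sketch: bump human_num in env.config via configparser.
# The [sim] section and human_num key are assumptions based on the
# CrowdNav config layout this repo appears to follow.
import configparser

config = configparser.ConfigParser()
config.read("crowd_nav/configs/env.config")

print(config.get("sim", "human_num"))  # inspect the current value
config.set("sim", "human_num", "17")   # increase the crowd size

with open("crowd_nav/configs/env.config", "w") as f:
    config.write(f)
```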