Open why0504 opened 1 year ago
See #22
Hi, I've seen https://github.com/Shuijing725/CrowdNav_DSRNN/issues/22. If I want to train two robots, I add a second robot instance in both crowd_sim/envs/crowd_sim.py and crowd_sim/envs/crowd_sim_dict.py, as follows:
```python
rob_RL = Robot(config, 'robot')
rob2_RL = Robot(config, 'robot2')
self.set_robot(rob_RL)
self.set_robot(rob2_RL)
```

```python
ob['robot_node2'] = self.robot2.get_full_state_list_noV()
```
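One pitfall with the snippet above is that calling `set_robot` twice will simply overwrite a single `self.robot` attribute. A minimal, hypothetical sketch of an alternative (this is a standalone illustration, not the actual CrowdNav_DSRNN code; the class and method names mirror the repo but the bodies are placeholders):

```python
# Hypothetical sketch: store robots in a list so set_robot() can be
# called any number of times, and build the observation dict by index.
class Robot:
    def __init__(self, config, name):
        self.config = config
        self.name = name

    def get_full_state_list_noV(self):
        # placeholder state: [px, py, radius, gx, gy, theta]
        return [0.0, 0.0, 0.3, 5.0, 5.0, 0.0]


class CrowdSimMulti:
    def __init__(self):
        self.robots = []

    def set_robot(self, robot):
        # append instead of overwriting a single self.robot attribute
        self.robots.append(robot)

    def generate_ob(self):
        ob = {}
        for i, robot in enumerate(self.robots):
            # first robot keeps the original key so single-robot code still works
            key = 'robot_node' if i == 0 else f'robot_node{i + 1}'
            ob[key] = robot.get_full_state_list_noV()
        return ob


env = CrowdSimMulti()
env.set_robot(Robot(None, 'robot'))
env.set_robot(Robot(None, 'robot2'))
ob = env.generate_ob()
# ob now contains both 'robot_node' and 'robot_node2'
```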
I assume that both robots share the same reinforcement learning policy, and that the episode ends when both robots reach the same goal point. Following the training setup for the first robot, which parts of the project do I need to modify? Is it enough to change the environment code, e.g. the related functions in crowd_sim_dict.py? Does the render() function in crowd_sim.py also need to be modified? And do I need to add code to train.py? Sorry to bother you. Thank you so much!
Yes, besides the gym environment, multiple modifications are needed. For example, you will probably need to modify the main scripts, including train.py and test.py.
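One concrete change in the main loop is the termination condition: with two robots, the episode should end only when both have reached the goal. A minimal sketch under that assumption (the function names and the dict-based robot representation here are hypothetical; in the repo the robot state lives in a class with px/py/gx/gy/radius attributes):

```python
import math

# Hypothetical sketch: the episode terminates only when every robot is
# within its radius of the (shared) goal position.
def reaching_goal(robot):
    return math.hypot(robot['px'] - robot['gx'],
                      robot['py'] - robot['gy']) < robot['radius']

def episode_done(robots):
    return all(reaching_goal(r) for r in robots)

robots = [
    {'px': 5.0, 'py': 5.0, 'gx': 5.0, 'gy': 5.0, 'radius': 0.3},  # at goal
    {'px': 0.0, 'py': 0.0, 'gx': 5.0, 'gy': 5.0, 'radius': 0.3},  # not yet
]
episode_done(robots)  # False: the second robot is still far from the goal
```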
Also, the network and PPO code in the pytorchBaselines folder needs to be adapted; for example, the DSRNN network and the RL replay buffer are designed only for single-robot scenarios.
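If the robots share one policy, one common way to adapt the buffer is to store an extra "agent" dimension and flatten it into the batch dimension at update time. A hypothetical sketch under that assumption (this class does not exist in the repo; it only illustrates the shape bookkeeping):

```python
import numpy as np

# Hypothetical sketch: a rollout buffer with an extra num_robots axis,
# so one buffer serves several robots that share a single policy.
class MultiRobotRolloutBuffer:
    def __init__(self, num_steps, num_robots, obs_dim):
        self.obs = np.zeros((num_steps, num_robots, obs_dim), dtype=np.float32)
        self.rewards = np.zeros((num_steps, num_robots), dtype=np.float32)
        self.step = 0

    def insert(self, obs_per_robot, rewards_per_robot):
        # obs_per_robot: (num_robots, obs_dim), rewards_per_robot: (num_robots,)
        self.obs[self.step] = obs_per_robot
        self.rewards[self.step] = rewards_per_robot
        self.step += 1

    def flatten_for_update(self):
        # treat each robot's trajectory as extra batch samples for PPO,
        # which is valid when all robots share one policy
        return self.obs.reshape(-1, self.obs.shape[-1])


buf = MultiRobotRolloutBuffer(num_steps=4, num_robots=2, obs_dim=6)
for _ in range(4):
    buf.insert(np.random.randn(2, 6), np.zeros(2))
batch = buf.flatten_for_update()  # shape (8, 6)
```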
Besides changing our repo, another way to achieve your goal is to look at open-source implementations of other multi-agent social navigation papers; for example, you can search for works that use environments like this one.
Hi, I like your work very much. I currently have an idea for collaborative tasks among multiple robot agents, so I would like to ask: how can I implement multiple robot agents in the environment based on this GitHub repository? Do you have any suggestions?