laker-sprus opened 5 years ago
P.S. It seems positive rewards (> 0) work better than negative rewards (< 0); is that right? In the training process, which is more important: the rewards or the penalties?
By default, the direction system is disabled because we found it is hard for agents to learn. https://github.com/geek-ai/MAgent/blob/92256aa44669a7cef66e942c464cfe289f1dcba6/src/gridworld/GridWorld.cc#L21
Originally we designed our agents with a direction and a sector-shaped view range, but we found the agents could not learn how to deal with directions. (In a map, it is hard even for a human being.) So in all our examples, the direction and turn actions are disabled: every agent faces north and has a circular view range.
Thank you for your reply. It seems the framework defines an agent's 12 move actions in world coordinates, which is enough for my use. If an agent has a view range, how can I obtain the IDs of the agents within that view range? It seems `env.get_observation()` does not provide this information.
Thanks for your reply. However, `obs[i] = env.get_observation(handles[i])` returns something like `array([[ 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,`
How can I obtain the velocity information of one agent?
Where can I add (or remove, or modify) the original actions in the code? I hope for your reply to the questions above. Thank you very much.
I can only point you to some related code; you will have to modify the C++ code yourself. Hopefully you are okay with C++.
Another method: we can get the positions of all agents via https://github.com/geek-ai/MAgent/blob/92256aa44669a7cef66e942c464cfe289f1dcba6/python/magent/gridworld.py#L361-L368 and then do the calculation in Python, i.e. compute the pairwise distances and compare them with the radius of the view range.
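For example, here is a minimal NumPy sketch of that approach (assuming the positions come back as an `(N, 2)` array of grid coordinates, one row per agent; `neighbors_within_view` is a hypothetical helper name, not part of the MAgent API):

```python
import numpy as np

def neighbors_within_view(positions, view_radius):
    """For each agent, return the indices of the other agents whose
    Euclidean distance is within view_radius.

    positions: (N, 2) array of agent (x, y) grid positions.
    """
    # Pairwise displacements via broadcasting: diff[i, j] = positions[i] - positions[j]
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff.astype(float), axis=-1)  # (N, N) distance matrix
    np.fill_diagonal(dist, np.inf)                      # exclude the agent itself
    return [np.flatnonzero(dist[i] <= view_radius) for i in range(len(positions))]

# Usage with hypothetical positions: agents 0 and 1 see each other, agent 2 sees no one.
pos = np.array([[0, 0], [2, 0], [10, 10]])
print(neighbors_within_view(pos, view_radius=3.0))
```

This is O(N^2) in the number of agents, which matches your efficiency concern; for large maps a spatial index (e.g. a grid bucket per cell) would scale better.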
Thanks for your reply.

1. I have already calculated the pairwise distances and compared them with the radius of the view range; I am just concerned about computational efficiency.
2. I notice here: speed = 1.0. However, that is the configured speed, not the current speed. For example, when an agent hits a wall or another agent, its speed may drop to zero. Likewise, when an agent is against the right-side wall and its action still orders it to move right, its speed will also be zero. The current speed information seems to be ignored, and it might be important to know whether an agent has become static.
3. C++ is fine. In temp_c_booster.cc, the function get_action:

```cpp
int get_action(const std::pair<int, int> &disp, bool stride) {
    int action = -1;
    if (disp.first < 0) {
        if (disp.second < 0) {
            action = 1;
        } else if (disp.second == 0) {
            action = stride ? 0 : 2;
        } else {
            action = 3;
        }
    } else if (disp.first == 0) {
        if (disp.second < 0) {
            action = stride ? 4 : 5;
        } else if (disp.second == 0) {
            action = 6;
        } else {
            action = stride ? 8 : 7;
        }
    } else {
        if (disp.second < 0) {
            action = 9;
        } else if (disp.second == 0) {
            action = stride ? 12 : 10;
        } else {
            action = 11;
        }
    }
    return action;
}
```

seems to contain only move actions; I still have not found the attack actions. Can one turn off the attack actions?
4. Does MAgent apply a wall-hitting penalty? How can I obtain it? Thanks for your patience.
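One way to effectively turn off attacks without touching the C++ side is to mask the attack indices when selecting actions. Here is a sketch, under the assumption that move actions occupy the leading indices of the action space and attack actions the trailing ones; `n_move` is a hypothetical parameter you would have to read from your agent-type configuration, not an existing MAgent field:

```python
import numpy as np

def mask_attack_actions(q_values, n_move):
    """Pick greedy actions from Q-values while forbidding attack actions.

    q_values: (n_agents, n_actions) array of per-action scores.
    n_move:   number of leading move actions; indices >= n_move are
              assumed to be attacks.
    """
    masked = q_values.copy()
    masked[:, n_move:] = -np.inf   # attack actions can never be the argmax
    return masked.argmax(axis=1)

# Usage: 2 agents, 5 actions of which the first 3 are (hypothetically) moves.
q = np.array([[0.1, 0.9, 0.2, 5.0, 4.0],
              [0.3, 0.1, 0.8, 9.9, 0.0]])
print(mask_attack_actions(q, n_move=3))  # both agents fall back to a move action
```

The same masking idea works for stochastic policies: zero out the attack probabilities and renormalize before sampling.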
Well, I have found the action-space layout. In AgentType.cc, line 118:

```cpp
int n_action = attack_base + attack_range->get_count();
for (int i = 0; i < n_action; i++) {
    action_space.push_back(i);
}
```

Because n_action = 21, there are 21 actions in total: (0, 1, 2, ..., 20).
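Given that construction, a flat action index can be classified along these lines. This is only a sketch: the `[moves | turns | attacks]` ordering is inferred from `n_action = attack_base + attack_range->get_count()` in AgentType.cc, and `decode_action`, `move_count`, and `attack_base` as Python-level names are assumptions for illustration:

```python
def decode_action(action, move_count, attack_base):
    """Classify a discrete action index into its group, assuming the
    layout [moves | turns | attacks]: move actions first, then any
    (optional) turn actions, then attack actions starting at attack_base."""
    if action < move_count:
        return ("move", action)
    elif action < attack_base:
        return ("turn", action - move_count)
    else:
        return ("attack", action - attack_base)

# Usage: with 13 move actions, turn disabled (so attack_base == 13),
# and 21 actions total, indices 13..20 would be the 8 attack actions.
print(decode_action(0, 13, 13))    # ('move', 0)
print(decode_action(15, 13, 13))   # ('attack', 2)
```

With turn disabled (the default, as noted above), the "turn" branch is simply never reached.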
Actions are discrete. They can be move, attack, and turn. The move and attack actions are easy to understand, but what does turn mean? I checked the frames in the render and found that the turn action corresponds to a half-line, so I am confused by it. Hoping for your reply.