geek-ai / MAgent

A Platform for Many-Agent Reinforcement Learning
MIT License

What does the turn action mean? #45

Open laker-sprus opened 5 years ago

laker-sprus commented 5 years ago

Actions are discrete: they can be move, attack, or turn. The move and attack actions are easy to understand, but what does turn mean? I checked the frames in the render and found that the turn action corresponds to a half-line, so I am confused about it. Hope for your reply.

laker-sprus commented 5 years ago

P.S. It seems positive rewards (>0) are better than negative rewards (<0), is that right? In the training process, which is more important: the rewards or the penalties?

merrymercy commented 5 years ago

By default, the direction system is disabled because we found it is hard for agents to learn: https://github.com/geek-ai/MAgent/blob/92256aa44669a7cef66e942c464cfe289f1dcba6/src/gridworld/GridWorld.cc#L21

Originally we designed our agents with a direction and a sector-shaped view range, but we found agents could not learn how to deal with directions (you know, in a map it is hard even for a human being). So in all our examples, the direction and turn actions are disabled. Every agent faces north and has a circular view range.
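
For reference, this is roughly how the shipped example configs register an agent type with a circular view range (a sketch with illustrative parameter values, not taken from this thread):

```python
import magent
from magent import gridworld as gw

# Sketch: registering an agent type whose view is a circle (no facing
# direction), as in the bundled examples. Numeric values are illustrative.
cfg = gw.Config()
cfg.set({"map_width": 40, "map_height": 40})
cfg.register_agent_type(
    "small",
    {
        "width": 1, "length": 1, "hp": 10, "speed": 2,
        "view_range": gw.CircleRange(6),      # circular view range
        "attack_range": gw.CircleRange(1.5),
    },
)
```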

laker-sprus commented 5 years ago

Thank you for your reply. It seems that the framework defines an agent's 12 move actions in world coordinates, which is enough for my use. If an agent has a view range, how can I obtain the IDs of the agents within its view range? It seems env.get_observation() does not provide this information.

merrymercy commented 5 years ago

You can use https://github.com/geek-ai/MAgent/blob/92256aa44669a7cef66e942c464cfe289f1dcba6/examples/train_battle.py#L64-L65
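
The linked lines gather per-group observations and agent IDs; roughly like this (a paraphrased sketch, assuming the built-in "battle" config for setup):

```python
import magent

# Paraphrased from the linked example: fetch per-group observations and IDs.
env = magent.GridWorld("battle", map_size=40)
env.reset()
handles = env.get_handles()

obs = [env.get_observation(h) for h in handles]  # observation tensors per group
ids = [env.get_agent_id(h) for h in handles]     # int32 agent-ID array per group
```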

laker-sprus commented 5 years ago

Thanks for your reply. However, obs[i] = env.get_observation(handles[i]) returns something like

```
array([[ 0.   ,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ,
         0.   ,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ,
         0.   ,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ,  0.   ,
         0.   ,  0.   ,  0.   ,  0.   ,  1.   ,  0.   ,  0.   , -0.105,
         0.408,  0.456], ...
```

which does not seem to contain any ID information. And ids[i] = env.get_agent_id(handles[i]) gives

```
('id: ', array([25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
       40, 41, 42, 43, 44, 45, 46, 47, 48, 49], dtype=int32))
```

which is every agent's ID. Therefore env.get_observation and env.get_agent_id cannot help to find the IDs within the view range. Do you have a better way?

laker-sprus commented 5 years ago

How can I obtain the velocity information of one agent?

laker-sprus commented 5 years ago

Where in the code can I add (or remove, or modify) the original actions? Hoping for your reply to the above questions. Thank you very much.

merrymercy commented 5 years ago

I can only point you to some related code. You have to modify the C++ code; hopefully you are okay with C++.

Another method is to get the positions of all agents via https://github.com/geek-ai/MAgent/blob/92256aa44669a7cef66e942c464cfe289f1dcba6/python/magent/gridworld.py#L361-L368 and then do the calculation in Python, i.e. compute the pairwise distances and compare them with the radius of the view range.
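
A minimal sketch of that approach, assuming env and handles are set up as in the examples and that env.get_pos(handle) returns an (N, 2) array of grid coordinates:

```python
import numpy as np

# Sketch: find, for each agent, the IDs of agents within its view range
# by comparing pairwise distances against the view radius.
pos = np.asarray(env.get_pos(handles[0]), dtype=np.float64)  # (N, 2)
ids = env.get_agent_id(handles[0])                           # (N,)

diff = pos[:, None, :] - pos[None, :, :]   # pairwise displacements, (N, N, 2)
dist = np.linalg.norm(diff, axis=-1)       # pairwise distances, (N, N)

view_radius = 6.0                          # illustrative; match your config
in_range = (dist <= view_radius) & ~np.eye(len(pos), dtype=bool)

# neighbors[i]: IDs of the agents visible to agent i
neighbors = [ids[row] for row in in_range]
```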

laker-sprus commented 5 years ago

Thanks for your reply.

1. I have already calculated the pairwise distances and compared them with the radius of the view range; my only concern is the computational efficiency.
2. I notice here: speed = 1.0. However, that is the configured speed, not the current speed. For example, when an agent hits a wall or other agents, its speed may drop to zero. Likewise, when an agent is against the right-side wall and the chosen action still points to the right, its speed will also be zero. The current speed information seems to be ignored, yet it might be important to know whether an agent has become static (see the sketch after this comment).
3. C++ is fine. In temp_c_booster.cc, the function get_action:

```cpp
int get_action(const std::pair<int, int> &disp, bool stride) {
    int action = -1;
    if (disp.first < 0) {
        if (disp.second < 0) {
            action = 1;
        } else if (disp.second == 0) {
            action = stride ? 0 : 2;
        } else {
            action = 3;
        }
    } else if (disp.first == 0) {
        if (disp.second < 0) {
            action = stride ? 4 : 5;
        } else if (disp.second == 0) {
            action = 6;
        } else {
            action = stride ? 8 : 7;
        }
    } else {
        if (disp.second < 0) {
            action = 9;
        } else if (disp.second == 0) {
            action = stride ? 12 : 10;
        } else {
            action = 11;
        }
    }
    return action;
}
```

seems to contain only move actions; I still have not found the attack actions. Can the attack actions be turned off?
4. Does MAgent include a wall-hitting penalty? How can I obtain it? Thanks for your patience.
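
A minimal sketch of a possible workaround for point 2 (not an official API), assuming env.get_pos(handle) returns an (N, 2) array and the group's agent ordering is stable across one step:

```python
import numpy as np

# Sketch: estimate the realized per-step speed by diffing positions,
# since the engine only exposes the configured speed. Assumes `env` and
# `handles` are set up as in the examples and actions were already set
# for each group via env.set_action.
prev_pos = np.asarray(env.get_pos(handles[0]), dtype=np.float64)
env.step()
cur_pos = np.asarray(env.get_pos(handles[0]), dtype=np.float64)

# Displacement actually achieved this step; zero when blocked by a wall
# or by another agent.
realized_speed = np.linalg.norm(cur_pos - prev_pos, axis=1)
is_static = realized_speed == 0.0
```

Note that agents dying within the step would change N, so the arrays should only be diffed when the group's population is unchanged.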

laker-sprus commented 5 years ago

Well, I have found the action space layout. In AgentType.cc, line 118:

```cpp
int n_action = attack_base + attack_range->get_count();
for (int i = 0; i < n_action; i++) {
    action_space.push_back(i);
}
```

Since n_action = 21, there are 21 actions in total (0, 1, 2, ..., 20).
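
Putting the pieces together, a hedged reading of the layout (inferred from this thread, not verified against the full source): get_action above covers indices 0-12, so attack_base would be 13 and the remaining 8 indices would be attack actions:

```python
# Inferred sketch of the discrete action layout, assuming attack_base = 13
# and n_action = 21. ATTACK_BASE is a hypothetical constant mirroring
# attack_base in AgentType.cc.
ATTACK_BASE = 13

def describe_action(a: int) -> str:
    """Classify an action index under the assumed layout."""
    if a < ATTACK_BASE:
        return "move/stay #%d" % a
    return "attack #%d" % (a - ATTACK_BASE)

print([describe_action(a) for a in range(21)])
```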