Closed dkal3 closed 4 years ago
Hi!
Let's see if I can't shed some light.
As you've noted, every pedestrian model is responsible for computing velocity. That velocity is integrated (using very simple, first-order, explicit forward integration) in BaseAgent::update() (https://github.com/MengeCrowdSim/Menge/blob/master/src/Menge/MengeCore/Agents/BaseAgent.cpp#L80-L100).
Now, I don't fully understand the exact semantics of your buffer. It sounds a bit like cellular automata -- e.g., an agent has a current position and can move in four (or eight) directions in a single time step. Yours may be more complex than that, but it's what came to mind. So, with that idea in mind, let me explain how I would implement it.
- Given the current position, select the next position you want the agent in.
- Grab the simulator time step and compute a velocity that is the displacement from the current position to the target position, divided by that time step.

Simple.
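The two steps above can be sketched in a few lines. This is illustrative only -- `Vec2` and `velocityFromBuffer` are stand-in names for this sketch, not Menge's actual API (Menge's real vector type lives in MengeCore/Math):

```cpp
#include <cassert>

// Minimal 2D vector, just to keep the sketch self-contained.
struct Vec2 {
    float x, y;
};

// Given the agent's current position and the next position taken from
// the buffer, emit the velocity that moves the agent there in exactly
// one simulator time step. BaseAgent::update()'s forward-Euler step
// (pos += vel * dt) then lands the agent on the buffered position.
Vec2 velocityFromBuffer(const Vec2& current, const Vec2& next, float timeStep) {
    return Vec2{ (next.x - current.x) / timeStep,
                 (next.y - current.y) / timeStep };
}
```

With a 0.1 s time step, a buffered displacement of (1, 0.5) yields a velocity of (10, 5).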
One of the challenges is that you'd have to have a data structure that represents your environment -- where the obstacles are, and what happens when two agents want to go to the same place. If the agents can't decide independently where they want to move -- if they have to coordinate in some way -- then I'd put all of that logic into a Task. That task would evaluate once per time step and define, in a coordinated manner, all of the agents' next positions. Then each agent would independently look up that solution in the table populated by the Task. I've been interested in introducing some pedestrian models using continuum mechanics -- I was considering using that kind of approach (solve in a task, apply the results during agent update).
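A minimal sketch of that "solve in a task, look up per agent" pattern might look like the following. All names here are invented for illustration -- this is not Menge's Task interface, and the first-come-first-served cell claim is just a placeholder for whatever coordination rule the model actually needs:

```cpp
#include <cassert>
#include <map>
#include <utility>

struct Vec2 { float x, y; };

// Hypothetical coordination table in the spirit of a Menge Task.
// Once per time step the task resolves all agents' moves together
// (e.g. refusing to let two agents enter the same cell); each agent
// then independently looks up its result during its own update.
class NextPositionTable {
public:
    // Called once per time step by the task: decide everyone's next
    // position in a coordinated way. Here "coordination" is a trivial
    // first-come-first-served claim on integer grid cells.
    void solve(const std::map<std::size_t, Vec2>& requests) {
        _next.clear();
        std::map<std::pair<int, int>, bool> claimed;
        for (const auto& [id, pos] : requests) {
            std::pair<int, int> cell{ (int)pos.x, (int)pos.y };
            if (!claimed[cell]) {    // cell free: grant the move
                claimed[cell] = true;
                _next[id] = pos;
            }                        // else: agent gets no entry
        }
    }

    // Called independently by each agent during its update.
    bool lookup(std::size_t agentId, Vec2* out) const {
        auto it = _next.find(agentId);
        if (it == _next.end()) return false;
        *out = it->second;
        return true;
    }

private:
    std::map<std::size_t, Vec2> _next;
};
```

An agent whose lookup fails would simply compute a zero velocity (stay in place) for that step.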
Does that make sense?
Actually, I'm studying an algorithm called IMADDPG (Improved Multi-Agent Deep Deterministic Policy Gradient) for path planning, which is based on deep reinforcement learning -- agents learn to reach their goals from reward and punishment. This is a part of the path planning algorithm.
From the code you can see only the buffer, where the sequential positions of an agent are stored.
I would like to know whether, in your opinion, this new model can be integrated into Menge.
On Wed, Feb 26, 2020, 3:30 AM, Sean Curtis <notifications@github.com> wrote:
Could you attach a copy of the paper to this issue? I looked at the pseudo-algorithm and read (what I believe is) the abstract of the paper online, but I don't have access to the full paper. That would better enable me to make suggestions.
Of course! I would be grateful!
I've read through the paper, and while there are aspects of it I skimmed over, I have a sense of the approach. Some clarifying questions:
Generally, though, given an eventually clear understanding of the leader agent's path, I think things would map easily to Menge.
Thank you very much for your reply. The truth is that I'm going to start implementing it now, so I don't know some of the details yet. I'll contact you again if necessary. Thank you so much for everything!
If you want some more direct help, you can email a link to your branch to menge@cs.unc.edu, and I can pull your branch and help direct your efforts.
Ok!! Thank you very much!!! Have a nice day!!!
Hello! I hope that you are fine!
As is known, Menge is based on ORCA, which is a velocity-based model, i.e., every time step we have a feasible velocity, and from this velocity each agent updates its position (the computeNewVelocity() function). I would appreciate it if you could tell me where the connection between the new velocity and the simulator takes place. I mean, once I have the new_velocity computed by ORCA, where does this velocity reach the simulator and update the position?
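The hand-off being asked about can be sketched as a plain forward-Euler step. This is a simplified stand-in, not Menge's actual code -- the real BaseAgent::update() does more bookkeeping (orientation, acceleration limits) -- but the core position update has this shape:

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// After the pedestrian model (e.g. ORCA's computeNewVelocity()) has
// produced a new velocity, the simulator applies it with explicit
// first-order forward integration: position += velocity * timeStep.
void applyVelocity(Vec2& position, const Vec2& newVelocity, float timeStep) {
    position.x += newVelocity.x * timeStep;
    position.y += newVelocity.y * timeStep;
}
```

So a velocity of (2, 0) over a 0.5 s step moves an agent at the origin to (1, 0).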
If I want to add to Menge a new model that is not velocity-based, but instead has a buffer of agent positions from which the agent moves -- is that possible?
Thank you in advance!!!