droftware / Medusa

Simulation of hide and seek game

Tweak bayesian network and associated probability values #4

Open droftware opened 7 years ago

droftware commented 7 years ago

Though the path planning and the Bayesian controller have been integrated, the agent is still not respecting the path, i.e. it is not traversing to its designated goal position, because the probabilities of several random variables (visibility, obstruction, etc.) intermingle and drown out the goal information.

The Bayesian network, as well as the probability tables of the associated random variables, needs to be tweaked so that the agent respects the goal destination of its formulated path.
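
For concreteness, here is a minimal sketch of the kind of computation where these tables matter. It is not the actual controller code, and every number in it is a made-up placeholder; it only shows where the Target weights compete with Obstruction and Visibility:

```python
# Minimal sketch (placeholder values, not the project's actual network): each candidate
# direction is scored by the likelihood of its variable values, then the scores are
# normalised into a posterior over directions. Tweaking the issue means adjusting
# tables like these so Target is not drowned out by Obstruction/Visibility.
CPT = {
    'Target':      {'T': 0.6, 'F': 0.4},
    'Obstruction': {'0': 0.05, '1': 0.1, '2': 0.25, '3': 0.6},
    'Visibility':  {'0': 0.2, '1': 0.3, '2': 0.5},
}

def direction_posteriors(directions):
    """directions: list of dicts such as {'Target': 'F', 'Obstruction': '3', 'Visibility': '1'}."""
    scores = []
    for d in directions:
        score = 1.0
        for var, value in d.items():
            score *= CPT[var][value]
        scores.append(score)
    total = sum(scores)
    return [s / total for s in scores] if total > 0 else scores

# With weak Target weights like these, a Target=F direction with high Obstruction
# outscores the Target=T directions after normalisation, which is the kind of
# imbalance described in the observations below.
```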

droftware commented 7 years ago

Observation 1: The Target random variable is clearly being overshadowed by Obstruction and Visibility.

Consider the last direction, where Obst:3, Vis:1 and Target:F; the corresponding posterior probability is 0.232. Now consider the directions where Target:T: the values are 0.006 (Obs:1, Vis:2) and 0.0003 (Obs:0, Vis:0). These values are no match for the posteriors of the directions where Obs or Vis is >= 2.

All random variables set, per candidate direction, with the resulting posterior:

Dir | Target | Danger | Obstruction | Visibility | Hider | Seeker | Blockage | Posterior
----|--------|--------|-------------|------------|-------|--------|----------|----------
0   | F      | 0      | 3           | 0          | 0     | 0      | 0        | 0.15699839127296644
1   | F      | 0      | 3           | 0          | 0     | 0      | 0        | 0.15699839127296644
2   | F      | 0      | 0           | 0          | 0     | 0      | 0        | 0.00025993939072049556
3   | F      | 0      | 3           | 0          | 0     | 0      | 0        | 0.15699839127296644
4   | F      | 0      | 2           | 1          | 0     | 0      | 0        | 0.0046280652395040891
5   | F      | 0      | 2           | 1          | 0     | 0      | 0        | 0.0046280652395040891
6   | F      | 0      | 2           | 1          | 0     | 0      | 0        | 0.0046280652395040891
7   | F      | 0      | 3           | 0          | 0     | 0      | 0        | 0.15699839127296644
8   | F      | 0      | 3           | 0          | 0     | 0      | 0        | 0.1140213695038836
9   | F      | 0      | 0           | 0          | 0     | 0      | 0        | 7.6297964582254014e-05
10  | F      | 0      | 0           | 0          | 0     | 0      | 0        | 1.986822018648759e-05
11  | T      | 0      | 0           | 0          | 0     | 0      | 0        | 0.00037123910581614023
12  | T      | 0      | 1           | 2          | 0     | 0      | 0        | 0.0062351650998350387
13  | F      | 0      | 1           | 2          | 0     | 0      | 0        | 0.0013643480494011468
14  | F      | 0      | 1           | 2          | 0     | 0      | 0        | 0.0028666336809768404
15  | F      | 0      | 0           | 0          | 0     | 0      | 0        | 0.00025975764832106074
16  | F      | 0      | 3           | 1          | 0     | 0      | 0        | 0.23264762052589899
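
A quick check using the Target flags and posteriors from the table above makes the imbalance explicit:

```python
# Quick check of Observation 1 using the values from the table above: the total
# posterior mass on the two Target=T directions is tiny compared with the single
# best Target=F direction.
target = ['F', 'F', 'F', 'F', 'F', 'F', 'F', 'F', 'F', 'F', 'F', 'T', 'T', 'F', 'F', 'F', 'F']
posterior = [0.15699839127296644, 0.15699839127296644, 0.00025993939072049556,
             0.15699839127296644, 0.0046280652395040891, 0.0046280652395040891,
             0.0046280652395040891, 0.15699839127296644, 0.1140213695038836,
             7.6297964582254014e-05, 1.986822018648759e-05, 0.00037123910581614023,
             0.0062351650998350387, 0.0013643480494011468, 0.0028666336809768404,
             0.00025975764832106074, 0.23264762052589899]

mass_on_target = sum(p for t, p in zip(target, posterior) if t == 'T')
best_non_target = max(p for t, p in zip(target, posterior) if t == 'F')
print(mass_on_target, best_non_target)  # ~0.0066 vs ~0.233: Target is being overshadowed
```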

droftware commented 7 years ago

One way to circumvent the problem caused by Observation 1 above is to have three major behavioural types for the hider. Each behavioural type will be defined by its own customised Bayesian network built from a subset of the existing random variables (see the sketch after the list below).

The three behavioural types will be:
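
Whatever the three types turn out to be, the mechanism could look roughly like the sketch below. The type names and variable subsets are placeholders for illustration, not the final design:

```python
# Placeholder sketch: each behavioural type owns a subset of the existing random
# variables and would get its own (to-be-tuned) Bayesian network over that subset.
# The names and groupings below are illustrative only.
BEHAVIOUR_TYPES = {
    'goal_seeking':  ['Target', 'Blockage'],
    'cover_seeking': ['Obstruction', 'Visibility', 'Blockage'],
    'evasive':       ['Seeker', 'Danger', 'Visibility'],
}

def network_inputs(behaviour, per_direction_values):
    """Keep only the random variables used by this behaviour's Bayesian network."""
    wanted = BEHAVIOUR_TYPES[behaviour]
    return [{var: d[var] for var in wanted} for d in per_direction_values]
```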

droftware commented 7 years ago

Sidenote: Changing the above behavioural types of the agents, as well as giving instructions to move to other regions, will be handled by a higher-level decision-making layer. This layer will be controlled by a few commanding agents, not by all agents. Each commanding agent will have some agents assigned to it, and it will be the commanding agent's duty to assign and change behaviours and to give movement-related instructions based on its higher-level view of the game.
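
Roughly, that layer could be structured like the sketch below; the class and method names are placeholders and none of this exists in the code yet:

```python
# Hypothetical sketch of the higher-level layer: a commanding agent owns a group of
# ordinary agents and pushes behaviour changes and movement orders to them based on
# its own view of the game. All names here are placeholders.
class CommandingAgent:
    def __init__(self, squad):
        self.squad = squad  # the agents assigned to this commander

    def update(self, game_view):
        for agent in self.squad:
            behaviour = self.choose_behaviour(agent, game_view)
            region = self.choose_region(agent, game_view)
            agent.set_behaviour(behaviour)    # switch the agent's behavioural type
            agent.set_target_region(region)   # movement instruction to another region

    def choose_behaviour(self, agent, game_view):
        # Decision logic to be designed once this layer is actually built.
        raise NotImplementedError

    def choose_region(self, agent, game_view):
        raise NotImplementedError
```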

droftware commented 7 years ago

Postponing this issue. The exact probability values for the prior and conditional distributions can only be decided by analysing the simulation once the higher-level decision-making layer is coded and integrated with the lower-level Bayesian nets.