g4v4g4i / ArgABM

An agent-based model for scientific inquiry based on abstract argumentation

Successful convergence as an abort criterion #51

Closed dunjaseselja closed 6 years ago

dunjaseselja commented 6 years ago

In order to use the model to examine the case of peptic ulcer disease (PUD), we need to introduce an alternative abort criterion which allows agents to (eventually) all converge on the best theory. One option would be to introduce a switch that initiates the following procedure: as soon as some agents fully explore one of the theories and thus become "idle" (the current abort criterion), communication takes a different form. From this point on, these agents represent scientists who have completed their research on the given theory and moved on to other topics. They then share a random set of fully explored arguments of their current theory (representing their publications, which are publicly available to other scientists), while the other agents share information in the usual way. More precisely: Once the current abort criterion is triggered the following procedure takes place:

  • if agents aren't all on one theory:
    • agents on the fully explored theory share a random set of arguments from the objective landscape
  • if all agents are on one (wrong) theory: they receive small bits of information (e.g. a single argument or an attack from other theories) at a time from the objective landscape.

The idea behind this distinction is that under a wrongly held consensus, agents will need more time to discover their mistake than they would if some scientists were actually researching rival theories.
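
Roughly, the switch would dispatch between the two bullets above like this (an illustrative sketch, not actual model code; all names are made up):

```python
# Illustrative sketch of the proposed switch; the function and the
# returned labels are made-up names, not the model's actual code.

def communication_regime(agent_theories):
    """Pick the communication regime once the current abort
    criterion has been triggered."""
    if len(set(agent_theories)) > 1:
        # agents on the fully explored theory share their publications
        return "share publications"
    # all agents sit on the same (possibly wrong) theory:
    # drip-feed small bits of information from other theories
    return "drip feed"

print(communication_regime(["T1", "T1", "T2"]))  # -> share publications
print(communication_regime(["T1", "T1", "T1"]))  # -> drip feed
```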

daimpi commented 6 years ago

A few clarifications and questions:

Once the current abort criterion is triggered the following procedure takes place:

  • if agents aren't all on one theory:
    • agents on the fully explored theory share a random set of arguments from the objective landscape

Let's call this Case 1: Partially Stuck: these agents share only arguments from their current theory (i.e. the fully explored one), and they share each argument as if they were standing on it in the objective landscape at this point in time (i.e. as a red argument with all its links and child/mother arguments)
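
A sketch of what that exchange could look like (toy data structures; the record fields are assumptions, not the model's actual representation):

```python
# Toy sketch of the Case 1 exchange; the fields "attacks" and
# "relatives" are assumptions, not the model's data structures.
import random

def share_publication(explored_arguments, sample_size, rng=random):
    """An idle agent shares a random set of its fully explored
    arguments, each as red and with its full neighbourhood."""
    sample = rng.sample(explored_arguments,
                        min(sample_size, len(explored_arguments)))
    return [{"argument": arg["id"],
             "colour": "red",                # shared as fully explored
             "attacks": arg["attacks"],      # all attack links
             "relatives": arg["relatives"]}  # mother/child arguments
            for arg in sample]

pub = share_publication(
    [{"id": "a1", "attacks": ["b3"], "relatives": ["a0", "a2"]}],
    sample_size=1)
print(pub)
```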

  • if all agents are on one (wrong) theory: they receive small bits of information (e.g. a single argument or an attack from other theories) at a time from the objective landscape.

Let's call this Case 2: Wrong Consensus: all agents receive the same argument XOR attack once a month (= every 30 ticks) from another theory. This argument then turns red in the objective landscape (if it isn't already red), i.e. henceforth it counts as fully discovered in the objective landscape (which includes the discovery of its attacks and mother/child arguments there). As soon as agents are no longer converged on the wrong theory, this trickle of information is discontinued and agents share information as described above in "Case 1: Partially Stuck".
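
For reference, the timing could be sketched like this (the 30-tick interval is the "once a month" above; all names are made up):

```python
# Sketch of the Case 2 drip-feed; names are made up, not model code.
import random

TICKS_PER_MONTH = 30

def drip_feed(tick, foreign_arguments, foreign_attacks, rng=random):
    """Every 30 ticks all agents receive the same single item:
    an argument XOR an attack from another theory."""
    if tick % TICKS_PER_MONTH != 0:
        return None  # nothing new this tick
    pool = foreign_arguments + foreign_attacks
    # exactly one element, never an argument and an attack together
    item = rng.choice(pool)
    return item  # the caller turns it red in the objective landscape
```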

The new exit-condition will be: convergence on the best theory.
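
As a minimal sketch (`best_theory` stands for the objectively best theory; names are illustrative):

```python
def run_finished(agent_theories, best_theory):
    """New exit condition: every agent has converged on the best theory."""
    return set(agent_theories) == {best_theory}
```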

Questions:

When answering all those questions we probably want to take into consideration:

dunjaseselja commented 6 years ago
  • are the above clarifications correct?

Yes

  • Should the procedure for Case 1 be discontinued while we’re in Case 2?

Yes

  • in our current exit-condition we give the agents a final chance to switch theories, ignoring the jump-threshold. Is this something we want to keep before the new procedure kicks in?

I'd say: no, since they now have more time to make an evaluation

  • for Case 2: wrong consensus:
    • how exactly do we want the arguments from the other theories to be selected? Some options would be: fully random, only random from those not yet discovered in the objective landscape, etc. Alternatively, we could make the selection more structured, e.g. starting from the root and then proceeding outwards. The latter approach would have the effect (/advantage?) that agents could not walk on arguments whose grandmother argument is undiscovered.
    • How exactly do we want the attacks from/to the other theories to be selected? (Similar considerations as for the last point).
    • If we use random draws: should they be with or without replacement?
  • An additional option would be, instead of turning those arguments red in the objective landscape, to just turn them turquoise. This would prevent agents from walking on them without first exploring them properly (and in particular from walking on arguments whose (grand)mother argument is not yet discovered).

I suggest making them turquoise and random: this simulates a random appearance of anomalies in their own theory (rather than actual exploration of the alternative theory). They learn either an argument or an attack+argument, with replacement.
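
In sketch form, the suggested selection would then be (random with replacement, learned items turning turquoise; all names are illustrative):

```python
# Sketch of the suggested selection: fully random, with replacement,
# and learned items turn turquoise; names are illustrative.
import random

def monthly_anomaly(arguments, attacks, landscape_colour, rng=random):
    """Reveal either one argument, or one attack together with the
    argument it targets; drawn with replacement."""
    if rng.random() < 0.5:
        learned = [rng.choice(arguments)]
    else:
        attack = rng.choice(attacks)
        learned = [attack, attack["target"]]  # attack + attacked argument
    for item in learned:
        # turquoise: discovered but not yet properly explored, so
        # agents cannot walk on it before exploring it first
        landscape_colour[item["id"]] = "turquoise"
    return learned

colours = {}
monthly_anomaly([{"id": "b1"}],
                [{"id": "att1", "target": {"id": "b2"}}],
                colours)
print(colours)  # e.g. {'b1': 'turquoise'} or {'att1': ..., 'b2': ...}
```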

daimpi commented 6 years ago

Addendum: the Case 2: Wrong Consensus procedure also takes effect if agents are distributed over two or more theories which are all fully explored (= red) in the objective landscape. Furthermore: in the Case 2: Wrong Consensus procedure agents are always guaranteed to learn about the best theory, in the sense that the set of attacks and arguments from which they randomly learn one element always includes attacks and arguments belonging to the best theory.
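
In sketch form (helper and parameter names are assumptions, not model code):

```python
# Sketch of the guarantee: the pool agents draw from always contains
# the best theory's arguments and attacks.

def build_candidate_pool(items_by_theory, occupied_theories, best_theory):
    """Everything outside the theories agents currently occupy; since
    agents are stuck on wrong, fully explored theories, the best
    theory is always represented in the pool."""
    pool = [item
            for theory, items in items_by_theory.items()
            if theory not in occupied_theories
            for item in items]
    # sanity check: the best theory's items must always be options
    assert set(items_by_theory[best_theory]) <= set(pool)
    return pool

print(build_candidate_pool(
    {"T1": ["a1"], "T2": ["a2"], "T3": ["a3", "att3"]},
    occupied_theories={"T1", "T2"},
    best_theory="T3"))  # -> ['a3', 'att3']
```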

daimpi commented 6 years ago

Additional question regarding Case 2: when agents learn an argument (either via learning a random attack or via learning a random argument) which is not gray or turquoise in the objective landscape, should agents learn the argument…

  1. …in the state it actually exhibits in the objective landscape (e.g. yellow) or
  2. …as turquoise?

I'd slightly tend towards 1. but I'm very much open to either implementation.