You're very much right about the premise that researchers are rarely able to find a defender-child. I've recently run some tests with markers for when that happened: it happened at most a few times during a whole run, and in many runs it didn't happen at all. Nevertheless, imho giving them insight into all arguments for potential defenders sounds a bit too strong. What do you think about a middle-ground solution: giving them prospective insight into all arguments with group members on them? That would even have a nice real-world interpretation, I think :). Maybe we could implement some tests to see how that would look and check whether we're happy with the outcome.
Thanks for running those tests, that's really good info. The idea sounds good (this was indeed one of the options we had in mind back then as a possible solution). If you can run a test for how often that would happen, so that we know whether this solution increases the agents' heuristic behavior, that would certainly be helpful! Then we can decide whether we should implement that solution or some other.
Just as a clarification on my comment above: it happens rarely that they are successful in their search for a defender-child, but it still happens quite frequently that they stop to search for a defender-child, i.e. there are behavioral ramifications from the prospective movement nevertheless.
Regarding your comment: Yes I can certainly look into that. Do we have a timeline on that?
Yes, that's how I understood it! We haven't set any deadline for this since it wasn't clear how difficult it would be to improve, but if it's relatively easy, then it would be very good to see what happens if the agents actually do find a defense of their arguments.
Ok, I've run some tests and the results are in some regards surprising and in others not. I've compared four criteria:
In all those cases the standard restrictions of course still apply, i.e. the argument must not be red, gray, or turquoise; it mustn't have a group member on it; and it must be from the same theory as the current researcher.
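For readers who don't have the model open, here is a minimal Python sketch of that shared filter, assuming my reading of the restrictions above is right (the class and function names are made up for illustration; the actual model is NetLogo code in the repo):

```python
from dataclasses import dataclass

@dataclass
class Argument:
    color: str                     # e.g. "red", "gray", "turquoise", or a neutral color
    theory: int                    # theory the argument belongs to
    has_group_member: bool = False

def passes_standard_restrictions(arg: Argument, researcher_theory: int) -> bool:
    """Restrictions that apply under every criterion: the argument must not be
    red, gray or turquoise, must not have a group member on it, and must belong
    to the same theory as the current researcher."""
    return (arg.color not in ("red", "gray", "turquoise")
            and not arg.has_group_member
            and arg.theory == researcher_theory)

def eligible_candidates(candidate_pool, researcher_theory):
    """Apply the standard restrictions to whatever pool a given criterion yields
    (child arguments only, all visible arguments of the theory, ...)."""
    return [a for a in candidate_pool
            if passes_standard_restrictions(a, researcher_theory)]
```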
What is not surprising is that the more relaxed a criterion is, the more often the researchers will actually be able to find a defender. What I find surprising, though, is the large number of cases in which they fail to find a defender even in the most permissive case, "all-visible". This is even more surprising considering that this data is only for the objectively best theory and was generated with the maximum number of researchers (100):
(Figure: Tukey boxplots without outliers. Two and three theories, 100 researchers, reliable, heterogeneous groups, complete networks, 1000 runs each. There was no difference between two and three theories.)
The first plot shows the ratio, i.e. (number of successful prospective moves) / (number of times researchers wanted to move prospectively but did not find a defender).
The second plot shows the absolute number of successful prospective moves. Keep in mind that this data is for the objectively best theory only.
The summary statistics can be found here: https://gist.github.com/daimpi/432bffc99ad2b92b64826d38ceea19b2
The full results, including the raw .csv data, can be found in \Dropbox\Agent-based models\Results\Zollman-networks\20 02 2017 Prospective-movement-tests
Very nice! We should probably think about ways to improve the success rate of the prospective behavior.
This is indeed very useful info. I think for now, given this data, we should maybe switch to the option where agents search for a defense in all the visible arguments of the given theory (the option where they perform best). I think it's not surprising that they don't perform better: most of the time the defense simply won't be among the discovered arguments (and later on it will be discovered anyway).
For future versions of the model, we could maybe make a more structured landscape where defense is more likely to come from some parts of the graph (e.g. from the children of the given argument) than from anywhere else.
As a side note: there is currently no guarantee that they will stay on a defender long enough to actually discover the counter-attack. I think we already talked about this once, but it's probably not too big a deal, as most arguments on our graph (and therefore also most defenders) are leaves, due to its tree structure. On the leaves researchers will stay until the argument becomes fully researched anyway. In the other cases they could potentially move away, but that's far from guaranteed, as they move every tick with the small move-probability (~8%) and only every 5th tick with the full move probability (~40%), and that's already presupposing that they can move at all (i.e. no group member blocking, existence of a discovered child argument, …).
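To make the "far from guaranteed" point a bit more concrete, here is a back-of-the-envelope sketch under the assumption that the move attempt each tick is independent and nothing blocks it (hypothetical Python helper, not model code):

```python
def prob_still_on_argument(n_ticks: int,
                           small_move_prob: float = 0.08,
                           full_move_prob: float = 0.40) -> float:
    """Probability that a researcher has not moved away after n_ticks,
    assuming an independent move attempt each tick: the full probability
    on every 5th tick, the small one otherwise, and no blocking conditions."""
    stay = 1.0
    for tick in range(1, n_ticks + 1):
        p_move = full_move_prob if tick % 5 == 0 else small_move_prob
        stay *= 1.0 - p_move
    return stay

# e.g. after 10 ticks the researcher is still on the defender with
# probability ~0.18 under these assumptions
print(round(prob_still_on_argument(10), 2))
```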
The new data on the prospective movement comparing "all-subj-argu" with "all-visible" has been processed; here are some results:
(Figure: performance-monist for different numbers of theories and network structures, comparing the "all-subj-argu" criterion with "all-visible". Each bar aggregates 10, 40 & 100 scientists with 1000 runs each.)
The performance-monist metric is a normalized metric on the interval [0, 100] (100 is best).
It is calculated the following way for each run:

performance-monist = (sum over all theories (100 * `research-time-monist` * `objective-admissibility`)) / (#researchers * #ticks * `objective-admissibility-best-theory`)

where `research-time-monist` corresponds to the number of researchers on theory-x summed over all ticks of the run, and `objective-admissibility-best-theory` is 85 (= full admissibility) in the case of theory-depth 3.
Relevant code (line 1746): https://github.com/g4v4g4i/ArgABM/blob/Testprocedures/admcalc-tests.nls#L1746
This metric tries to capture how well researchers perform during a run given the admissibility of the objective landscape: if every researcher spent the whole run on the best theory, this criterion takes the value 100 (= best value); if there were another theory with admissibility 0 and researchers spent the whole run there, it would take the value 0 (= worst value). This metric differs from the monist-/pluralist-success metric insofar as it looks at what happens during the run, while the monist-/pluralist-success metric looks at what happens at the end of (and therefore also after) the run. In this case it turns out that they give us the same information, and it is to be expected that they are generally highly correlated, but they have different interpretations. (The data for the binary monist-/pluralist-success metric is given below.)
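For readability, here is a small Python paraphrase of that calculation (this is not the code from admcalc-tests.nls; the dictionary-based representation and the 1000-tick run length in the example are my own assumptions for illustration):

```python
def performance_monist(research_time, admissibility, best_theory,
                       n_researchers, n_ticks):
    """performance-monist on a 0-100 scale.

    research_time[t] : researcher-ticks spent on theory t
                       (number of researchers on t, summed over all ticks)
    admissibility[t] : objective admissibility of theory t
    best_theory      : key of the objectively best theory
    """
    numerator = sum(100 * research_time[t] * admissibility[t] for t in research_time)
    denominator = n_researchers * n_ticks * admissibility[best_theory]
    return numerator / denominator

# If all 100 researchers spend an entire (hypothetical) 1000-tick run on the
# best theory (admissibility 85 at theory-depth 3), the metric is 100:
print(performance_monist({"best": 100 * 1000, "other": 0},
                         {"best": 85, "other": 40},
                         "best", n_researchers=100, n_ticks=1000))
```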
Here is another graph from the same data, but only measuring runs with 100 researchers: (Figure: performance-monist for different numbers of theories and network structures, comparing the "all-subj-argu" criterion with "all-visible". Each bar aggregates runs for 100 scientists with 1000 runs each.)
As you can see, there is hardly any difference between "all-subj-argu" and "all-visible" with respect to this criterion in either case.
This also shows in the summary statistics which can be found here: https://gist.github.com/daimpi/67823cb0e003a5b4cd66e74dc730d633
Looking e.g. at our usual binary monist-success metric, we find: no statistically significant difference, and certainly not a practically significant one.
The whole dataset can be found at \Dropbox\Agent-based models\Results\Zollman-networks\26 02 2017 prosp-mov-tests2 . Note, though, that I had to manually fix the results for the performance-monist/pluralist metric, as there was unfortunately an incorrect formula in the code (dividing by the number of theories, which shouldn't happen), so the data on this metric in the .csv file is skewed and is only un-skewed in the …-corrected.dta files.
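If I read the correction right, un-skewing just undoes that extra division, i.e. something along these lines (hypothetical helper, not the actual fix script):

```python
def unskew(skewed_value: float, n_theories: int) -> float:
    """Undo the erroneous division by the number of theories
    (assuming that is all the correction amounts to)."""
    return skewed_value * n_theories
```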
The prospective movement with "all-subj-argu" has been implemented via commit 89464615922892dbdefaf4b77b631c755e00433d
Currently, when agents stand on an argument A that is undefended, they prospectively search for a defense among the child-arguments of A, where prospective search means that they have insight into the gray defense arrows. If they find such a defense, they move to that argument in order to properly discover the defense of A. The problem with this scenario is that it is highly unlikely that agents will find a defense this way, since a defense could come from an argument that is not one of A's child-arguments. One possible solution could be to allow agents to prospectively search for a defense of A in all discovered arguments of the given theory. This would allow them to find a defense more often than they currently do, but it still wouldn't happen too often, because a defense might be hidden at some deeper level which hasn't been discovered yet.
What does everyone think about this?
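To make the proposal concrete, here is a rough Python sketch of the prospective search step (my paraphrase of the described mechanism, not the model code; all names are hypothetical, and "insight into the gray defense arrows" is modelled by letting the attack check consult not-yet-discovered arrows):

```python
def find_prospective_defender(A, candidates, attackers_of, attacks):
    """Return the first candidate that defends A, i.e. attacks an attacker of A,
    or None if no such defender is among the candidates. `attacks` may consult
    gray (not-yet-discovered) arrows, which is what makes the search prospective.
    Under the current behavior `candidates` would be the child-arguments of A;
    under the proposed change, all discovered arguments of the given theory."""
    for candidate in candidates:
        if any(attacks(candidate, attacker) for attacker in attackers_of(A)):
            return candidate
    return None

# Toy usage: B attacks A; D attacks B via a still-gray arrow, so D defends A.
arrows = {("B", "A"), ("D", "B")}
defender = find_prospective_defender(
    "A",
    candidates=["C", "D"],  # e.g. all discovered arguments of the theory
    attackers_of=lambda a: [x for (x, y) in arrows if y == a],
    attacks=lambda x, y: (x, y) in arrows,
)
print(defender)  # -> D
```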