cjayb opened this issue 3 years ago
Or rather create several test datasets based on realistic use cases
I think this would be great to have!! Probably a collaboration with @rythorpe ?
Do we need more cases before we are satisfied that the new API can replace the old?
I would let folks use it for a bit. It would be great if we could use it in a class for teaching or something and see what issues people face. One thing we definitely need to fix is #239 before even talking about any replacements ;-) We can change the default to use the new API but still leave the option for the old behavior for a couple of months at least.
Would it be reasonable for each drive to have a unique seed
I like this option better because the order of adding the drives will not matter then ...
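Not hnn-core's current implementation, but a minimal sketch of what name-based per-drive seeding could look like; the `seed_from_drive_name` helper and the event-time parameters below are hypothetical:

```python
import hashlib

import numpy as np


def seed_from_drive_name(drive_name, base_seed=0):
    """Derive a deterministic per-drive seed from the drive's name.

    Hashing the name (combined with an optional network-level base seed)
    decouples the seed from the order in which drives are added and from
    the gids assigned to their artificial cells.
    """
    digest = hashlib.sha256(f'{base_seed}-{drive_name}'.encode()).hexdigest()
    return int(digest, 16) % (2 ** 32)  # keep within the legacy 32-bit range


# The same drive name always yields the same event times, no matter when
# (or in which order) the drive was added to the network.
rng = np.random.default_rng(seed_from_drive_name('evprox1', base_seed=42))
event_times = rng.normal(loc=63.5, scale=3.85, size=10)  # illustrative values
```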
I like the idea of having multiple test datasets. Here's a list of possible ground-truth test datasets, based on the drives/biases explored in the tutorials, that we could create. Obviously, we don't need to test every combination of drive types, so I've marked with * the examples I think make up the minimal set of necessary test datasets. Feel free to modify or add to this list.
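For reference, a regression test against one of these ground-truth datasets could look roughly like the sketch below. It assumes an early-style hnn-core API (`read_params`, `Network(params)`, `simulate_dipole`) and placeholder file names and tolerances; the exact signatures would need to match whichever release the test targets.

```python
import os.path as op

import numpy as np
import pytest

import hnn_core
from hnn_core import Network, read_dipole, read_params, simulate_dipole

# Placeholder location: ground-truth files would live with the test suite
data_dir = op.join(op.dirname(hnn_core.__file__), 'tests', 'data')


@pytest.mark.parametrize('param_fname, dpl_fname', [
    ('evoked.json', 'dpl_evoked.txt'),       # hypothetical file names
    ('gamma_ping.json', 'dpl_poisson.txt'),
])
def test_dipole_matches_ground_truth(param_fname, dpl_fname):
    """Simulated dipole should reproduce the stored GUI/legacy output."""
    params = read_params(op.join(data_dir, param_fname))
    net = Network(params)
    dpl = simulate_dipole(net)[0]  # first (and only) trial

    dpl_expected = read_dipole(op.join(data_dir, dpl_fname))
    np.testing.assert_allclose(dpl.data['agg'], dpl_expected.data['agg'],
                               rtol=0., atol=1e-8)
```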
To what extent is this issue still relevant? If it is, should we modify the issue title and move it to 0.4?
Maybe we can consolidate all of these seed + legacy mode related issues under one issue?
Agree!
This is a follow-up to #221 and related to #233, edited as we move towards a PR.
In order to write good tests for the new external drives API, we should decide on gold standard datasets. The current implementation uses HNN GUI output based on a `params`-file that generates a fixed sequence of `_ArtificialCell`s (and thus corresponding `gid`s; the `_legacy_mode` flag in `Network` is needed to match this behaviour).

Some questions to answer here, in no particular order:
- Would it be reasonable for each drive to have a unique seed (based on, e.g., its `name`)? This would allow adding drives in any order, yet retaining the event times. Alternatively, there should be a global seed for each `Network` (not the current `gid`-based seeds). The order-dependence problem is illustrated in the sketch after this list.
- Can `hnn-core` replicate GUI results? Are the current (#221) examples sufficient? Do we need more cases before we are satisfied that the new API can replace the old?
- Should we use a single `dpl.txt`-file generated using all the possible drives (and biases) turned on? Or rather create several test datasets based on realistic use cases, such as one for `evoked`, one for `poisson` (PING gamma example, includes tonic bias), one for `bursty` (possibly a new beta-example), etc.?
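To make the order-dependence concern concrete, here is a toy illustration (plain numpy, not hnn-core code) of why seeding each drive from its `gid` ties event times to the order in which drives are added: gids are handed out sequentially, so swapping the add order changes every drive's seed.

```python
import numpy as np


def make_event_times(drive_names, n_events=3):
    """Assign gids in insertion order and seed each drive from its gid."""
    event_times = {}
    for gid, name in enumerate(drive_names):
        rng = np.random.default_rng(seed=gid)  # gid-based seed
        event_times[name] = rng.exponential(scale=10., size=n_events)
    return event_times


times_a = make_event_times(['evprox1', 'evdist1'])
times_b = make_event_times(['evdist1', 'evprox1'])

# The same drive gets different event times purely because it was added
# in a different position; a name-based or network-level seed avoids this.
assert not np.allclose(times_a['evprox1'], times_b['evprox1'])
```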