
ANNUBeS: training Artificial Neural Networks to Uncover Behavioral Strategies in neuroscience
https://annubs.github.io/annubes/
Apache License 2.0

refactor: add warning message for `max_sequential` #65

Closed · gcroci2 closed this 7 months ago

gcroci2 commented 7 months ago

This stack of pull requests is managed by Graphite. Learn more about stacking.


gcroci2 commented 7 months ago

> I think this is a good solution while we are uncertain whether the algorithm works.
>
> Maybe it would be an idea to create an issue (or discussion?) to keep track of whether and how regularly it fails. We could print all settings in the warning rather than just the seed, and add a request in the warning to copy/paste it as a comment to that issue.

I remember commenting about this when the PR was ready, but somehow the description got overwritten (I suspect by Graphite). I ran a number of experiments to estimate how often it fails, and got the following results.

I generated trials 100 times, with `max_sequential` set to 4 and `ntrials` to 100, feeding a different random seed to `generate_trials` each time. Out of the 100 generated sequences, only 10 contained a violation, and each of those contained exactly one: in every case it was 5 identical modalities in a row instead of 4. I reran this experiment multiple times and got very similar, consistent results. Can we consider this an indication of how often the algorithm fails? @DaniBodor
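
For concreteness, here's a minimal sketch of that experiment. The uniform sampler is just a stand-in for the real `generate_trials` call, whose exact signature and return format I'm not assuming here:

```python
import numpy as np

MAX_SEQUENTIAL = 4
NTRIALS = 100
NREPEATS = 100
MODALITIES = ["v", "a", "av"]  # stand-in modality labels

def longest_run(seq) -> int:
    """Length of the longest run of identical consecutive items."""
    longest = current = 1
    for prev, item in zip(seq, seq[1:]):
        current = current + 1 if item == prev else 1
        longest = max(longest, current)
    return longest

violations = 0
for seed in range(NREPEATS):
    rng = np.random.default_rng(seed)
    # Stand-in for the real generate_trials(...) call with this seed.
    seq = list(rng.choice(MODALITIES, size=NTRIALS))
    if longest_run(seq) > MAX_SEQUENTIAL:
        violations += 1

print(f"{violations}/{NREPEATS} sequences exceeded max_sequential={MAX_SEQUENTIAL}")
```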

DaniBodor commented 7 months ago

> Can we consider this an indication of how often the algorithm fails?

I think it does, for those settings. Maybe we could add something so that, if violations occur above some arbitrary cutoff, we add the comment. Alternatively, I think the code I suggested here always works: I ran it a few hundred (thousand?) times while I was evaluating it and never got a single failure.

For now, definitely go with this comment, and tell me if you think it's a good idea to replace the algorithm with the one I wrote; I can probably integrate it in about 30 minutes.
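
For illustration, here's a minimal sketch of one way to enforce the limit by construction rather than checking afterwards (just the general idea, not necessarily the linked code):

```python
import numpy as np

def constrained_choice(modalities, n, max_sequential, rng):
    """Sample n modalities such that no value repeats more than
    max_sequential times in a row, by excluding the run-extending
    value once the current run hits the limit."""
    seq, run = [], 0
    for _ in range(n):
        options = list(modalities)
        if run >= max_sequential:
            options.remove(seq[-1])  # can't extend the run any further
        pick = options[rng.integers(len(options))]
        run = run + 1 if seq and pick == seq[-1] else 1
        seq.append(pick)
    return seq

rng = np.random.default_rng(0)
seq = constrained_choice(["v", "a", "av"], n=100, max_sequential=4, rng=rng)
```

One caveat: excluding an option at the limit slightly skews the marginal distribution of modalities compared to fully independent draws; whether that matters depends on the task design.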

sonarcloud[bot] commented 7 months ago

Quality Gate passed

Issues
1 New issue
0 Accepted issues

Measures
0 Security Hotspots
77.8% Coverage on New Code
0.0% Duplication on New Code

See analysis details on SonarCloud

gcroci2 commented 7 months ago

> Tell me if you think it's a good idea to replace the algorithm with the one I wrote; I can probably integrate it in about 30 minutes.

Sure, go for it :) Should we also keep track of the violated sequences and print a warning about them? Not sure it makes sense if your algorithm never fails, but it may be useful just in case. @DaniBodor
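
Something like this hypothetical helper is what I have in mind (the name, the `settings` dict, and the message format are all made up):

```python
import warnings

def check_max_sequential(seq, max_sequential, settings):
    """Warn on the first run longer than max_sequential, reporting the
    settings and the offending run so the message can be pasted into a
    tracking issue. Hypothetical helper, not the PR's implementation."""
    run_start, run = 0, 1
    for i in range(1, len(seq)):
        if seq[i] == seq[i - 1]:
            run += 1
        else:
            run_start, run = i, 1
        if run > max_sequential:
            warnings.warn(
                f"max_sequential={max_sequential} violated: run of "
                f"{seq[run_start]!r} starting at trial {run_start}. "
                f"Settings: {settings}. Please copy/paste this message "
                f"into the tracking issue.",
                stacklevel=2,
            )
            return
```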

DaniBodor commented 7 months ago

> Should we also keep track of the violated sequences and print a warning about them?

OK, will do. Until then, I think we should merge this one, and I will add on top.