The machine learning community holds "replication challenges" where people get together to replicate published models and their results. For example, see: https://paperswithcode.com/rc2022
These replications are then submitted to ReScience C for review and publication. See: http://rescience.github.io/read/
Some awards are also given out as extra credit to participants.
We thought doing something similar for computational neuroscience models may be very useful to the community:
it'll help replicate/validate existing models
it'll act as an educational project where people can practice the complete modelling pipeline from start to finish
it'll help participants gain credit for their work by submitting their replications to ReScience C for review
it'll help bring the community together and improve connections: we can reach out to the original modellers and simulator developers to help participants in their replication efforts
it'll help improve our tools: while going through the replication pipeline, we may come across bugs or missing features that simulator developers can then address to improve their tools
Hey everyone, I think this is a great idea! However, I see two potential issues:
ML seems to draw more people than comp neuro, so we might end up with a low turnout, and the effort we put into setting this up might not pay off. To mitigate this risk, we could approach PIs who are likely to participate beforehand and involve them somehow, making sure each has someone who participates.
ReScience has lately been very slow when it comes to reviewing (at least in my experience), so we should be aware of that and communicate with them beforehand.
Please provide feedback on this idea in comments.