Tests for AFNI code?
Project Description
Yes, they do exist. Currently our reported coverage from Codecov is a whopping 5%. I'd like to increase this. Let's aim to increase the coverage by 8.2%!
Since AFNI is a blend of several programming languages, testing the various programs as executables seems like a sensible way to expand the tests and give some basic guarantee of stable functionality across the software suite. I have written some code to facilitate this and would now love some help increasing the test coverage and improving the approach. I have added a few issues to the AFNI GitHub repo (with the "Testing" label) as some example hack items, in addition to the general goal of adding more tests.
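The executable-level approach described above can be sketched as a small pytest-style check. This is a minimal illustration, not the project's actual test code: `run_tool` is a hypothetical helper, and `3dinfo` simply stands in for any AFNI command-line program (it may not be installed where you run this, so the test skips gracefully in that case).

```python
# Hedged sketch of testing an AFNI program as an executable.
# Assumptions: pytest-style test; "3dinfo" is used as an example AFNI
# tool and may not be on PATH; "run_tool" is a hypothetical helper.
import shutil
import subprocess
import sys

def run_tool(cmd, *args):
    """Run a command-line program, capturing stdout/stderr as text."""
    return subprocess.run([cmd, *args], capture_output=True, text=True)

def test_tool_runs():
    # Skip quietly if AFNI is not installed in this environment.
    if shutil.which("3dinfo") is None:
        return
    result = run_tool("3dinfo", "-help")
    # A basic "smoke test": the tool should at least exit cleanly.
    assert result.returncode == 0
```

Smoke tests like this won't validate scientific correctness, but they catch crashes, missing shared libraries, and broken command-line parsing across the whole suite cheaply.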
Skills required to participate
Lots of different parts of this so all are welcome.
Beginners: For beginners there are some opportunities to help write documentation and learn a little about software testing, continuous integration, containerization, and the AFNI neuroimaging analysis software suite in general.
Neuroimaging scientists: With some basic Python/bash skills you will be able to add tests for any tool. Although not required, the more domain-specific knowledge here the better. Having intuitions for what each tool "should" be doing, without reading through docs etc., will speed up the process.
Pythonistas: Help from more advanced Python developers would be greatly appreciated (even just dropping by to give some advice). There are some details in the execution of the tests - for example, datalad is used to store saved sample output data - that could themselves do with testing. Such tests would likely require experience with mocking and comfort with git/git-annex/datalad.
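To make the mocking idea above concrete, here is a hedged sketch of unit-testing datalad-backed plumbing without touching the network. `fetch_sample` is a hypothetical helper (not an actual AFNI test-suite function); the only real interface assumed is the `datalad get -d DATASET PATH` command line, which the test intercepts with `unittest.mock`.

```python
# Hedged sketch: unit-testing code that fetches sample output data via
# datalad, with the datalad invocation mocked out.
# Assumption: "fetch_sample" is a hypothetical helper for illustration.
import subprocess
from unittest import mock

def fetch_sample(dataset, path):
    """Hypothetical helper: ensure a sample file is present by shelling
    out to the datalad CLI (`datalad get`)."""
    subprocess.run(["datalad", "get", "-d", dataset, path], check=True)
    return path

def test_fetch_sample_invokes_datalad():
    # Patch subprocess.run so no real datalad call (or download) happens.
    with mock.patch("subprocess.run") as fake_run:
        out = fetch_sample("/data/afni_ci", "sample_out/epi.nii.gz")
    fake_run.assert_called_once_with(
        ["datalad", "get", "-d", "/data/afni_ci", "sample_out/epi.nii.gz"],
        check=True,
    )
    assert out == "sample_out/epi.nii.gz"
```

Mocking at this boundary keeps the tests fast and runnable in CI containers that have no annexed data checked out.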
Integration
Milestones:
Preparation material
An overview of adding the tests is here
Link to your GitHub repo
The afni codebase
Communication
The Mattermost channel. Also use the "Testing" label in the GitHub issues.