Closed: billbrod closed this issue 1 year ago.
Yes, the "Checking installation" section details how to run a sample analysis, and if the "full comparison" option is checked, it also checks whether any files are missing between the sample analysis output and the example dataset.
Gotcha, so there are no other tests, then? I would like to see some automated tests checking at least that the package installs and that a reduced version of the sample analysis (see below) runs. Would that be possible to add?
Additionally:
- [ ] It looks like the test checks whether anything other than `subject_example_data` exists in the fRAT directory. If there is, it raises an error (screenshot below, ignore the random white box). In this case, there were just two zip files in that directory (the ones downloaded from the OSF), not other subjects. The docs should probably make clear that nothing else is allowed in that directory while the tests are being run.
Actually, my laptop shut down about 90 minutes into the sample analysis (I had walked away from my computer, so I don't know exactly where it was; not sure it was related to this). Do logs get created somewhere so I can check where it was? Is it possible to resume from where it was?
As shown in the flowchart in the paper, there are 3 main steps to the analysis: creation of statistical maps, ROI analysis, and figure creation & statistics. At the start of each step, a file (such as "analysis_log.toml") is created showing the settings used for that step; however, there is no log showing how far the program ran. Based on which folders have been created, though, it is possible to see which steps have completed. Each step can be run independently in the normal analysis given the correct inputs, but this is not possible in the sample analysis without manually updating the test_config.toml file. The terminal output for the statistics step is already saved so that the results for a specific ROI can easily be found, but I will add an entry to the future additions page of the documentation to save a log of all terminal output for cases such as this.
- [ ] Is the comparison just that the same files are present? Do you use something like `cmp` to test whether the files are identical? It would be good to check that they're identical (or, at least, good enough, if exact reproducibility cannot be guaranteed due to stochasticity or precision issues).
Originally I was using `filecmp` to test whether the files are identical; however, its byte-by-byte comparison flagged many files as different due to precision issues. So instead I changed to just checking that all files in the example data were also created by the sample analysis.
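The precision issue described above can be demonstrated with a small sketch: two numerically equivalent arrays differ at the byte level, so `filecmp` reports a mismatch while a tolerance-based comparison does not (the files and perturbation here are illustrative, not fRAT's actual outputs):

```python
import filecmp
import tempfile
from pathlib import Path

import numpy as np

tmp = Path(tempfile.mkdtemp())
a = np.array([1.0, 2.0, 3.0])
b = a + 1e-12  # tiny perturbation, e.g. from a different CPU or BLAS build
np.save(tmp / "a.npy", a)
np.save(tmp / "b.npy", b)

# Byte-by-byte: reports a difference despite numerical equivalence.
byte_identical = filecmp.cmp(tmp / "a.npy", tmp / "b.npy", shallow=False)
# Tolerance-based: treats the files as equal within floating-point noise.
numerically_close = np.allclose(np.load(tmp / "a.npy"), np.load(tmp / "b.npy"))
print(byte_identical, numerically_close)  # False True
```

A tolerance-based check like this is one possible middle ground between "same files are present" and strict byte identity.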
- [ ] What happens if "full comparison" is not checked? It just checks that the analysis runs? (From reading the instructions, it would be easy to miss that "full comparison" should be checked; I almost did.)
- [ ] The sample analysis requires a fairly large download (2.3 GB for the input, another 3.3 GB for the check) and takes a while to run (>90 min, using all 12 of my laptop's cores for much of that time). Is it possible to include a more reduced version just to check that the install succeeded? For example, only a single subject, 1 or 2 ROIs, etc.
- [ ] It would also be nice to give users an expectation of how long it will take, how much space it will use, and how many resources it will require.
- [ ] It's unclear if the user should look at the output of the sample analysis (and if so, what they should look for) or whether the fact that it runs to completion is sufficient. Could you update the documentation to provide a bit of guidance there?
There are two folders on OSF because one was just to test that the sample analysis ran to completion, whereas the other also needed to be downloaded to check whether any files were missing. The number of participants and files was chosen to prevent the linear mixed model from flagging convergence warnings due to the random-effects covariance being singular, with the tradeoff being much larger file sizes. On balance, I agree that it would be less confusing to have just one folder to download, reduce the number of participants & files, and have the file comparison always be run. I'll update the documentation to warn that warnings will be flagged at the statistics step.
I'm just creating a new 1.3 GB example_data folder to replace the two current ones. As there are only 4 files for each participant, the analysis step will now only use a maximum of 4 cores (in the real analysis, the user is able to choose the number of cores to use). While it is not possible to reduce the number of ROIs during the analysis step, as they are analysed simultaneously, I've significantly reduced the number of ROIs to plot and calculate statistics for in the sample analysis.
Version 1.3.0 has now been released and should address these issues. The sample analysis now takes roughly 15 minutes, and the download is significantly smaller. The file comparison is run automatically, so there is only one file to download from OSF. The instructions for running the sample analysis have also been simplified and should be clearer.
The installation tests work for me without a problem! It might be worth adding a note to the section about the FSL requirement to say: "Make sure that `$FSLDIR` is set and `$FSLDIR/bin` is on your path" (and that `$FSLOUTPUTTYPE` is set as well?). I didn't add that to my `.bashrc` and so had to do it manually. I don't think it's necessary, because fRAT throws informative errors if it's an issue, but it might be nice to have it listed in the docs.
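For readers whose shell the installer doesn't configure, a sketch of the standard FSL setup lines looks like this (the install path is an assumption; adjust it to wherever fslinstaller placed FSL):

```shell
# Standard FSL environment setup; /usr/local/fsl is an assumed install path.
export FSLDIR=/usr/local/fsl
export PATH="$FSLDIR/bin:$PATH"
# fsl.sh sets $FSLOUTPUTTYPE (among other variables) for you:
source "$FSLDIR/etc/fslconf/fsl.sh"
```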
I'm going to close this issue now, and open another one to discuss automated tests.
What did you need to add to your bash profile? I've tested fRAT with both Ubuntu 18.04 and 22.04 (using WSL2) and haven't needed to add anything to my path. The fslinstaller set those up for me.
Yes, the fslinstaller will do it for you. I use a different shell (`fish`), and the installer didn't try to modify my config files for that. Which is why I don't think it's that important: most people will just let the installer do its thing, and if they're using a non-standard shell, they probably know what they're doing.
Are there any tests beyond the "Checking installation" section of the documentation? Are there any automated tests?
I understand the package is focused on the GUI, which might be difficult to develop automated tests for, but would it be possible to put together tests to make sure e.g., installation completes and a sample analysis runs?
https://github.com/openjournals/joss-reviews/issues/5200