Closed: @TomMaullin closed this PR 5 years ago.
Hello @TomMaullin! Thanks for updating the PR.
Cheers! There are no PEP8 issues in this Pull Request. :beers:
Hi @cmaumet ,
This PR is now ready for review. I shall upload an example dataset to a PR on the NIDMResults-Examples repository this afternoon to show it now functions correctly, and will comment on this PR again when this is done.
The Travis tests pass, but the PEP8 bot still has some complaints. These are not in code I have touched, and I am wary of changing those lines as I don't have full knowledge of what is going on in the rest of the code.
Hi @cmaumet ,
Just to confirm, an example dataset for testing this PR is now available in the below PR:
https://github.com/incf-nidash/nidmresults-examples/pull/101
Hi @TomMaullin! Thanks a lot for this update. Can you provide a short description (and possibly an itemized list) of the changes included in this PR? Thanks!
Hi @cmaumet , yes sure!
The issue this PR aims to address is issue #118. It's probably easiest to explain the cause of issue #118 by first explaining what the master branch currently does when recording contrasts. For each excursion set:

- it records the contrast number in the variable `con_num` (e.g. for `ExcursionSet_T002` it records `con_num` as 2),
- it searches the `design.fsf` for the contrast's name, using the variable `con_num` to identify it (see line 249 of `fsl_exporter.py`), and records it as `self.contrast_names_by_num[con_num]`,
- it searches the `design.fsf` for the contrast's weight matrix, again using `con_num` to identify the contrast.

This works very well for T contrasts. The problem is that it doesn't account for F contrasts, which are not recorded in the same way in the `design.fsf`. Say there are three excursion sets: `ExcursionSet_T001`, `ExcursionSet_T002` and `ExcursionSet_F001`.
For T001, the following happens in the code:

- `con_num` will be assigned to 1.
- The code looks up what the contrast with `con_num=1` was called, say `T contrast 1`, by searching the `design.fsf`.
- The code looks up what the weight matrix of the contrast with `con_num=1` was, say `[1 0]`.

For T002:

- `con_num` will be assigned to 2.
- The code looks up what the contrast with `con_num=2` was called, say `T contrast 2`, by searching the `design.fsf`.
- The code looks up what the weight matrix of the contrast with `con_num=2` was, say `[0 1]`.

For F001:

- `con_num` will be assigned to 1.
- The code looks up what the contrast with `con_num=1` was called. This will again be `T contrast 1`, as searching the `design.fsf` in the way the code currently does only gives T contrasts.
- The code looks up what the weight matrix of the contrast with `con_num=1` was. Again this will be `[1 0]`, which is the weight matrix of the T contrast and probably not the weight matrix of the F contrast we are interested in.

The F contrasts just end up being recorded as exact copies of the T contrasts, i.e. the first F contrast is an exact duplicate of the first T contrast, the second F contrast is an exact duplicate of the second T contrast, and so on. This is clearly incorrect behavior.
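The lookup collision described above can be sketched as follows. This is a toy illustration, not the exporter's actual code, and the `design.fsf` key names used here are assumptions about FEAT's output format:

```python
# Toy sketch of the buggy con_num lookup. Only T contrast entries
# ('conname_real') exist in this simplified design.fsf fragment, so an
# F contrast that also gets con_num=1 picks up the T contrast's data.
import re

DESIGN_FSF = """
set fmri(conname_real.1) "T contrast 1"
set fmri(con_real1.1) 1.0
set fmri(con_real1.2) 0.0
set fmri(conname_real.2) "T contrast 2"
set fmri(con_real2.1) 0.0
set fmri(con_real2.2) 1.0
"""

def contrast_name(con_num):
    # Searching by con_num alone cannot distinguish T from F contrasts:
    # the query for F001 (con_num=1) matches the T contrast 1 entry.
    match = re.search(
        r'set fmri\(conname_real\.%d\) "(.*)"' % con_num, DESIGN_FSF)
    return match.group(1) if match else None

print(contrast_name(1))  # ExcursionSet_T001 -> "T contrast 1" (correct)
print(contrast_name(1))  # ExcursionSet_F001 -> "T contrast 1" (wrong: duplicate)
```

With more F contrasts than T contrasts, the equivalent lookup simply finds no match at all, which is the error case mentioned below.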
This actually happens for the test dataset `fsl_con_f`. You will see that, when exported, the F contrast recorded for this pack is an exact duplicate of the T contrast. The reason this probably hasn't been noticed before is that, in this dataset, the F contrast genuinely was a duplicate of the T contrast!
The current code does not record F tests correctly. This can easily be verified by running a case with more F tests than T tests: when there is no T contrast with the same `con_num` as the F contrast, the code errors and no export is created.
It is also worth noting that, as the F contrast weight matrix was previously not recorded, degrees of freedom for F tests could not be calculated either (since the rank of the contrast weight matrix could not be correctly computed).
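The rank computation mentioned here can be done directly with NumPy; a minimal sketch (the matrix and variable names are illustrative, not taken from the exporter):

```python
import numpy as np

# Weight matrix for an F contrast spanning the two T contrasts [1 0] and [0 1].
f_contrast = np.array([[1.0, 0.0],
                       [0.0, 1.0]])

# The numerator degrees of freedom of an F test equal the rank of its
# contrast weight matrix: here 2, whereas a single T contrast row gives 1.
numerator_dof = np.linalg.matrix_rank(f_contrast)
print(numerator_dof)  # 2
```

This is why the degrees of freedom could not be computed while the F weight matrix went unrecorded: there was no matrix to take the rank of.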
To rectify this I have changed the following:
I hope this explains these changes well. Please let me know if you have any further questions/feedback!
Hi @TomMaullin! Thanks for the detailed explanation. This looks like an important bug fix. Thanks for tracking this down and submitting this update! Can you rerun the Travis tests to make sure that they still pass now that we have updated nidmresults-examples to include your multi-F-contrast example?
(Note: in the future, if there is another instance where you want to test against a branch of the nidmresults-examples repository, you can update the `NIDM_EX` variable in the `.travis.yml`.)
Hi @cmaumet ,
I have rerun the Travis tests and encountered a slight issue. The `fsl_con_f_multiple` dataset was created using FSL 5.0.10, but I hadn't realized until now that the Travis tests run FSL 5.0.8. For this reason, when the exporter tests are run on `fsl_con_f_multiple_130` they fail, but when they are run on a computer with FSL 5.0.10 installed they pass.
To fix this I am going to do the following:

- Replace the current `fsl_con_f_multiple_130` on the `NIDMResults-examples` repository with one obtained from running FSL 5.0.8 on the same `design.fsf`.
- Open an issue (Upgrade to FSL 5.0.10 #103) with an `fsl_5.0.10` branch containing the FSL 5.0.10 version of `fsl_con_f_multiple`.

Is this plan okay by you? Do you have any alternative suggestions?
Hi @TomMaullin! Thanks for looking into this!
> For this reason, when the exporter tests are run on fsl_con_f_multiple_130 they fail but when they are run on a computer with FSL 5.0.10 installed they pass.
Can you clarify what happens when the tests fail? Which error message?
> Is this plan okay by you? Do you have any alternative suggestions?
Your plan sounds good! We can then deal with integration with FSL 5.0.10 in another PR.
Discussed with @TomMaullin today:
The tests here are failing because the `fsl_con_f_multiple` example was created with FSL 5.0.10 and the tests are run using FSL 5.0.8 (from NeuroDebian). When the `cluster` command is called (based on the log created with 5.0.10), it fails with FSL 5.0.8.
It seems fragile to rely on the `cluster` command inside the FSL exporter (using the command extracted from the log), as the current version of FSL might differ from the one used to do the analysis.
Our solution is to remove the call to `cluster`, cf. #114. #114 will have to be solved before we go on with this.
Hi @cmaumet ,
This PR has now been updated to work with the 2.00 update changes you made recently. However, I am struggling to address the issue created by the call to `cluster`, as the `fsl_con_f_multiple` example is no longer in the nidmresults-examples repository (and the tests take too much memory to run on my laptop at the hackathon).
I was under the impression that this test was going to be temporarily removed from the test script (e.g. maybe by downloading `nidmresults-examples` and then deleting the folder `fsl_con_f_multiple` via a patch before running the tests in the `.travis.yml`), but it is no longer in the examples repository. I can't quite remember fully... I know at one point we did talk about moving it... does it now live on a different branch (i.e. should I now point the Travis tests somewhere else)? Or has the entire examples repository been updated with 2.0 as well?
@TomMaullin: Now that we have merged #135 and #134. Can you rebase on master
and put back fsl_con_f_multiple
test (i.e. revert https://github.com/incf-nidash/nidmresults-fsl/pull/124/commits/a3a7ade329eef16a5f0ef0d6d6c2e5479e199132) to see if the tests now pass?
Hi @cmaumet ,
Apologies for the delay in fixing this PR. The hardcoded column numbers added in some of our last commits aren't the same for the F contrast peak and column tables, so I have had to spend a bit of time changing the code to automatically look for column headers to work out which column is which. I can now confirm, however, that all tests pass (including the `f_con_multiple_130` test), there are no PEP8 issues, and the code has been updated with the changes made in the master branch. This PR is now ready for review!
Please let me know what you think/if you have any feedback!
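The header-based column lookup described above can be sketched like this. It is a hedged illustration only: the table layout and column names are assumptions, not the actual FSL output format:

```python
# Toy sketch: find columns by header name rather than by hardcoded index,
# so tables whose column order differs (e.g. T vs F peak tables) still work.
import csv
import io

TABLE = """Cluster Index\tVoxels\tMAX\tMAX X (vox)\tMAX Y (vox)\tMAX Z (vox)
1\t120\t5.3\t10\t20\t15
"""

def column(table_text, name):
    rows = list(csv.reader(io.StringIO(table_text), delimiter="\t"))
    header, data = rows[0], rows[1:]
    idx = header.index(name)  # raises ValueError if the header is absent
    return [row[idx] for row in data]

print(column(TABLE, "MAX"))  # ['5.3']
```

Looking up `header.index(name)` once per table makes the parser robust to reordered columns, at the cost of failing loudly when an expected header is missing, which is preferable to silently reading the wrong column.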
The aim of this PR is to fix issue #118 by displaying F contrasts correctly.