Closed · TomMaullin closed this 5 years ago
Hello @TomMaullin! Thanks for updating the PR.
In nidmfsl/fsl_exporter/fsl_exporter.py, the following are the PEP8 issues:
Line 1249:80: E501 line too long (88 > 79 characters)
Line 1259:80: E501 line too long (249 > 79 characters)
Line 1261:80: E501 line too long (143 > 79 characters)
Line 1262:31: E231 missing whitespace after ','
Line 1262:74: E231 missing whitespace after ','
Line 1263:31: E231 missing whitespace after ','
Line 1263:80: E501 line too long (123 > 79 characters)
Line 1263:116: E231 missing whitespace after ','
Line 1264:31: E231 missing whitespace after ','
Line 1264:76: E231 missing whitespace after ','
Line 1264:80: E501 line too long (83 > 79 characters)
Line 1267:80: E501 line too long (114 > 79 characters)
Line 1268:80: E501 line too long (145 > 79 characters)
Line 1311:37: W291 trailing whitespace
Line 1318:20: E714 test for object identity should be 'is not'
Line 1321:80: E501 line too long (88 > 79 characters)
Line 1333:31: E231 missing whitespace after ','
Line 1333:74: E231 missing whitespace after ','
Line 1336:80: E501 line too long (110 > 79 characters)
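For reference, two of the recurring issue classes above (E714 and E231) and their fixes, sketched as illustrative Python snippets rather than the PR's actual code:

```python
# E714: test for object identity should be 'is not'.
value = None
# Flagged style:   if not value is None: ...
# Preferred style:
if value is not None:
    print("value is set")

# E231: missing whitespace after ','.
coords_bad = [19,40,18]    # flagged by pycodestyle
coords_ok = [19, 40, 18]   # preferred; identical value
```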
As discussed by email, the voxel-space to subject-space transformation currently does not set up the origin. To do so, the code will have to be updated like in this snippet (from http://nipy.org/nibabel/coordinate_systems.html#applying-the-affine and #114):
>>> import nibabel as nib
>>> import numpy as np
>>> img = nib.load('filtered_func_data.nii.gz')
>>> voxToWorld = img.affine
>>> M = voxToWorld[:3, :3]
>>> abc = voxToWorld[:3, 3]
>>> M.dot([19, 40, 18]) + abc
where [19, 40, 18] are the voxel-space coordinates we want to transform.
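The same rotate/zoom-then-translate transform can also be written in one step with homogeneous coordinates. Here is a self-contained sketch using a made-up affine (2 mm isotropic voxels; the real affine comes from the image, as in the snippet above):

```python
import numpy as np

# Toy voxel-to-world affine (assumption for illustration only):
# 2 mm isotropic voxels, world origin at voxel (45, 63, 36).
voxToWorld = np.array([[2., 0., 0.,  -90.],
                       [0., 2., 0., -126.],
                       [0., 0., 2.,  -72.],
                       [0., 0., 0.,    1.]])

M = voxToWorld[:3, :3]    # rotation/zoom part
abc = voxToWorld[:3, 3]   # translation (origin) part

# Transform as in the snippet above: rotate/zoom, then translate.
world = M.dot([19, 40, 18]) + abc

# Equivalent one-step form: append 1 and multiply by the full affine.
world_h = voxToWorld.dot([19, 40, 18, 1])[:3]

assert np.allclose(world, world_h)
print(world)  # [-52. -46. -36.]
```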
(@TomMaullin: I've slightly updated your comment above)
Hi @cmaumet ,
This PR has now been updated with the intercept fix you found and the tests all pass!!... ish. The only remaining errors, visible in the Travis log, are rounding errors in the coordinates. Should we merge it, or did you also want to address the peaks table in this PR? Or perhaps we should update the tests as well?
(@cmaumet ah sorry yes! I should have put that)
This PR has now been updated with the intercept fix you found and the tests all pass!!... ish. The only remaining errors, visible in the Travis log, are rounding errors in the coordinates.
Cool! We need to find a way to amend the tests so that they pass before we merge. What would you suggest? Note: can you first have a look at my comment above and see if this helps?
did you also want to address the peaks table in this PR?
Maybe we can add the peak table in here as well since the piece of code we removed was also producing this file? How does this sound to you?
Cool! We need to find a way to amend the tests so that they pass before we merge. What would you suggest? Note: can you first have a look at my comment above and see if this helps?
Sure, I will address your above comment first thing tomorrow morning! Once this has been addressed, if the tests are still failing, the simplest fix would seem to be either to manually change the coordinates in the ground truth file for the tests that are failing (e.g. change the coordinates here), so long as we agree the errors are only caused by rounding and not something more serious, or to replace the entire ground truth files with an updated version. I am happy to do either: replacing the whole file makes more sense to me, but just changing the text might be a lot easier on my VM!
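If the mismatches really are rounding errors, another option (a sketch, not the existing test code) would be for the tests to compare coordinates numerically with a small tolerance rather than as exact serialized strings:

```python
import numpy as np

# Hypothetical values for illustration: ground-truth coordinates vs.
# newly computed ones that differ only in the last decimal places.
expected = [-56.35, -41.30, 16.45]
computed = [-56.349999, -41.300001, 16.450002]

# An exact string comparison of the serialized values would fail here,
# but a numerical comparison with an absolute tolerance passes.
assert np.allclose(expected, computed, atol=1e-3)
print("coordinates match within tolerance")
```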
Maybe we can add the peak table in here as well since the piece of code we removed was also producing this file? How does this sound to you?
Ah okay, I didn't realize we had deleted the piece of code that originally produced this file - where was it originally produced?
either to manually change the coordinates in the ground truth file for the tests that are failing (e.g. change the coordinates here), so long as we agree the errors are only caused by rounding and not something more serious, or to replace the entire ground truth files with an updated version. I am happy to do either: replacing the whole file makes more sense to me, but just changing the text might be a lot easier on my VM!
If the tests still fail, +1 to modify only the coordinates. The ground truth files were generated semi-manually and replacing them completely could inadvertently introduce changes that we don't really want.
Ah okay, I didn't realize we had deleted the piece of code that originally produced this file - where was it originally produced?
The call to FSL's cluster (cf. here) was actually generating both the cluster and the peak table.
@TomMaullin: if you can rebase on master, some of the PEP8 issues above will disappear as they have been addressed in #134.
If the tests still fail, +1 to modify only the coordinates. The ground truth files were generated semi-manually and replacing them completely could inadvertently introduce changes that we don't really want.
Ah okay, yes sure! I will do this and make a PR to nidmresults-examples!
The call to FSL's cluster (cf. here) was actually generating both the cluster and the peak table.
Ah right I see... so would the output for the voxel table in first level clusterwise analysis also only be in voxel space at the moment?
@TomMaullin: if you can rebase on master, some of the PEP8 issues above will disappear as they have been addressed in #134.
Ah okay, yes sure. I have merged master but not rebased yet. I can't remember... when I rebase do we lose this conversation on the PR?
Hi @cmaumet ,
In regards to the currently missing peak mm table I am a little confused... the datapack fsl_full_examples001 is a first-level clusterwise analysis, which is the use case we have edited here, and its ground truth file clearly contains peak mm coordinates (see here for example).
However, following our edits here, the Travis logs show no errors saying that these coordinates are missing (the only errors are from the cluster coordinates that have changed - see for example here). If we have removed the only code that creates the peak subject-space mm table, I am confused as to how the Travis tests haven't reported this error. Could it be that the voxel mm table should already exist anyway?
Ah okay, yes sure! I will do this and make a PR to nidmresults-examples!
Great! Can you make sure that we always have the same number of digits for the coordinates? Currently we have mixed numbers, e.g. [-56.35, -41.3, 16.45].
Ah right I see... so would the output for the voxel table in first level clusterwise analysis also only be in voxel space at the moment?
Yes exactly. Similarly to what we had for clusters, we have the peak table in standardised space and in voxel space but not in subject space: cf. https://github.com/incf-nidash/nidmresults-examples/tree/master/fsl_thr_clustfwep05.
Ah okay, yes sure. I have merged master but not rebased yet. I can't remember... when I rebase do we lose this conversation on the PR?
Okay, a merge is fine. We don't lose the discussion when rebasing, but the commits are modified, so comments on the code (e.g. a code review) can be hidden. But no need to do this anymore since you already merged master in.
If we have removed the only code that creates the peak subject-space mm table, I am confused as to how the Travis tests haven't reported this error. Could it be that the voxel mm table should already exist anyway?
Is it possible that you still have lmax_zstat1_sub.txt from a previous run of the tests? Can you check in test/data/nidmresults-examples/fsl_full_example001?
Great! Can you make sure that we always have the same number of digits for the coordinates? Currently we have mixed numbers, e.g. [-56.35, -41.3, 16.45].
Ah yes sure! I'll do this.
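One way to serialize coordinate triples with a uniform number of decimals is sketched below. The `format_coords` helper is hypothetical, made up for illustration; the exporter's real serialization code (and the 3-significant-figure convention settled on later in this thread) may differ:

```python
def format_coords(coords, decimals=2):
    """Serialize a coordinate triple with a uniform number of decimals.

    Hypothetical helper for illustration only; not the exporter's
    actual formatting code.
    """
    return "[ %s ]" % ", ".join("%.*f" % (decimals, c) for c in coords)

# Mixed precision in, uniform precision out:
print(format_coords([-56.35, -41.3, 16.45]))
# -> [ -56.35, -41.30, 16.45 ]
```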
Is it possible that you still have lmax_zstat1_sub.txt from a previous run of the tests? Can you check in test/data/nidmresults-examples/fsl_full_example001?
This is happening on Travis, so there can't be a file from a previous run unless one has accidentally been included in the nidmresults-examples repository.
Actually, never mind, I have been able to get a Peak coordinate error locally... not sure why this isn't happening on Travis.
This is happening on Travis, so there can't be a file from a previous run unless one has accidentally been included in the nidmresults-examples repository.
It can also happen on Travis: to avoid downloading the data again and again, we use a cache to save data from one run to the next (cf. here).
It can also happen on Travis: to avoid downloading the data again and again, we use a cache to save data from one run to the next (cf. here).
Oh wow! I didn't realise you could do this - does this reduce the git lfs usage then?
Oh wow! I didn't realise you could do this - does this reduce the git lfs usage then?
Yes, exactly. In the first weeks we were using these test data on Travis CI we exploded our git-lfs quota just by submitting a couple of pull requests...
Yes, exactly. In the first weeks we were using these test data on Travis CI we exploded our git-lfs quota just by submitting a couple of pull requests...
Ah thank you! I didn't realize you could do this! I have the same problem with another repo!
Hi @cmaumet ,
I have updated the code to recreate the voxel table and I have updated the nidmresults-examples repository here. I think the tests now all pass (I tried on a clean setup) but I am still wary. Let me know if everything seems correct to you.
@TomMaullin: Thanks for all this! Can you update the .travis.yml to run against your updated branch of nidmresults-examples? See https://github.com/incf-nidash/nidmresults-fsl/blob/master/.travis.yml#L34 (you will also have to add a remote linked to your fork of nidmresults-examples). This way we can check whether the tests pass on Travis or not.
Hi @cmaumet ,
I am having some trouble updating the tests to use my local branch instead of the incf-nidash/master branch. I am unsure how the caching on this repo's Travis tests works... do my edits on the .travis.yml seem correct to you?
Hi @TomMaullin. That's a good start! You also need to set the NIDM_EX_BRANCH variable in https://github.com/incf-nidash/nidmresults-fsl/blob/master/.travis.yml#L11: NIDM_EX_BRANCH=CoordinateUpdate, and to change origin into TomMaullin in https://github.com/TomMaullin/nidmresults-fsl/blob/510ea08d10fa65d183d7680d33bd2e8550c6c0bd/.travis.yml#L36 and https://github.com/TomMaullin/nidmresults-fsl/blob/510ea08d10fa65d183d7680d33bd2e8550c6c0bd/.travis.yml#L39. The best thing would be to create a new variable NIDM_EX_REMOTE and set it to TomMaullin (so that later on we can easily set it back to origin).
To give you a bit of context on this: when we started to use git-lfs, it was not super robust and we were often getting timeout errors. This meant we had to try to download the data multiple times before it finally worked. This is what the travis_retry git reset --hard origin/${NIDM_EX_BRANCH} lines are doing. We might be able to remove those, but let's keep that for another PR. Let me know if that makes more sense.
Hi @cmaumet ,
Ah okay I see now yes! Thank you! The tests now pass - ready to merge when you are!
Cool! Now that we have confirmation that the tests pass, let's merge step-by-step: update the .travis.yml in the current PR to run against master of nidmresults-examples. This will ensure that we are in a stable state when we merge.
Okay, sure! I am unsure though: are you merging #105 on nidmresults-examples, or am I?
I've just reviewed https://github.com/incf-nidash/nidmresults-examples/pull/105. When we can, it's safer to work in pair: 1 submitter / 1 reviewer.
Hi @cmaumet , This PR has now been updated to work with the 3 significant figures in nidmresults-examples. After the PR on nidmresults-examples has been merged, I am happy to merge this when you are!
Hi @TomMaullin! That's great! Thank you! I've just merged the companion PR: incf-nidash/nidmresults-examples#105.
Can you update the current .travis.yml to run against master of nidmresults-examples? Once this is done, let's wait for the tests to complete, and then I'm happy to merge!
Hi @cmaumet ,
This has now been done and the tests pass! I shall now merge!
This PR removes the call to cluster made for a first-level clusterwise analysis and replaces it with Python/nibabel functions.
This update was done in pair programming with @cmaumet.