ljchang opened this issue 8 years ago
It's a bug indeed. This is related to an nltools issue: https://github.com/ljchang/neurolearn/issues/43
Fixed the bug in nltools.
Thanks! I've upgraded nltools and now it works.
Hi Anton, I tested this on both the dev and production servers, and for some reason on the production server the tests don't seem to be working. The r values are 0, where they seem to be working correctly on the dev server. Did I test it too soon?
No, this is not OK and you didn't test too soon.
The correlation values for your test in production are too small and rounded to 0:
r: 0.002890887232158625 id: 3393 name: pain ALE
r: -0.0011885438298766988 id: 3391 name: cognitive control ALE
r: 0.001298994551741561 id: 3392 name: negative affect ALE
Whereas the same test on dev gives the following correlations:
r: 0.21919474646246498 id: 3393 name: pain ALE
r: 0.1107935112067798 id: 3391 name: cognitive control ALE
r: 0.12110203389266345 id: 3392 name: negative affect ALE
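(For context, these r values are, as far as I understand, voxelwise spatial correlations between the model weight map and each meta-analysis image. Below is a rough standalone sketch of that kind of comparison, assuming both images are already in the same space; the file names are placeholders, not the actual test code.)

```python
import nibabel as nib
import numpy as np

# Placeholder file names; in the real test these would be the model weight map
# and one of the ALE meta-analysis images from NeuroVault.
model_img = nib.load('pain_ridge_weightmap.nii.gz')
meta_img = nib.load('pain_ALE.nii.gz')

model = model_img.get_fdata().ravel()
meta = meta_img.get_fdata().ravel()

# Crude in-brain mask: keep voxels that are nonzero and finite in both images.
mask = (model != 0) & (meta != 0) & np.isfinite(model) & np.isfinite(meta)

r = np.corrcoef(model[mask], meta[mask])[0, 1]
print('r:', r)
```

If the images come through garbled on one environment (e.g., mostly zeros or NaNs), near-zero numbers like the production values above would not be surprising.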
I'll track the source of this issue.
Not sure what's going on. It's pretty inconsistent. When I test it on some images it seems to work and on others it is zero. Let me know if you think it might have something to do with the similarity function and I can look into it more.
@burnash Have you noticed this happening on any of your tests? It seems like it is only happening for me on the Shackman meta-analysis images at the moment.
No, I haven't noticed it yet, but now I'm trying to reproduce it locally with your input data
@ljchang Very strange results so far. I can't reproduce the zero correlation on my laptop: it gives me a non-zero correlation for the pain ridge model tested on the Shackman meta-analysis, while I get zero for the same data in both production and dev (the same nltools version in every environment).
@burnash That is very strange. Also, I looked at older tests using that same dataset and it looked like it was working properly. It's the only dataset that seems to be exhibiting this behavior at the moment. It's also the one I use the most frequently to test the models.
I wonder if the data got corrupted in the server-side cache. Is it possible to empty all of the NeuroVault data and see if it works after we redownload it? We could try it on the dev server first. Not sure if you've already tried this.
@ljchang That's right, I already emptied the cache, but it made no difference. I also noticed that prior tests (more than a month ago) didn't have zero correlation.
@burnash Ok, I think I figured it out. It looks like those particular images were updated by the authors on 3/27/15 and now, for some strange reason, they don't load properly with nltools. That explains why it used to work but doesn't anymore, and also why it is selective to this particular dataset: http://neurovault.org/collections/474/
Now on to the next problem: I have no idea why it isn't loading correctly. It works fine with nibabel.load(), but not with Brain_Data():
```python
import glob
import os

from nltools.data import Brain_Data

# base_dir is the directory holding the downloaded collection images
dat = Brain_Data(glob.glob(os.path.join(base_dir, '*updated_*.nii.gz')))
dat.plot()
```
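For comparison, here is a quick sanity check one could run on the same files to confirm that plain nibabel reads them cleanly (the directory below is a hypothetical stand-in for the base_dir above):

```python
import glob
import os

import nibabel as nib
import numpy as np

base_dir = '/path/to/collection_474'  # hypothetical location of the downloaded images

for f in sorted(glob.glob(os.path.join(base_dir, '*updated_*.nii.gz'))):
    img = nib.load(f)          # plain nibabel load, which reportedly works fine
    arr = img.get_fdata()
    print(os.path.basename(f), img.shape, float(np.nanmin(arr)), float(np.nanmax(arr)))
```

If the shapes and value ranges look sensible here but Brain_Data still plots garbage, the problem is somewhere in the nibabel-to-Brain_Data conversion rather than in the files themselves.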
@ljchang That's a really great finding! I did notice the word "updated" when I ran the test locally, but paid no attention to it. One strange thing remains, though: it still works fine on my laptop while failing on the remote servers. I guess it could be related to nltools dependencies (the ones without a strict version requirement). I'm going to check if I have an older nibabel locally.
Let me know what you find with the dependencies. I have a feeling it is going to be related to nilearn as that is the one that transforms the data from nibabel into Brain_Data()'s representation.
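To make that concrete, the step in question is roughly nilearn's masking/flattening of NIfTI images into a 2D array. This is only a sketch of that step using a standard masker, not nltools' actual code path, and the mask file and base_dir are hypothetical:

```python
import glob
import os

from nilearn.input_data import NiftiMasker  # moved to nilearn.maskers in newer releases

base_dir = '/path/to/collection_474'            # hypothetical
mask_file = 'MNI152_T1_2mm_brain_mask.nii.gz'   # hypothetical brain mask

files = sorted(glob.glob(os.path.join(base_dir, '*updated_*.nii.gz')))

# fit_transform masks each image and returns an (n_images, n_voxels) array,
# which is roughly the representation Brain_Data works with internally.
masker = NiftiMasker(mask_img=mask_file)
data_2d = masker.fit_transform(files)
print(data_2d.shape)
```

If different nilearn versions handle these particular headers differently at this step, that could plausibly explain why the same files behave differently across environments.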
Here is the one on my laptop: nilearn==0.2.5
This will be useful, as it may help me figure out why it isn't loading correctly on my laptop if it does turn out to be related to the version.
It's nilearn==0.1.3 on my laptop and nilearn==0.2.3 on dev and production
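For reference, the quickest way to double-check which versions each environment is actually importing (rather than what pip thinks is installed) is something like:

```python
import nibabel
import nilearn

# Run this in each environment (laptops, dev, production) and compare the output.
print('nibabel', nibabel.__version__)
print('nilearn', nilearn.__version__)
```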
Ok, that looks like it's probably the reason for the conflicting results. Now we just need to figure out why it's not working on this dataset with the new version.
Should we go ahead with inviting the beta testers? I probably won't have time to figure out this problem for at least 2 weeks.
If the nilearn issue is not blocking us, I think so. The other interface improvement in the pipeline is the selection of values to classify. I think this is also not a blocker, because I expect other, more important issues to be raised during beta testing.
Me too. I'll go ahead and send an email out today.
Great, thank you!
`'Brain_Data' object has no attribute 'data'`