nipy / nibabel

Python package to access a cacophony of neuro-imaging file formats
http://nipy.org/nibabel/

ENH: Add GIFTI to the list of known imageclasses to ease loading with .load() #316

Closed: bcipolli closed this 8 years ago

bcipolli commented 9 years ago

GIFTI and CIFTI are going to be more important as more people use the Human Connectome Project (HCP) data (and as those data continue to evolve).

It would be great if GIFTI were integrated into the load/save framework. It was suggested in https://github.com/nipy/nibabel/pull/95 that this be done.
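
To make the goal concrete, here is a rough sketch of the usage this would enable (the file name is made up, and GIFTI is not yet hooked into the loader at this point):

```python
import nibabel as nib

# Hypothetical GIFTI file; the point is that load/save should dispatch on
# the .gii extension just as they do for the volume formats.
surf = nib.load('cortex.left.func.gii')
print(type(surf))                 # expected: nibabel.gifti.gifti.GiftiImage
nib.save(surf, 'copy.func.gii')   # round-trip through the same generic API
```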

I have time today at the OHBM hackathon to do this; @matthew-brett, what do you think? I'm new to GIFTI, so I may not be aware of some issue with doing this.

This may also make some PySurfer changes I am piloting easier; see https://github.com/nipy/PySurfer/issues/134

matthew-brett commented 9 years ago

I don't know GIFTI either, I'm afraid, and as you may have seen, that code hasn't had much care since Stefan Gerhard stopped working on it.

That makes me a bad person to design the API for this stuff. I will do my best to give sensible comments though.

Would you be interested in taking charge of the GIFTI / CIFTI part of nibabel?

bcipolli commented 9 years ago

@matthew-brett Thanks; let me talk with @satra today about GIFTI/CIFTI formats. When I have a better idea of what they are and how we're going to work with them, I'd be glad to consider it.

In the meantime, I'll put something together for the OHBM hackathon, and we can figure out afterwards if it's in the right direction.

Thanks as always!

bcipolli commented 9 years ago

@matthew-brett I think I can manage this. Here's what I'm thinking:

matthew-brett commented 9 years ago

Thanks - that would be a big contribution.

Maybe we could kick off with a little sketch of what the Python API would look like, with some examples of (proposed) use?

Is there a set of standard CIFTI files we could test against?

http://nipy.org/nibabel/devel/add_test_data.html

They could maybe go in a data repository if the license is nasty (I think it is for the HCP data):

http://nipy.org/nibabel/devel/add_test_data.html#adding-as-a-submodule-to-nibabel-data

It would be good to have a more freely licensed small example to add to the nibabel source repo.

bcipolli commented 9 years ago

This was easy to do, but as implied above, before adding these to the load/save pathways we should at least have reasonable test coverage.

I believe a full GIFTI API can be completed separately from this, since adding an API later remains backwards-compatible with simply loading/saving and exposing the header structure.

@matthew-brett does that sound reasonable? It will help break things into more reasonable chunks for me, and allow me to more smoothly co-develop this and the nidata side of getting relevant files.

matthew-brett commented 9 years ago

Absolutely OK with me - as long as we keep backwards compatibility to a reasonable degree, please do choose the path that works best for you.

bcipolli commented 9 years ago

So, the issue I'm having here is that load/save assumes some abstract Image/Header object interface that is defined within SpatialImage/Header. However, those two classes are not appropriate for GiftiImage.

The best option, in my eyes, is to pull the generic file I/O interface out into base classes that both SpatialImage/Header and GiftiImage inherit from.

My temptation was to call these Image and Header, and to rename the existing Header to SpatialHeader, but that seems too dangerous. Perhaps FileBasedImage and FileBasedHeader?
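
As a rough sketch of what I mean (names as proposed above; the method set is illustrative, not a final interface):

```python
class FileBasedHeader(object):
    """Minimal header interface needed by the generic load/save machinery."""

    @classmethod
    def from_fileobj(klass, fileobj):
        raise NotImplementedError

    def write_to(self, fileobj):
        raise NotImplementedError


class FileBasedImage(object):
    """An image tied to files, with no assumption of a spatial data array."""

    header_class = FileBasedHeader

    def __init__(self, header=None, extra=None, file_map=None):
        self._header = header
        self.extra = extra if extra is not None else {}
        self.file_map = file_map

    @classmethod
    def from_filename(klass, filename):
        raise NotImplementedError

    def to_filename(self, filename):
        raise NotImplementedError


class SpatialImage(FileBasedImage):
    """Existing volume images keep the affine / data-array interface."""


class GiftiImage(FileBasedImage):
    """GIFTI carries data arrays and metadata, not a single volume + affine."""
```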

bcipolli commented 9 years ago

I have something working pretty well, using this type of logic. Even if we toss it, it was useful to get used to the innards of the system.

I refactored code to BaseHeader/BaseImage and FileBasedHeader/FileBasedImage, and used those for GiftiHeader/GiftiImage. Once I did that, the rest came pretty smoothly.
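
Once GiftiImage sits on that base, registering it with the known image classes lets the top-level loader dispatch to it. A simplified sketch of that dispatch (illustrative names only, not nibabel's actual internals):

```python
import os

# Hypothetical registry; in nibabel this role is played by the
# imageclasses module referenced in the issue title.
KNOWN_IMAGE_CLASSES = []   # e.g. [Nifti1Image, ..., GiftiImage]

def load(filename):
    """Pick an image class by file extension and defer to its own reader."""
    ext = os.path.splitext(filename)[1].lower()
    for klass in KNOWN_IMAGE_CLASSES:
        if ext in getattr(klass, 'valid_exts', ()):
            return klass.from_filename(filename)
    raise ValueError('Cannot work out file type of %r' % filename)
```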

First-pass refactor here: https://github.com/nipy/nibabel/commit/d4e0496b53bbf8dc90c91eba902441fb695c6e37

All this code depends on the reload branch, so it felt odd to open a pull request until that's resolved.