neurosynth / nsviewer

Neurosynth viewer

Nifti size in nsviewer #19

Open rdevon opened 9 years ago

rdevon commented 9 years ago

Does nsviewer only support niftis of size 91 x 109 x 91? If so, is there a chance that different image sizes will be supported in the future?

tyarkoni commented 9 years ago

Unfortunately, yes. It's hard-coded to only handle images nominally in MNI152 2mm space--largely because when I first wrote it, it was intended strictly for use on neurosynth.org rather than as a separate library. This is a major limitation and really needs to be addressed, but it's not a trivial fix, and I'm not sure when I'll be able to find time for it.

I should mention though that if anyone happens to read this and is interested in potentially working on this, I may have some funds to pay for the effort (please contact me directly if interested).

rdevon commented 9 years ago

You should advertise to some of the developers over at nipy (https://github.com/nipy/nipy/); someone over there would be able to do it.

machow commented 9 years ago

Sorry--I know this is a bit out of date, but it looks like the only portions that need changing are the hard-coded transformation matrices in models.coffee.

Does that seem correct, or is there another portion that would need to be changed as well?

tyarkoni commented 9 years ago

Thanks for your interest, @machow!

Unfortunately it's not quite that simple (though not much more complicated). The dimensions of the grid are currently also hard-coded; see https://github.com/neurosynth/nsviewer/blob/master/src/views.coffee#L350. So we'd also need to get the right dimensions from the image header. I also recall there potentially being some issues with the XTK nifti reader (though only under limited conditions), so it's possible you might run into edge cases where the image header and/or data aren't loaded properly. Of course, those currently wouldn't load properly anyway, so it's not like we'd lose anything.

Let me know if you decide to take this on and I'll be happy to help as time allows (which unfortunately is not often right now). It would be great to have this finally fixed!

machow commented 9 years ago

Sorry for the delay--I started working on it today, and it looks like it should be pretty quick. Do you have any sense of where it should detect a 2mm vs 3mm image? Otherwise, do you think it would be simple to pull the dims out of the file header? I haven't looked too closely at where it loads the data, so I can do some digging. However, I noticed the transformation matrices are only doing scaling (+ reflection) and translation, so once the dim information and voxel size are set correctly, the transformation matrices could be generated from them.
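
To make that concrete, here's a rough sketch of the kind of thing I have in mind, assuming we can read the voxel size and origin out of the header (the `buildAffine` helper, its arguments, and the x-axis reflection are all my assumptions, not existing code):

```coffee
# Hypothetical helper: build a 4x4 image-to-world affine from header info.
# Assumes pure scaling (+ reflection on x) and translation, no rotation/shear,
# which is all the current hard-coded matrices appear to do.
buildAffine = (voxelSize, origin) ->
  [sx, sy, sz] = voxelSize   # e.g. [2, 2, 2] for MNI152 2mm
  [ox, oy, oz] = origin      # world coordinates of voxel (0, 0, 0)
  [
    [-sx, 0,  0,  ox]
    [0,   sy, 0,  oy]
    [0,   0,  sz, oz]
    [0,   0,  0,  1]
  ]
```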

tyarkoni commented 9 years ago

Right now it's hard-coded for 2 mm, but you can get the voxel dims--or anything else you need--out of the header. The catch is that I don't currently store the header after the xtk volume is initialized. If you look at https://github.com/neurosynth/nsviewer/blob/master/src/app.coffee#L134, right now the volume is thrown away after extracting the image data and dimensions (which are saved in the data object that gets passed to the viewer).
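
The minimal change there would probably be to copy whatever we need off the volume before it gets discarded, roughly like this (all property names on the volume below are placeholders, since the real ones are minified; see the next paragraph):

```coffee
# Hypothetical sketch only: keep the transform info alongside the image data
# instead of discarding it with the xtk volume. Property names on the volume
# are placeholders, since the real ones are minified.
data =
  values: volume.image          # existing fields (names assumed)
  dims: volume.dimensions
  transforms:                   # new: carried through to the Image constructor
    ijkToRAS: volume.ijkToRAS
    rasToIJK: volume.rasToIJK
```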

The other complication is that the xtk library is minified, and the header fields are consequently cryptically named. I believe in the version that's currently included in this repo, the 'Fb' property of the volume contains the voxel dimensions. But we don't want to hardcode a reference to that property, because it will probably change every time the xtk library is updated. The underlying issue is that in the xtk code, the relevant properties that contain the transformation info we want are meant to be private, so their names aren't retained when minified. It's possible that the info is also contained in one of the other properties that's not minified, but if so, I haven't been able to find it. (To top it off, the included xtk lib now appears to be outdated, and it looks like they're storing the transformations differently in the current version. I'll probably update that shortly, as it doesn't look like the new build breaks anything.)

Anyway, I'm not sure what the best way to handle this is. We could always patch the xtk library itself to save the transformation matrix and/or voxel dimensions in named fields, which could then be included in the data object that's passed to the Image constructor. I'm happy to help with that part if you like. That said, if there's a way to locate this info in the existing volume, that would save some trouble (and prevent having to patch future versions of the xtk lib). Let me know if you have other ideas.
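
If we do go looking for the info in the existing volume, one admittedly hacky option would be a heuristic along these lines (purely a sketch; it assumes the transform is stored as a flat 16-element array or typed array):

```coffee
# Hypothetical heuristic: look for properties on the xtk volume that resemble
# a flat 16-element transform, instead of hardcoding a minified name.
findTransformCandidates = (volume) ->
  candidates = []
  for own key, val of volume
    if val? and val.length is 16 and typeof val[0] is 'number'
      candidates.push {key, matrix: val}
  candidates
```

That's obviously brittle if more than one property happens to look like a transform, so patching xtk to expose a named field is probably still the cleaner route.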

In the long term, it would probably make sense to switch to a different library for reading in nifti volumes (e.g., I think Papaya's i/o functionality is less intertwined with other volume rendering code--at the moment, I have to create a fake xtk renderer just to get the image data out).

Thanks again for working on this!

tyarkoni commented 9 years ago

For what it's worth, I think once we solve the xtk issue, we should be able to just directly pull out the transforms and use those. I believe the xtk volume stores bidirectional transforms (i.e., image-to-world and world-to-image). There will still be a need to dynamically generate another set of transforms for the canvas (since users can create canvases of different size), but that's a simpler problem.

tyarkoni commented 9 years ago

Note that I just pushed some changes that had been sitting around uncommitted for a while, so you probably want to pull before making any changes. These are mostly UI tweaks that shouldn't break anything. I also updated the included xtk library to the current build. There are _RASToIJK and _IJKToRAS properties that contain the full transformation matrices; in the current minified code these should show up as v.Cc and v.Qh (in the xtk volume object) if you want to grab them from there--though again, we probably don't want to hardcode references to these property names.

machow commented 9 years ago

Okay--it looks like XTK uses Google Closure, and is doing minification at a level called advanced optimization. I tested building with less aggressive minification, and it looks like the variable names for the transformation matrices are untouched. However, xtk.js would go from ~200 kb to 500 kb, which is a fairly big bump.

Switching to papaya seems promising, but I couldn't really get a feel for how it works. It might be worth raising an issue with them to see if they'll put up an example of how to load volume data without using the ui.

I pushed the new xtk to this branch.

What do you think? This should solve the problem, but the file size is a bit hefty, and papaya definitely seems better maintained at the moment. The current difficulty with papaya is that AFAIK it doesn't have any documentation for manually opening images.

tyarkoni commented 9 years ago

Well, it's a library that most people are going to be using to load multiple 1MB+ images, so I think in the grand scheme of things 500 kb isn't a big deal. If you want to open a PR, I'll merge it, and then hopefully grabbing the transforms will be straightforward. I agree that the Papaya I/O is a bit opaque at the moment, so I'm fine sticking with xtk for now.

gamaliz commented 9 years ago

This looks very promising. Thanks for the hard work. I am also interested in being able to dynamically load images of arbitrary sizes :)

tyarkoni commented 9 years ago

Okay, I think we're ready to fix this now. :)

See the last couple of closed issues for a summary of the recent changes. The viewer is now much more performant, and also makes use of ndarray, so it should be much easier to do various matrix operations. See ndarray-ops in particular. @machow, if you're feeling ambitious, feel free to swap out the matrix operations in Transform with ndarray-ops operations. Otherwise you can just extract the matrices to the current array-of-array format, and I'll rewrite the matrix code throughout the library later.
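
For the matrix-vector products themselves, ndarray-ops only covers elementwise operations, so we'd probably also want something like ndarray-gemm from the same scijs family (that part is an assumption on my part, not something already in the repo). A rough sketch of what one Transform call might reduce to:

```coffee
ndarray = require 'ndarray'
gemm    = require 'ndarray-gemm'   # matrix multiply companion to ndarray-ops

# Hypothetical example: apply a 4x4 transform to a homogeneous coordinate.
# How Transform actually stores its matrices is assumed here.
applyTransform = (matFlat, coord) ->
  mat = ndarray(new Float64Array(matFlat), [4, 4])
  vec = ndarray(new Float64Array([coord..., 1]), [4, 1])
  out = ndarray(new Float64Array(4), [4, 1])
  gemm(out, mat, vec)
  [out.get(0, 0), out.get(1, 0), out.get(2, 0)]
```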

In d6651e589f24dba317c490dd9b3327bc2db07db1 I added references to the transforms for you; they're now stored in each Image inside the transforms property. But I think we should probably process the 1D arrays extracted from xtk in the Image constructor, and save those in a form that's ready to pass to the Transform calls. Then we can modify atlasToImage() and imageToAtlas() to take a second argument (either the whole Image object, or just the desired transformation pulled out of it). And that's pretty much it! :)
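
For example, something roughly like this in the Image constructor (the reshape helper and the key names under transforms are hypothetical; only the transforms property itself exists in the commit):

```coffee
# Hypothetical reshape: turn the flat 16-element arrays pulled from xtk into
# the array-of-arrays form the Transform code currently expects.
# (Row- vs. column-major ordering would need to be checked against xtk.)
flatToMatrix = (flat) ->
  (Array.prototype.slice.call(flat, i * 4, i * 4 + 4) for i in [0...4])

# In the Image constructor, roughly (key names assumed):
# @transforms =
#   ijkToRAS: flatToMatrix(rawIjkToRAS)
#   rasToIJK: flatToMatrix(rawRasToIJK)
```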

Oh, except I guess we'll still need to make some decisions about how to map the world space to the canvas. I like the idea of deferring the generation of those transformations until the user loads the first image. At that point, we should probably call a method that's bound to the Viewer class itself (or maybe ViewerSettings), which computes and internally stores the transforms by mapping the dimensions of the available canvas linearly onto the bounding box of the volume.
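
Roughly, I'm imagining something like the following (a sketch only; the function name, the per-canvas 2D bounds, and where it lives are all open questions):

```coffee
# Hypothetical sketch: compute a linear world-to-canvas mapping once the first
# image is loaded. Names and the two-axes-per-canvas assumption are placeholders.
computeWorldToCanvas = (worldBounds, canvasSize) ->
  # worldBounds: {min: [xmin, ymin], max: [xmax, ymax]} for the two axes
  #              displayed on this canvas; canvasSize: [width, height] in px
  scale = []
  offset = []
  for i in [0..1]
    extent = worldBounds.max[i] - worldBounds.min[i]
    scale[i] = canvasSize[i] / extent
    offset[i] = -worldBounds.min[i] * scale[i]
  {scale, offset}
```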

machow commented 9 years ago

Looks good! I can take a stab at this next week.

Deferring seems like a good idea. Originally, I thought LayersList might be a good candidate for holding the atlasToCanvas transformation matrix, but considering it's pretty much only useful for rendering, I think ViewerSettings is a good place for it. On the other hand, if deferring loading, updating, etc. starts requiring a fair amount of logic, then LayersList might be a good place to stick it.

tyarkoni commented 9 years ago

Great! Feel free to put the logic wherever you think makes sense; I don't have strong feelings about it. :)