NeuroDataDesign / seelviz-f16s17

Undergraduate repository for Albert, Luke, and Tony
Apache License 2.0

Deliverables for February 20 #65

Closed: toekneesunshine closed this issue 7 years ago

toekneesunshine commented 7 years ago

Committed

| Status | Deliverable | Notes |
| --- | --- | --- |
| 👍 | DiPy analysis on using DiPy for tractography | https://github.com/NeuroDataDesign/seelviz/blob/gh-pages/albert/prob/Dipy%2Boverview.ipynb, https://github.com/NeuroDataDesign/seelviz/blob/gh-pages/albert/prob/Probability%2BBased.ipynb @alee156 |
| 👍 | Pivoted away from Nipype due to invalid file format | https://github.com/NeuroDataDesign/seelviz/blob/gh-pages/albert/prob/Dipy%2Boverview.ipynb, https://github.com/NeuroDataDesign/seelviz/blob/gh-pages/albert/prob/Tensor%2BModel.ipynb @lukezhu1 |
| 👍 | Analysis/input/output writeup on the CAPTURE tractography paper, with Tony. Specifically, Jon analyzed the Diffusion Toolkit/TrackVis inputs and the MATLAB pseudocode. | @jonl1096 |
| 👍 | Analysis/input/output writeup on the CAPTURE tractography paper, with Jon. Specifically, Tony analyzed the MATLAB pseudocode and outlined how it could be implemented in Python. | @toekneesunshine |
mlee156 commented 7 years ago

Important notes: As I discussed with Greg, the main challenge seems to be modifying our file inputs to match the format required by the various pipelines we are using. In order for us to truly take advantage of the pipelines, we'll need to spend a significant amount of time understanding exactly how to convert our files.
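As a rough illustration of what that conversion could look like (a minimal sketch, not our actual pipeline: the file names and the identity affine below are placeholders), here is how a TIFF stack could be repackaged as NIfTI with nibabel so tools like DiPy can read it:

```python
# Hypothetical sketch: repackage an image volume (e.g. a CLARITY TIFF stack)
# as NIfTI, the format most tractography tooling expects.
# File names and the identity affine are placeholders, not project values.
import numpy as np
import nibabel as nib
from tifffile import imread  # assumes the tifffile package is installed

volume = imread('clarity_stack.tif')                  # (z, y, x) stack of slices
volume = np.ascontiguousarray(volume, dtype=np.float32)

affine = np.eye(4)                                    # placeholder voxel-to-world mapping
nib.save(nib.Nifti1Image(volume, affine), 'clarity_volume.nii.gz')
```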

gkiar commented 7 years ago

The major thing I think you are all struggling with is understanding what tractography is and how to do it. You may be using a diffusion processing library, but you certainly do not have diffusion data, so you should not try to replicate the diffusion process (for instance, you cannot compute a gradient table, since you do not have gradient-derived data). What you do need to do is learn the process of tractography and what types of inputs it takes, and then come up with a plan to preprocess your data to make it a feasible input to the tractography method of your choice. Does that make sense?

mlee156 commented 7 years ago

@gkiar Like you said, I struggled to understand what tractography is and how to do it since I've never done it before, so I concentrated on making sure I understood what DiPy is, what tractography is, and what methods there are for doing it. That's why the first notebook concentrates on how DiPy prepares data for use in its pipeline, since I had no clue what was going on. The last time I tried to implement an algorithm without understanding what was going on, I wasted a really long time, so Jovo always told me to make sure to look into the background and presets of a method before investigating it.

The purpose of the second notebook is me following the tutorial for probabilistic tracking. While, yes, technically that involves a large part of copying and pasting pieces of code from the tutorial, I don't think it's fair to say that's the only thing I did. The main point of the notebook was understanding what inputs and outputs the different probabilistic methods use, and how to connect the data collection phase to the visualization phase. The tutorial you mention (http://nipy.org/dipy/examples_built/probabilistic_fiber_tracking.html#example-probabilistic-fiber-tracking) did not include any method of visualizing the data, so I had to look at the other tutorials to find out how. Most importantly, I included it to ask you which method you thought was the most important to pursue, because the visualizations show small differences between each of the fiber images.

This week I'm hoping to transform the images that we have into a suitable format for processing, so I'd like your advice on which particular method I should look at for a baseline. Since we don't have any metrics of success or examples of what Ailey's data should look like post-processing, it would be very helpful to understand more about exactly what we're trying to look for.

gkiar commented 7 years ago

OK, I believe you picked an unnecessarily complex tutorial on DiPy. Deterministic tractography is much less complex, quicker, and also much more intuitive. Once you understand that and can make something work, we can move on to probabilistic.

I think you should follow exactly what the CAPTURE paper does to generate tensor fields, then apply the simplest tractography method you can find to those (for DiPy, I recommend EuDX).
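To make that concrete, here is a minimal sketch of that step, assuming per-voxel eigenvalues (`evals`) and eigenvectors (`evecs`) have already been computed CAPTURE-style from the CLARITY volume; the array names and parameter values are placeholders, and `EuDX` refers to the tracker shipped in the DiPy versions current at the time of this thread (it has since been removed from newer releases):

```python
# Minimal EuDX sketch under the assumption that `evals` (x, y, z, 3) and
# `evecs` (x, y, z, 3, 3) already hold CAPTURE-style tensor fields.
import numpy as np
from dipy.data import get_sphere
from dipy.reconst.dti import fractional_anisotropy, quantize_evecs
from dipy.tracking.eudx import EuDX  # available in older DiPy releases

FA = fractional_anisotropy(evals)          # scalar map used to seed/stop tracking
FA[np.isnan(FA)] = 0

sphere = get_sphere('symmetric724')
peak_indices = quantize_evecs(evecs, sphere.vertices)  # principal direction per voxel

eu = EuDX(FA.astype('f8'), peak_indices, seeds=50000,
          odf_vertices=sphere.vertices, a_low=0.2)
streamlines = [s for s in eu]              # each entry is an (N, 3) array of points
```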

In terms of helping explain what we're looking for, I'm not sure what you do and don't understand. If you have specific questions, please send me a g-cal invite for a meeting and ask them then. The more specific the questions, the better: both because I'll be able to give you better answers, and because it means you have given the details some thought.

gkiar commented 7 years ago

Also, if you're not sure what to do, I always prefer that you ask rather than spend effort in unproductive places. As an example, you reached out to me at the end of the week after doing the diffusion preprocessing in DiPy, and I told you not to worry about any of that because b-values and b-vectors don't exist in your data and none of it is relevant to CLARITY. Understanding tensors is very important for tracing, but understanding diffusion imaging is not, for your project.
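For intuition (a toy numpy example, not project code): each voxel's 3x3 tensor has a principal eigenvector, and a streamline tracker essentially follows those directions from voxel to voxel, using an anisotropy score like FA to decide where tracking is meaningful.

```python
import numpy as np

# One voxel's symmetric 3x3 tensor (made-up numbers for illustration).
tensor = np.array([[1.0, 0.2, 0.0],
                   [0.2, 0.5, 0.1],
                   [0.0, 0.1, 0.3]])

evals, evecs = np.linalg.eigh(tensor)   # eigenvalues in ascending order
principal_direction = evecs[:, -1]      # the direction a tracker would step along

# Standard fractional anisotropy formula: how strongly oriented this voxel is.
fa = np.sqrt(1.5 * np.sum((evals - evals.mean()) ** 2) / np.sum(evals ** 2))
print(principal_direction, fa)
```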

So, as we asked you often last semester as well: please ask when you are confused or don't know things. That way it's less likely (and it's my fault if) you end up walking down the wrong path or wasting your energy.