osandvold302 / trabecular_bone_unet

Rajapakse/Jones project for reconstructing artificially undersampled k-space MRI scans of hips. Based on work completed by bcjones to reconstruct 2D images.

General Questions #1

Open osandvold302 opened 4 years ago

osandvold302 commented 4 years ago

@brandonclintonjones I'm not going to assign you to this issue, because I think you'd then get updates every time a push occurs, or whenever I'm just using this space to think out loud. So I'll try to ping you when there's something specific I'm asking.

Here are a few questions right off the bat on the 3D implementation:

- Assumptions about the generated undersampled k-space

osandvold302 commented 4 years ago

Self thoughts:

`_create_kspacemask.py` is the starting place for adapting the sampling PDF for anisotropic undersampling; need to talk to the post-doc Brandon referenced. (Rough sketch at the end of this comment.)

Need to look in more depth at reconstruction with the V-Net architecture (major differences include 3D convolutional layers); so far I've mostly found articles detailing segmentation applications.
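To make the PDF idea concrete to myself, here's a toy numpy sketch of a variable-density mask with per-axis sigmas. None of this is from `_create_kspacemask.py`; the Gaussian shape and all parameters are placeholders:

```python
import numpy as np

def variable_density_mask(shape, accel=4.0, sigmas=(0.25, 0.10), seed=None):
    """Toy variable-density mask over a 2D phase-encode plane (ky, kz).

    Sampling probability follows a separable Gaussian PDF centered on
    k-space center, so low frequencies are kept more often. Different
    sigmas per axis give the anisotropic behavior; `accel` is the
    target acceleration factor (kept fraction is roughly 1/accel).
    """
    rng = np.random.default_rng(seed)
    ny, nz = shape
    ky = np.linspace(-0.5, 0.5, ny)
    kz = np.linspace(-0.5, 0.5, nz)
    pdf = np.exp(-ky[:, None] ** 2 / (2 * sigmas[0] ** 2)) \
        * np.exp(-kz[None, :] ** 2 / (2 * sigmas[1] ** 2))
    pdf *= (ny * nz / accel) / pdf.sum()  # scale expected kept fraction to ~1/accel
    return rng.random(shape) < np.clip(pdf, 0.0, 1.0)

mask = variable_density_mask((128, 96))
print(mask.mean())  # roughly 0.25 for accel=4
```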

osandvold302 commented 4 years ago

Trajectory:

1. Adapt the existing functions to handle 3D data (test_train_split, subsampling) before writing new code to actually perform the masking, and update the existing CNN to handle 3D volumes. (A sketch of the retrospective undersampling step is below.)
2. Generate the correct PDF for anisotropic data.
3. Combine that with the CNN to get a fully adapted volumetric anisotropic reconstruction.
4. Then play with the V-Net architecture.
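For the retrospective undersampling step, I'm picturing something like this (my own sketch, assuming the readout axis is fully sampled and only the two phase-encode axes are masked; the function name is mine):

```python
import numpy as np

def undersample_volume(volume, pe_mask):
    """Retrospectively undersample a 3D image volume in k-space.

    `pe_mask` is a 2D boolean mask over the phase-encode axes (ky, kz),
    broadcast along the fully sampled readout axis (x). Returns the
    zero-filled reconstruction that would feed the network input.
    """
    kspace = np.fft.fftshift(np.fft.fftn(volume))
    kspace *= pe_mask[None, :, :]  # zero out the unsampled phase encodes
    return np.abs(np.fft.ifftn(np.fft.ifftshift(kspace)))
```

Masking after `fftshift` keeps the mask's center aligned with the k-space center, which matters for the variable-density PDF above.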

osandvold302 commented 4 years ago

@brandonclintonjones Is read_dicoms.py missing a save() or savemat() call? All of your other data-loading functions read .mat files, so have these .mat files been saved elsewhere?

This is embarrassing, but I didn't write down the SSH commands to access the lab computer 😓 Would you email those to me? Thanks!

brandonclintonjones commented 4 years ago

> @brandonclintonjones Is read_dicoms.py missing a save() or savemat() call? All of your other data-loading functions read .mat files, so have these .mat files been saved elsewhere?
>
> This is embarrassing, but I didn't write down the SSH commands to access the lab computer 😓 Would you email those to me? Thanks!

I hope this is responding to the question; if not, you can show me how to reply to your questions later.

To answer it: I don't believe data_loader_class ever actually calls read_dicoms now (or pydicom, for that matter). The data was stored both as DICOMs and as 3D matrices in Matlab (.mat files). I initially started by reading the DICOMs but realized it was faster (both in read time and in lines of code) to just read the Matlab .mat files with Scipy, so I went with that since I was on a time crunch. Either one works, and if you don't have the .mat files we can easily get them to you.

The one issue we might encounter is the Radiology computation cluster. Since it's a pain to set up virtual environments there, we have to choose our packages judiciously. It might be cleaner to drop Scipy and instead use PyDicom, or to convert the entire data set to .npy array files and read them with numpy.
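Something like this is what I mean by the .npy route (untested sketch; the filename and the "img" key are placeholders, not the actual names in our data set):

```python
import numpy as np
from scipy.io import loadmat

# One-time conversion on a machine that has scipy, so the cluster
# environment only needs numpy at train time.
vol = loadmat("subject01.mat")["img"]             # 3D matrix saved from Matlab
np.save("subject01.npy", vol.astype(np.float32))

# The training-side load then requires nothing beyond numpy:
vol = np.load("subject01.npy")
```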

Hope that makes sense

And sure, I'll message you the SSH commands.

osandvold302 commented 4 years ago

@brandonclintonjones I'll go look for the .mat files once I get onto the lab computer. I think you can just reply in the comments and I'll get the notification since I'm assigned to the issue.

You are correct, data_loader_class never calls read_dicoms. Keep me updated on the cluster environment, and let me know if you create a YAML or other config file for conda!

osandvold302 commented 4 years ago

@brandonclintonjones For the conv3d filter, I was thinking I could just implement a 3x3x3 Gaussian filter? Since the original intent was to reduce artifacts along a single dimension, a simple Gaussian filter would smooth everything in all three dimensions instead. The other alternative would be an identity filter of [1], which just passes the volume through.
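Concretely, the two options I mean (numpy sketch; the sigma choice is arbitrary):

```python
import numpy as np

def gauss_kernel_3d(size=3, sigma=1.0):
    """Separable 3D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()
    return g[:, None, None] * g[None, :, None] * g[None, None, :]

smooth = gauss_kernel_3d()  # 3x3x3, blurs all three axes equally

# The "[1]" alternative: a 1x1x1 identity kernel that leaves the volume
# unchanged (equivalently, a 3x3x3 kernel with a single 1 at the center).
identity = np.ones((1, 1, 1))
```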

Thoughts?

osandvold302 commented 4 years ago

https://github.com/osandvold302/trabecular_bone_unet/blob/e3754d717918bf445d0115ecf51dfb9559f58950/main.py#L571