OSIPI / TF2.4_IVIM-MRI_CodeCollection

OSIPI TF2.4: IVIM MRI code collection
Apache License 2.0

A python script to wrap image read and write for Nifti Images. #60

Closed Unique-Usman closed 4 months ago

Unique-Usman commented 5 months ago

Describe the changes you have made in this PR

I am using this script as a placeholder for the final script; we can continue the discussion from here. @etpeterson

I don't quite understand the available algorithms and where they are located. Kindly give more information. Thank you.

Fixes #58

Checklist

etpeterson commented 5 months ago

> I don't quite understand the available algorithms and where they are located. Kindly give more information. Thank you.

Does it make more sense with my comments?

etpeterson commented 5 months ago

@IvanARashid This could use your multi-voxel fitting. Also, if you have ideas about how to best merge with the wrapper and handle inputs/outputs of the wrapper, that would be helpful.

Unique-Usman commented 5 months ago

Kindly help with this: https://github.com/OSIPI/TF2.4_IVIM-MRI_CodeCollection/pull/60/commits/6c779170ea70b9d4b98617d04d9043b97c87e995#diff-aea94673dea889894c34fe41c94d6cb8fb0d3373e0677947893ba1e79f4359edR71

IvanARashid commented 5 months ago

Regarding the fitting of multiple voxels: yes, I was working on a solution for this in the wrapper, although that work was stranded due to the previous git issues. I won't be able to revisit this for a few weeks, but the solution that @etpeterson suggested is a fine placeholder for now, i.e. loop over all the voxels and fit them individually. Once I've implemented this functionality in the wrapper, we can get rid of it in this function.

When this functionality is in the wrapper, the returned result will be parametric map volumes, i.e. 2D or 3D images of parameter values. So you could try to implement something similar now, or we could hold this PR until multi-voxel fitting is in the wrapper.
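A minimal sketch of the placeholder loop described above. The single-voxel fit function `fit_voxel` and the parameter names (f, D*, D) are illustrative assumptions here, not the wrapper's actual API:

```python
import numpy as np

def fit_voxelwise(data, bvals, fit_voxel):
    """Loop over every voxel of a 4D (x, y, z, b) array and fit each
    signal individually, returning 3D parametric maps.

    `fit_voxel` is a hypothetical single-voxel fit returning (f, D*, D).
    """
    shape = data.shape[:3]
    f_map = np.zeros(shape)
    Dstar_map = np.zeros(shape)
    D_map = np.zeros(shape)
    for idx in np.ndindex(*shape):
        f_map[idx], Dstar_map[idx], D_map[idx] = fit_voxel(data[idx], bvals)
    return f_map, Dstar_map, D_map
```

Once the wrapper handles multi-voxel fitting natively, this loop can simply be deleted, as noted above.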

etpeterson commented 5 months ago

You can test it with some nifti files. There are nifti files in our Zenodo data repository you can download. You might need to merge them to create a 4D volume to fully test, though.

I would definitely recommend testing it like this; I think it would make things much clearer. It looks like the abdomen or the brain nifti files (nii.gz) are already correct, with an associated bval file. https://zenodo.org/records/10696605
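As a sketch of the merge step mentioned above, assuming the individual downloads are equally-shaped 3D volumes (one per b-value): stacking them along a new last axis produces the 4D (x, y, z, b) array the wrapper expects. This uses plain numpy on in-memory arrays; for NIfTI files on disk, nibabel's `concat_images` performs the equivalent operation.

```python
import numpy as np

def merge_to_4d(volumes):
    """Stack equally-shaped 3D volumes (one per b-value) into a single
    4D array, with the b-value/signal dimension last."""
    return np.stack(volumes, axis=-1)
```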

Unique-Usman commented 5 months ago

> You can test it with some nifti files. There are nifti files in our Zenodo data repository you can download. You might need to merge them to create a 4D volume to fully test, though.
>
> I would definitely recommend testing it like this; I think it would make things much clearer. It looks like the abdomen or the brain nifti files (nii.gz) are already correct, with an associated bval file. https://zenodo.org/records/10696605

Thanks for the review. Is the signal variable still missing?

etpeterson commented 5 months ago

> You can test it with some nifti files. There are nifti files in our Zenodo data repository you can download. You might need to merge them to create a 4D volume to fully test, though.
>
> I would definitely recommend testing it like this; I think it would make things much clearer. It looks like the abdomen or the brain nifti files (nii.gz) are already correct, with an associated bval file. https://zenodo.org/records/10696605
>
> Thanks for the review. Is the signal variable still missing?

In the code I sent, the signal variable is `view`. The 4D data is x, y, z, signal, so you want the signal dimension as the input to the fitting.
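To illustrate what "the signal dimension as input" means (a sketch, not the PR's actual code): iterating over the first three axes yields one 1D signal vector per voxel, and each of those vectors is what goes into the fit.

```python
import numpy as np

def iter_voxel_signals(data):
    """Yield the 1D signal vector (last axis) for every voxel of a
    4D (x, y, z, signal) array."""
    for idx in np.ndindex(*data.shape[:-1]):
        yield data[idx]
```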

Unique-Usman commented 5 months ago

> You can test it with some nifti files. There are nifti files in our Zenodo data repository you can download. You might need to merge them to create a 4D volume to fully test, though.
>
> I would definitely recommend testing it like this; I think it would make things much clearer. It looks like the abdomen or the brain nifti files (nii.gz) are already correct, with an associated bval file. https://zenodo.org/records/10696605
>
> Thanks for the review. Is the signal variable still missing?
>
> In the code I sent, the signal variable is `view`. The 4D data is x, y, z, signal, so you want the signal dimension as the input to the fitting.

I understand now. I will test the script as well. I also found out that the algorithms are located in src/standardized.

etpeterson commented 4 months ago

Just checking on this. Is it waiting on me?

Unique-Usman commented 4 months ago

> Just checking on this. Is it waiting on me?

No @etpeterson, I wanted to test it out and see how it works. I will do that. I recently had a lot of different school activities.

etpeterson commented 4 months ago

> Just checking on this. Is it waiting on me?
>
> No @etpeterson, I wanted to test it out and see how it works. I will do that. I recently had a lot of different school activities.

Ok, no problem. Just wanted to make sure.

Unique-Usman commented 4 months ago

@etpeterson I downloaded the data from the link and was trying to test it out. Checking the contents of brain.bval and brain.bvec, I added the two functions read_bval_file and read_bvec_file, as the files are not JSON.

python3 -m WrapImage.nifti_wrapper brain.nii.gz brain.bvec brain.bval

I am currently using brain.nii.gz, but I keep getting the error below.

[Screenshot from 2024-04-10 15-14-10]

I am using the algorithms in src/standardized. I also changed the algorithm a couple of times, but it keeps giving an almost identical error.

Unique-Usman commented 4 months ago

> @etpeterson I downloaded the data from the link and was trying to test it out. Checking the contents of brain.bval and brain.bvec, I added the two functions read_bval_file and read_bvec_file, as the files are not JSON.
>
> python3 -m WrapImage.nifti_wrapper brain.nii.gz brain.bvec brain.bval
>
> I am currently using brain.nii.gz, but I keep getting the error below.
>
> [Screenshot from 2024-04-10 15-14-10]
>
> I am using the algorithms in src/standardized. I also changed the algorithm a couple of times, but it keeps giving an almost identical error.

@etpeterson bringing your attention to this.

etpeterson commented 4 months ago

> @etpeterson I downloaded the data from the link and was trying to test it out. Checking the contents of brain.bval and brain.bvec, I added the two functions read_bval_file and read_bvec_file, as the files are not JSON. python3 -m WrapImage.nifti_wrapper brain.nii.gz brain.bvec brain.bval I am currently using brain.nii.gz, but I keep getting the error below. [Screenshot from 2024-04-10 15-14-10] I am using the algorithms in src/standardized. I also changed the algorithm a couple of times, but it keeps giving an almost identical error.
>
> @etpeterson bringing your attention to this.

I would guess that the bvals and bvecs aren't numpy arrays or are the wrong size. Maybe try transposing them? Perhaps also try parsing the bval and bvec files with np.genfromtxt rather than manually. https://numpy.org/doc/stable/reference/generated/numpy.genfromtxt.html

It could also be the input data are not the right size. The data length should match the bvalue length.

I'm just returning from a break so I could try to debug it later this week if you're still having trouble.

Unique-Usman commented 4 months ago

@etpeterson thanks, I will look into this.

Unique-Usman commented 4 months ago

[Screenshot from 2024-04-23 13-49-33]
[Screenshot from 2024-04-23 13-48-16]

Unique-Usman commented 4 months ago

@etpeterson, kindly check this out. Thanks. And also welcome back from the break.

Unique-Usman commented 4 months ago

@etpeterson I also do not know why some of the tests are failing. I checked, but it does not seem related to my changes.

etpeterson commented 4 months ago

> @etpeterson I also do not know why some of the tests are failing. I checked, but it does not seem related to my changes.

Not your fault; it looks like GitHub changed something that broke older Python versions. https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json

I'm not sure if it's going to be fixed or not, but let's leave it for a day or two to see if it comes back. https://github.com/actions/setup-python/issues/850

etpeterson commented 4 months ago

> @etpeterson I also do not know why some of the tests are failing. I checked, but it does not seem related to my changes.
>
> Not your fault; it looks like GitHub changed something that broke older Python versions. https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json
>
> I'm not sure if it's going to be fixed or not, but let's leave it for a day or two to see if it comes back. actions/setup-python#850

Ok, this was fixed. Just the one change that I see remaining.

Unique-Usman commented 4 months ago

@etpeterson, tqdm works by knowing the number of iterations beforehand. But our code uses a generator (yield), which does not provide the number of iterations beforehand, and the progress bar is not displayed when the count is unknown. The solution is to pass the number via the total argument (if known). If we cannot get the number of iterations, we can simply print when the processing starts and when it ends.

etpeterson commented 4 months ago

> @etpeterson, tqdm works by knowing the number of iterations beforehand. But our code uses a generator (yield), which does not provide the number of iterations beforehand, and the progress bar is not displayed when the count is unknown. The solution is to pass the number via the total argument (if known). If we cannot get the number of iterations, we can simply print when the processing starts and when it ends.

We do know the number; it's almost the same code as the generator: len(arr.shape[:n-1])

Also, it looks like if you wrapped it up in a class, tqdm could query that implicitly. https://stackoverflow.com/questions/41985993/tqdm-show-progress-for-a-generator-i-know-the-length-of
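For the total argument discussed above, one sketch of computing the count: the number of voxels a generator will visit is the product of all dimensions except the last (signal) one. The `voxel_generator` name in the usage comment is hypothetical, and tqdm itself is shown only in a comment since it is a third-party dependency.

```python
import math
import numpy as np

def voxel_count(data):
    """Number of voxels a (x, y, z, signal) iteration visits: the
    product of every dimension except the last."""
    return math.prod(data.shape[:-1])

# Hypothetical usage with tqdm wrapping a voxel generator:
#   for signal in tqdm(voxel_generator(data), total=voxel_count(data)):
#       fit(signal)
```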

Unique-Usman commented 4 months ago

@etpeterson, I already made the value passed to the total argument dynamic.

etpeterson commented 4 months ago

Looks good to me, thanks!