Closed · mirtatae closed this 5 years ago
Hi, have you signed in with your own account? Once you are logged in, try searching for this phrase:
Multi-Atlas Labeling Beyond the Cranial Vault - Workshop and Challenge. In the Files tab, choose Abdomen and download RawData.zip. The .nii (NIfTI) format is easy to read with existing code. The resolution and sizes are fixed; you cannot change them. And as for the CT scans themselves, they might not be very pleasant to look at. Good luck!
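For reference, loading one of the downloaded volumes is a few lines with nibabel. This is a minimal sketch, not part of the challenge instructions; the package choice and the exact filename inside RawData.zip are assumptions:

```python
# Minimal sketch of reading a .nii volume with nibabel (pip install nibabel).
# The path below is illustrative; check the layout of your RawData.zip.
import nibabel as nib

img = nib.load("RawData/Training/img/img0001.nii.gz")  # hypothetical filename
volume = img.get_fdata()               # numpy array, e.g. 512 x 512 x N slices
voxel_size = img.header.get_zooms()    # voxel spacing in mm
print(volume.shape, voxel_size)
```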
On Fri, Jul 19, 2019 at 4:40 AM Amirtahà Taebi notifications@github.com wrote:
Dear Shima,
Can you please provide more instructions on how to find the training data on https://www.synapse.org/?
In addition, I see that the input image format is nii. Is this the only acceptable format? Is there any limitation on the input image size and resolution?
Thanks, Amirtaha
Hi Shima,
For the training data, I will give it a try and post the result here.
About the input image, I found in your paper that the image size was 512x512x38. Did you use the same images to train the network or did you downsample them? I am wondering if the 512x512 images were used. In addition, do you remember the image resolution? Was it around 0.25 mm?
Thank you very much, Amirtahà
Each time, a block of slices of size 512×512×38 is extracted from a scan for training. In the Synapse dataset we have 30 subjects with different volume sizes. Since balancing is important, I extracted these blocks mostly around the liver, some around only a part of the liver, and some where there is no liver at all, to prepare the data for my network (see the sketch below). What do you mean by downsampling? The resolution varies within a range. Here are some characteristics of this database:

- Volume: (512×512×85) to (512×512×198) voxels
- Fields of view: (280×280×280) to (500×500×650) mm³
- Resolution: (0.54×0.54×5.0) to (0.98×0.98×2.5) mm³
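A minimal sketch of this kind of balanced block extraction, assuming the volume and its label mask are already loaded as numpy arrays. It is not the authors' actual code, and the liver label id is an assumption about the BTCV annotations:

```python
import numpy as np

DEPTH = 38        # number of axial slices per training block
LIVER_LABEL = 6   # liver id in the BTCV label maps (an assumption here)

def extract_block(volume, z_start):
    """Cut a 512x512xDEPTH block of consecutive axial slices."""
    return volume[:, :, z_start:z_start + DEPTH]

def sample_balanced_blocks(volume, mask, n_liver=4, n_empty=2, seed=0):
    """Sample blocks mostly around the liver plus a few containing no liver,
    roughly in the spirit described above (a sketch, not the paper's code)."""
    rng = np.random.default_rng(seed)
    max_start = volume.shape[2] - DEPTH
    liver_z = np.unique(np.nonzero(mask == LIVER_LABEL)[2])
    blocks = []
    # Blocks centred on slices that contain liver voxels
    if liver_z.size:
        for z in rng.choice(liver_z, size=n_liver):
            start = int(np.clip(z - DEPTH // 2, 0, max_start))
            blocks.append(extract_block(volume, start))
    # Blocks taken from slice ranges that contain no liver at all
    empty_starts = [z for z in range(max_start + 1)
                    if not ((liver_z >= z) & (liver_z < z + DEPTH)).any()]
    if empty_starts:
        for z in rng.choice(empty_starts, size=n_empty):
            blocks.append(extract_block(volume, int(z)))
    return blocks
```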
Dear Shima,
Thanks for all the information. By downsampling, I actually meant resizing the images. P.S.: I was also able to download the training data following the instructions you provided.
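Just to be concrete, here is a minimal sketch of what I mean by resizing, using scipy.ndimage.zoom; the function and the spacing values are illustrative, not taken from your code:

```python
# Rough sketch: resample a CT volume to a different voxel spacing.
import numpy as np
from scipy.ndimage import zoom

def resample(volume, spacing, new_spacing):
    """Resample `volume` from voxel size `spacing` to `new_spacing` (mm)."""
    factors = np.asarray(spacing, float) / np.asarray(new_spacing, float)
    return zoom(volume, factors, order=1)  # linear interpolation

# e.g. resample(ct, spacing=(0.98, 0.98, 2.5), new_spacing=(2.0, 2.0, 2.5))
```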
Thanks, Amirtaha