Open sravan-thumma opened 6 months ago
Hey, thanks for reaching out.
Unfortunately the pyenv I was using has a lot of libraries (PyTorch included), so it might not be the best to share here for others to use. The code (https://github.com/rtarun9/HFormer-Low-Dose-CT-Denoiser/blob/db22295f62b598d6a5da794076bea6bd34c1ad9b/hformer_vit/tf_data_importer.py#L65) expects a directory containing a set of images with FD and QD in their names.
So, regardless of your own directory structure, set the parent path in the code; as long as the images have FD or QD in the name, they will be treated as full-dose / quarter-dose (i.e., ground truth / noisy) images respectively.
There is no differentiation between 3mm and 1mm images from the model / code point of view.
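As a rough illustration of that naming convention, the split could be done like this (a hypothetical sketch, not the repo's actual loader; `partition_dataset` is an invented helper name):

```python
from pathlib import Path

def partition_dataset(parent_dir):
    """Split image paths into full-dose (ground truth) and quarter-dose
    (noisy) lists, based on 'FD' / 'QD' appearing in the filename."""
    full_dose, quarter_dose = [], []
    for path in sorted(Path(parent_dir).rglob("*")):
        if not path.is_file():
            continue
        if "FD" in path.name:
            full_dose.append(path)
        elif "QD" in path.name:
            quarter_dose.append(path)
    return full_dose, quarter_dose
```

Anything without FD or QD in its name is simply ignored, which matches the "regardless of your directory structure" behaviour described above.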
I received this error; it seems to say the dataset is missing a dimension. Do you know how to solve it?
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: input must be 4-dimensional[736,64,1]
[[{{node patch_extractor/ExtractImagePatches}}]]
[[IteratorGetNext]]
(1) Invalid argument: input must be 4-dimensional[736,64,1]
[[{{node patch_extractor/ExtractImagePatches}}]]
[[IteratorGetNext]]
[[IteratorGetNext/_2]]
0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_5942]
Function call stack: train_function -> train_function
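For what it's worth, that error usually means `ExtractImagePatches` received a 3-D tensor ([height, width, channels]) where it requires a 4-D one ([batch, height, width, channels]). A minimal NumPy sketch of the likely fix, adding a batch dimension before patch extraction (in a tf.data pipeline the equivalent would be `tf.expand_dims(image, axis=0)`):

```python
import numpy as np

# The failing tensor has shape (736, 64, 1): height, width, channels.
img = np.zeros((736, 64, 1), dtype=np.float32)

# ExtractImagePatches requires [batch, height, width, depth], so add a
# leading batch dimension before the patch-extraction step.
batched = np.expand_dims(img, axis=0)
print(batched.shape)  # (1, 736, 64, 1)
```

Whether the dimension should be added in the loader or in the dataset pipeline depends on where the images are read, so treat this only as an illustration of the shape the op expects.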
@chen990702, this model does not use the projection data, only the image data (you can see both types of data here: https://aapm.app.box.com/s/eaw4jddb53keg1bptavvvd1sf4x3pe9h/folder/144594475090)
I used these datasets and also reported the error above
@chen990702, in the folder / directory path you are setting in the code, can you remove the non-image files (i.e., the projection data)? The image data has a .ima extension, but I personally don't know what the projection data extension is.
I didn't see this. Isn't the dataset suffix .dcm?
I believe the projection data is .dcm, while the image data is .ima @chen990702
I understand; I had downloaded the wrong dataset. Thank you.
After replacing the dataset, it now says the dataset cannot be found:
training dataset <ShuffleDataset shapes: ((None, 64, 64, None), (None, 64, 64, None)), types: (tf.float32, tf.float32)>
Epoch 1/200
Traceback (most recent call last):
File "hformer_train_cbam.py", line 95, in
@chen990702 can you let me know if the https://github.com/rtarun9/HFormer-Low-Dose-CT-Denoiser/blob/db22295f62b598d6a5da794076bea6bd34c1ad9b/hformer_vit/train/hformer_train.py#L81 line is modified to match your dataset path?
def main():
    training_dataset = load_training_tf_dataset(
        low_dose_ct_training_dataset_dir='/root/autodl-tmp/D45_3mm/FD_3mm_sharp/L506',
        load_as_patches=True, load_limited_images=True, num_images_to_load=2)
The folder / path must have both FD and QD images, but your path looks like it only has FD images?
It indicated that the DICM header is missing:
pydicom.errors.InvalidDicomError: File is missing DICOM File Meta Information header or the 'DICM' prefix is missing from the header. Use force=True to force reading.
[[{{node EagerPyFunc}}]]
[[IteratorGetNext]]
[[IteratorGetNext/_2]]
0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_6883]
Function call stack: train_function -> train_function
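One way to check this without pydicom: a conforming DICOM Part-10 file starts with a 128-byte preamble followed by the 4-byte magic 'DICM', and files missing that prefix are exactly the ones that raise `InvalidDicomError` unless `force=True` is passed. A small stdlib sketch (`has_dicm_header` is a hypothetical helper name):

```python
def has_dicm_header(path):
    """Return True if the file has the standard DICOM Part-10 layout:
    a 128-byte preamble followed by the 4-byte 'DICM' magic. Files
    without it make pydicom raise InvalidDicomError unless force=True
    is used when reading."""
    with open(path, "rb") as f:
        f.seek(128)
        return f.read(4) == b"DICM"
```

Running this over the files in the dataset directory would show whether some of them are not valid DICOM/IMA files at all.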
@chen990702 can you describe your directory structure for the images? This error is a bit strange as I have never come across it with the .ima images...
I have run into this image-dimension issue; have you encountered it?
Traceback (most recent call last):
File "hformer_train.py", line 94, in
Function call stack: train_function -> train_function
@chen990702 could you upload a screenshot showcasing your dataset directory structure, and upload a single image that you use for testing?
This is a screenshot of the dataset directory and the IMA file: 数据集目录.docx
The IMA file itself cannot be uploaded.
@chen990702 I have seen the document, and there is a chance the issue is caused by the extra / additional files in the autodl directory.
What you can do is print the result of https://github.com/rtarun9/HFormer-Low-Dose-CT-Denoiser/blob/db22295f62b598d6a5da794076bea6bd34c1ad9b/hformer_vit/tf_data_importer.py#L87 to see if any extra / non-.ima paths are being filled into these variables.
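A quick standalone check along the same lines (a hypothetical debugging helper, not code from the repo) would list everything in the dataset folder that is not a .ima file:

```python
from pathlib import Path

def report_unexpected_files(dataset_dir):
    """Print and return any non-.ima files under dataset_dir; stray
    files (projection .dcm data, .docx notes, etc.) sitting in the
    training folder can end up in the loader's path lists and break
    training."""
    unexpected = [p for p in sorted(Path(dataset_dir).rglob("*"))
                  if p.is_file() and p.suffix.lower() != ".ima"]
    for p in unexpected:
        print("unexpected file:", p)
    return unexpected
```

If this prints anything, moving those files out of the directory should be the first thing to try.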
Hello, how do I test after training?
@chen990702, once training is done (and you have the trained model files), you can run https://github.com/rtarun9/HFormer-Low-Dose-CT-Denoiser/blob/main/hformer_vit/test/hformer_test.ipynb.
In the line loaded_model = model.load_weights('../data/weights/hformer_64_channel_custom_loss_epochs_114.h5'), use your own trained model file. Be sure to read the entire test file and change the dataset path wherever required.
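The general pattern is: rebuild the model with the same code used for training, then load the saved weights into it. A minimal sketch with a tiny stand-in network (NOT the actual HFormer architecture; the layer stack and file name here are placeholders):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in network; substitute the real model-building code here.
def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),
        tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(1, 3, padding="same"),
    ])

model = build_model()
model.save_weights("demo.weights.h5")     # stands in for the trained .h5 file

restored = build_model()                  # architecture must match training
restored.load_weights("demo.weights.h5")  # point this at your own weights file
denoised = restored.predict(np.zeros((1, 64, 64, 1), dtype=np.float32))
print(denoised.shape)  # (1, 64, 64, 1)
```

The key point is that `load_weights` only restores parameters, so the model object must be constructed with exactly the same architecture code before loading.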
I am getting an error when trying with Python 3.10, and I also cannot understand the dataset structure. I downloaded a partial dataset and am using the 3mm D45 folder. Can you explain your environment setup with all the library versions, so that I can replicate it and make this work?
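In case it helps, the standard way to capture and replicate an environment's exact library versions (this is generic pip usage, not the author's confirmed setup) is:

```shell
# On the working machine: record every installed package with its version.
python -m pip freeze > requirements.txt

# On the new machine: recreate the same environment from that file.
python -m pip install -r requirements.txt
```

If the maintainer runs the first command in the pyenv used for training, the resulting requirements.txt would answer the version question exactly.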