liaohaofu / adn

ADN: Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction

Preprocessing error #2

Closed 2181382zht closed 5 years ago

2181382zht commented 5 years ago

Thank you very much for sharing the source code; it is very useful for me. However, I got an error when preprocessing the data: the data was not sorted into classes, and only two empty folders were created: train and test.

liaohaofu commented 5 years ago

Hi 2181382zht, thank you for reporting this issue. Could you help me reproduce this problem with more details? For example, which preprocessing code were you using, spineweb or deeplesion or both? What errors did you get? A copy of the errors from your system would be very helpful!

2181382zht commented 5 years ago

Running python prepare_spineweb.py gives:

FileNotFoundError: [WinError 3] The system cannot find the path specified.: 'data/spineweb\artifact'

Running prepare_deep_lesion fails at "if ~isfolder(output_dir)", but my MATLAB is 2016. Running demo.py gives:

assert model_type in CHECKPOINTS, f"{model_type} model not supported"
SyntaxError: invalid syntax

I have downloaded the dataset and the pre-trained model for the demo, but I still get errors. I suspect my paths do not match yours. I hope you can help solve it.
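[Editor's note: the SyntaxError above is typically caused by running the script with a Python interpreter older than 3.6, which does not support f-strings. A sketch of an equivalent check that runs on older interpreters; the CHECKPOINTS contents and model_type value are illustrative, not taken from demo.py:

```python
# Illustrative stand-ins for the names used in demo.py.
CHECKPOINTS = {"spineweb": "runs/spineweb/net.pt"}
model_type = "spineweb"

# f-strings (f"...") require Python >= 3.6; str.format() works everywhere.
assert model_type in CHECKPOINTS, "{} model not supported".format(model_type)
```

The cleaner fix is simply to run the code under Python 3.6 or newer, as the project's prerequisites call for.]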

liaohaofu commented 5 years ago

Hi, based on the error message you provided, I think this issue arises from the system path difference between Windows and Linux (Windows uses "\" as path separator while Linux uses "/"). Currently, our code only supports Ubuntu. We will include Windows support in the future but for immediate use, please run the code on a machine that satisfies the requirements in the Prerequisites section. Thanks!
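[Editor's note: separator mismatches like the one in the traceback can usually be avoided by building paths from components rather than hard-coding "/" or "\". A minimal sketch using the standard library's pathlib; the data/spineweb/artifact path is taken from the error message above:

```python
from pathlib import Path

# Build the path from components; pathlib inserts the correct
# separator for the current OS ("/" on Linux, "\" on Windows).
artifact_dir = Path("data") / "spineweb" / "artifact"

# Creating the directory tree up front also avoids FileNotFoundError
# when later code writes into it.
artifact_dir.mkdir(parents=True, exist_ok=True)
```
]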

2181382zht commented 5 years ago

Thank you very much for your guidance and reply. I will try it immediately.

2181382zht commented 5 years ago

I downloaded the Spineweb dataset, but the web page lists it as about 2 GB while my actual download is only 400 MB. Is that normal? Also, about where the dataset goes: should I put the zip file at the path directly, or extract it first? Right now I put the zip file directly at the path, but that only creates the two empty folders, train and test. The instructions say to extract the spine- zip files so that the extracted files are located under path_to_Spineweb/spine-. Thank you very much.

liaohaofu commented 5 years ago

Hi, if your downloaded file size does not match the original size, then your download is incomplete. Otherwise, you should be able to unzip it.
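[Editor's note: beyond comparing file sizes, an incomplete zip download can be detected programmatically. A sketch using the standard library's zipfile module; the helper name is hypothetical:

```python
import io
import zipfile

def is_complete_zip(file_or_path):
    """Return True if the argument is a readable zip with no corrupt members.

    A truncated download usually fails is_zipfile() (the central directory
    at the end of the archive is missing) or testzip() (a member fails its
    CRC check). Accepts a filename or a file-like object.
    """
    if not zipfile.is_zipfile(file_or_path):
        return False
    with zipfile.ZipFile(file_or_path) as zf:
        # testzip() returns the name of the first bad member, or None.
        return zf.testzip() is None
```
]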

You can place the unzipped data (which should have names like "spine-1", "spine-2", etc.) under any folder; the command assumes the folder path is "path_to_Spineweb". Replace "path_to_Spineweb" with the actual path and run the command in the Readme file.

I have updated the Readme file for clarification.

2181382zht commented 5 years ago

After I downloaded the complete Spineweb dataset and preprocessed the data the way you described, there were still problems. The code that fails is this statement:

volume_files = read_dir(patient_dir, predicate=lambda x: x.endswith("mhd"), recursive=True)

I looked at the Spineweb dataset I downloaded: each patient only has .lml and .nii files, but according to this code they should be .mhd files. Is there any other processing to be done on the dataset? Thank you.

liaohaofu commented 5 years ago

Ha, this is a bug. Spineweb seems to have changed the format of their data. Their old dataset uses "mhd" while the latest one uses "nii.gz". I have updated the preprocessing code to support both formats. Thanks for pointing out!
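[Editor's note: the two-format support described here could be sketched as a predicate that accepts both extensions. The read_dir call and the "mhd"/"nii.gz" formats are from the thread; the helper function itself is illustrative:

```python
def is_volume_file(name):
    """Accept both the old Spineweb volume format (.mhd) and the newer one (.nii.gz)."""
    return name.endswith(".mhd") or name.endswith(".nii.gz")

# It would then replace the single-format lambda when scanning a patient
# directory, e.g.:
# volume_files = read_dir(patient_dir, predicate=is_volume_file, recursive=True)
```
]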

2181382zht commented 5 years ago

train.py is missing the if __name__ == '__main__': guard; without it, a RuntimeError occurs at runtime. Thank you. The spineweb code now works without problems.
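[Editor's note: the guard matters because, on Windows, multiprocessing workers (e.g. PyTorch DataLoader workers) are started with the "spawn" method, which re-imports the launching script; without the guard the import re-executes the training code and multiprocessing raises a RuntimeError. A minimal sketch of the fix; the main() name and return value are illustrative, not from train.py:

```python
def main():
    # Training setup and loop would go here; return a status for illustration.
    return "training finished"

if __name__ == "__main__":
    # Runs only when the script is executed directly, not when worker
    # processes (or other modules) import it.
    main()
```
]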

2181382zht commented 5 years ago

When preprocessing the deep_lesion dataset I also got an error: a reference to a non-existent field 'test_indices'. I checked, and dataset.yaml indeed does not contain this field.

liaohaofu commented 5 years ago

Hi, thank you for reporting this issue. I will check the code and get back to you over the weekend.

2181382zht commented 5 years ago

Is there any hardware requirement for GPU training? My GPU is a 2080 Ti with 11 GB of memory, but when training reached the 11th epoch I hit an out-of-memory error.

2181382zht commented 5 years ago

RuntimeError: CUDA error: unspecified launch failure

liaohaofu commented 5 years ago

Hi, the deep_lesion dataset preprocessing issue has been fixed. Please remove the previously preprocessed data and run the preprocess_deep_lesion.m code again.

For the memory and CUDA issues, I think they are most likely caused by your CUDA/PyTorch/GPU driver setup. For the memory problem, you may have other GPU programs running while training ADN; to check whether that is the case, run nvidia-smi to monitor GPU memory usage. For the CUDA error, please refer to online posts about the problem, such as https://github.com/pytorch/pytorch/issues/2567. The most probable fix is to reboot your system, reinstall PyTorch, or reinstall the GPU driver. Please let me know if the problem persists.

BTW, please start a new post when you have different issues. Thanks!

liaohaofu commented 5 years ago

The preprocessing issues with DeepLesion and Spineweb are fixed.