Closed patricegaofei closed 2 years ago
Thank you very much for your help! I have modified the test code as suggested and it worked. However, the saved images are all white. Could this be because the transformations applied to the images during training are not the same as those applied during testing? I have noticed some differences between the training dataloader and the testing dataloader. Could you please help me fix or clarify this?
In my case, noise level = 0.
It has nothing to do with the transformation settings. "Noise level" only means that some artificial displacement noise is added during training, to show that RegGAN works well even when there is noise in the training data.
At present, there are two possible reasons for this. (1) There is a problem in the training process; it is recommended to use visdom to observe the output images while training. (2) If training is fine, then the value distribution of the data is wrong. Results being all white does not necessarily mean the values are very large; it usually means the saved values fall outside the expected range.
To tell them apart, change the saved fake_B to real_B or real_A. If real_B and real_A save correctly, the problem is caused by reason (1). Otherwise, it is caused by reason (2), and you need to check the data carefully.
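To illustrate reason (2): images are typically saved by mapping a tanh-style output from [-1, 1] back to [0, 255]. The sketch below shows why data that was never normalized saturates to white under that convention. `tensor_to_uint8` is a hypothetical helper for illustration, not a function from this repository.

```python
import numpy as np

def tensor_to_uint8(img):
    """Map an image assumed to lie in [-1, 1] to [0, 255] for saving,
    mirroring the usual convention for tanh-output generators."""
    return np.clip((img + 1.0) / 2.0 * 255.0, 0, 255).astype(np.uint8)

# Correctly normalized data in [-1, 1] maps across the full grey-level range.
ok = tensor_to_uint8(np.array([-1.0, 0.0, 1.0]))   # -> [0, 127, 255]

# Un-normalized data (e.g. raw values still in [0, 255]) saturates:
# almost every pixel clips to 255, which renders as an all-white image.
bad = tensor_to_uint8(np.linspace(0.0, 255.0, 5))
print(ok, bad)
```

Saving real_A/real_B through the same path makes the same clipping visible on known-good inputs, which is why it isolates reason (1) from reason (2).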
I am very grateful for your time, patience, and help. After checking carefully, the problem was indeed due to the data distribution. Specifically, I had not normalized the input and target data to the range [-1, 1]. Applying this normalization solved the problem: the test results are no longer all white. However, I still can't observe the training process through visdom. I am running the code on a server, and after opening http://localhost:6019 in a browser, I can only see a blue interface without any curves. Note that I ran `python -m visdom.server -p 6019` before starting the training process.
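For reference, the normalization step described above can be sketched as a simple linear rescale to [-1, 1]. This is a minimal numpy sketch, not the repository's actual preprocessing code; `normalize_to_unit_range` is a hypothetical name.

```python
import numpy as np

def normalize_to_unit_range(img):
    """Linearly rescale an image to [-1, 1], matching a tanh-output
    generator. Assumes the image is not constant-valued."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * 2.0 - 1.0

x = np.array([[0, 128, 255]], dtype=np.uint8)
y = normalize_to_unit_range(x)
print(y.min(), y.max())  # -1.0 1.0
```

With a torchvision pipeline, the equivalent step after `ToTensor()` (which yields [0, 1]) is typically `transforms.Normalize(0.5, 0.5)`.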
Do you have any suggestions?
Congratulations on solving the previous problem. This is because the project name is defined in the Yaml settings file: the curves are logged under that environment name in visdom, so you need to select the matching environment rather than the default one.
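As a hedged illustration (the exact key names depend on the repository's Yaml file, so treat these as placeholders):

```yaml
# Hypothetical excerpt of the training Yaml settings.
name: my_experiment   # environment name the curves are logged under in visdom
port: 6019            # should match `python -m visdom.server -p 6019`
```

In the visdom web interface, switch the Environment dropdown from `main` to this name to see the curves.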
It worked, thank you very much! Actually, the results are not so good. I am working on ultrasound image translation. Any suggestions on some parameters to be adjusted? I have trained the model for 200 epochs.
I don't know much about ultrasound images, but here are the suggestions I can give: 200 epochs is not the key point. There are many factors to consider, such as the amount of data, the quality of the data, and the inherent difficulty of the translation task (a small dataset and a difficult task mean more epochs are needed; also make sure the two domains you are converting between are independent and identically distributed). You can add the S_R loss curve and observe it in visdom. If the curve keeps falling steadily, generally speaking, there is no problem.
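If judging "falling steadily" by eye is hard, the logged loss values can be smoothed with a moving average and compared over time. This is a generic numpy sketch on synthetic data, not code from the repository, and the window size is an arbitrary assumption.

```python
import numpy as np

def moving_average(values, window=10):
    """Smooth a noisy loss curve with a simple moving average."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")

# Synthetic noisy-but-decreasing loss: smoothing exposes the downward trend.
rng = np.random.default_rng(0)
loss = np.exp(-np.linspace(0, 3, 200)) + 0.05 * rng.standard_normal(200)
smooth = moving_average(loss)
print(smooth[0] > smooth[-1])  # True: the smoothed curve trends down
```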
I am still trying. Thank you very much for your suggestions and help.
I have noticed that there is a "save_deformation" subfunction within CycTrainer.py. What is it for? Also, how can CycTrainer.py be modified so that the results are saved during testing? I'm sorry, I am not very familiar with PyTorch.
Thank you for your time and help.
Kind regards