Closed Hukongtao closed 5 years ago
@Hukongtao Hi, could you be more specific about the dataset and your setting? For example, how many images do you have, and how are the labels distributed? Did you also try lowering the learning rate? Also, training the fg cues is basically a multi-label classification problem. In your case, class_num=2, so it is a binary classification problem: either the image contains the object or it does not. My concern is that all your images might contain the object, leaving you without negative samples. Tell me more about the details, which would help debug your problem.
@ascust Hi, my dataset is about breast cancer, and my goal is to segment the cancer regions. As you said, every image in my dataset contains a cancer region. Maybe I should add some completely normal images. I have also tried reducing the learning rate to 1e-6.
Then you have to consider the balance between positive and negative samples. If every image in your dataset contains the region of interest, it is impossible for the model to infer what the desired regions look like. I am not sure if this is the case.
@ascust I have 4030 images: 2006 contain cancer and 2024 do not. I have run train_bg_cues.py and train_fg_cues.py, and both are fine; the loss keeps falling. But when I run train_SEC.py, I get SEC_seed_loss=nan SEC_constrain_loss=nan SEC_expand_loss=nan. I have tried different learning rates.
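One quick sanity check before train_SEC.py (a sketch, not part of the repo): a single NaN or Inf value in the saved cues or input images is enough to turn every loss into nan, so it is worth failing fast on bad arrays. The function name and usage below are hypothetical; plug in whatever arrays your data pipeline actually loads.

```python
import numpy as np

def assert_finite(arr, name):
    """Fail fast if an array contains NaN or Inf, a common cause of nan losses."""
    arr = np.asarray(arr, dtype=np.float32)
    if not np.isfinite(arr).all():
        raise ValueError(f"{name} contains NaN/Inf values")
    return arr

# Example on a dummy cue array (replace with your actual loaded cues):
cues = np.array([[0.0, 1.0], [1.0, 0.0]])
assert_finite(cues, "fg_cues")  # passes silently when the data is clean
```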
@ascust Do I need to modify other parameters?
@Hukongtao I suggest you dig into the code a little bit. Your dataset seems fine, with balanced samples, and you successfully trained the fg and bg cues. So the problem is only in the SEC model.
First of all, I suggest you have a look at the data producer SECTrainingIter.py, in particular the function _produce_data(self), which shows how the images and labels, including the cues obtained before, are read. You can simply write a script to iterate over the images and cues and see whether the cues are plausible. Normal, plausible cues usually highlight the region of interest.
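A minimal sketch of such an inspection script, assuming each cue can be loaded as a binary HxW mask per image (the loading step itself depends on how SECTrainingIter.py stores the cues, so treat the names here as placeholders):

```python
import numpy as np

def overlay_cues(image, cue_mask, alpha=0.5, color=(255, 0, 0)):
    """Blend a binary cue mask onto an RGB image for visual inspection.
    image: HxWx3 uint8 array; cue_mask: HxW boolean array."""
    out = image.astype(np.float32).copy()
    color = np.array(color, dtype=np.float32)
    out[cue_mask] = (1 - alpha) * out[cue_mask] + alpha * color
    return out.astype(np.uint8)

# Example: a dummy grey image with a fg cue in the top-left corner.
img = np.full((4, 4, 3), 128, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
vis = overlay_cues(img, mask)  # save or display `vis` for each training image
```

If the highlighted regions do not roughly cover the cancer areas, the problem is upstream of train_SEC.py.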
Second, I suggest you try different combinations of losses. Since SEC has three losses (Seed, Expand and Constrain), it is easy to see how each of them affects training. You may start by using only the Seed loss; if that works, add the others back one at a time.
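The ablation above can be sketched as a weighted sum of the three losses; the weight names below are hypothetical and not the repo's actual parameters, but the idea is to zero out Expand and Constrain first and re-enable them one at a time:

```python
def sec_total_loss(seed_loss, expand_loss, constrain_loss,
                   w_seed=1.0, w_expand=0.0, w_constrain=0.0):
    """Combine the three SEC losses with per-term weights.
    Start with only the Seed loss (w_expand = w_constrain = 0);
    if training stays stable, re-enable the other terms one at a time."""
    return (w_seed * seed_loss
            + w_expand * expand_loss
            + w_constrain * constrain_loss)

# Seed-only configuration: only the first term contributes.
loss = sec_total_loss(0.7, 2.3, 1.1)  # equals 0.7
```

Whichever term first makes the total go to nan is the one to debug.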
@ascust Is it possible that there is a problem with my dataset? I have just 2 classes: the cancer area is my object and the normal area is background, so half of the data contains no objects. I also noticed you write 'class_num = conf.CLASS_NUM - 1 # exclude bg class' in your code, but I actually have only one object class, so it does not need classification at all.
@Hukongtao I think it should be fine. The fg and bg cues come from what is basically a binary classification problem: whether the image contains the object or not. class_num for fg and bg is just one, since the label is either 0 or 1, so these are consistent. As I said, I strongly suggest you start from the learnt fg and bg cues and inspect them to see whether they capture the important regions. Then you can narrow the problem down to some very specific issue.
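Besides looking at the cues, a quick numeric check is also possible (a sketch; the function name is made up): measure what fraction of pixels each cue marks. If that fraction is near 0 or near 1 for most images, the cues are almost certainly not capturing the objects.

```python
import numpy as np

def cue_coverage(cue_mask):
    """Fraction of pixels marked as cues in one image.
    Values very close to 0 or 1 across the whole dataset
    suggest the cues are not localizing anything useful."""
    return float(np.asarray(cue_mask, dtype=bool).mean())

# Example: a 4x4 mask where a quarter of the pixels are cues.
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
coverage = cue_coverage(mask)  # 0.25
```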
@ascust How can I know whether they capture the important regions? By visualizing them?
Yes. Just visualize them and see if the regions are plausible.
I just have one object, so my CLASS_NUM=2. After running train_fg_cues.py and train_bg_cues.py, I ran train_SEC.py and got Epoch[1] Batch[10] Speed: 20.38 samples/sec SEC_seed_loss=nan SEC_constrain_loss=nan SEC_expand_loss=nan time=17/01/2019--22:11:03. Why did this happen?