galsk87 opened this issue 4 years ago
Sorry, I am not sure I have understood your question. Here are some explanations that may be helpful. Firstly, you do not need to apply the BCE loss separately, because it is already computed inside the overall loss function "MultiView_all_loss". Secondly, the input "flag" denotes whether an image is labeled (flag == 1 means the image is labeled). All the labeled and unlabeled images in one batch are fed to the loss function together.
I will add my training function so maybe it is clearer. What I'm asking is how to feed the network's outputs to the loss function. This is what I have done, but it does not work, so I'm probably doing it wrong.
```python
# Assumes elsewhere: from itertools import islice; import random; import torch
for batch_idx in range(len(trainloader)):
    try:
        inputs_x, au = next(labeled_train_iter)
    except StopIteration:
        labeled_train_iter = iter(trainloader)
        inputs_x, au = next(labeled_train_iter)
    if epoch > 10:
        try:
            inputs_u = next(unlabeled_train_iter)
            # count = count + 1
        except StopIteration:
            # if 'count' in locals():
            #     print(count)
            # count = 0
            unlabeled_train_iter = iter(unlabledloader)
            # use only a growing fraction of the unlabeled loader
            unlabeled_train_iter = islice(
                unlabeled_train_iter,
                int(len(unlabledloader) * (0.05 * (epoch - 3))))
            inputs_u = next(unlabeled_train_iter)
        if epoch < 25 or int(opt.bs4 * (0.05 * (epoch - 3))) < opt.bs2:
            choosing_indices = random.sample(
                list(range(inputs_u.shape[0])),
                int(opt.bs2 * (0.05 * (epoch - 3))))
            inputs_u = inputs_u[choosing_indices]
    optimizer.zero_grad()
    if use_cuda:
        inputs_x, au = inputs_x.cuda(), au.cuda(non_blocking=True)
        if epoch > 10:
            inputs_u = inputs_u.cuda()
    weight1, bias1, weight2, bias2, feat1, feat2, output1, output2, output = \
        net(inputs_x, gcn_on=False if epoch < 20 else True)
    if epoch > 10:
        with torch.no_grad():
            weight1_u, bias1_u, weight2_u, bias2_u, feat1_u, feat2_u, \
                output1_u, output2_u, output_u = net(inputs_u)
    sup_loss, loss_pred, loss_pred1, loss_pred2, loss_multi_view, loss_similar = \
        au_criterion(au, output, output1, output2, weight1, bias1, weight2, bias2,
                     feat1, feat2, flag=torch.ones([1]).cuda())
    if epoch > 10:
        unsup_loss, loss_pred, loss_pred1, loss_pred2, loss_multi_view, loss_similar = \
            au_criterion(None, output_u, output1_u, output2_u, weight1_u, bias1_u,
                         weight2_u, bias2_u, feat1_u, feat2_u, flag=torch.zeros([1]))
        loss = sup_loss + unsup_loss
    else:
        loss = sup_loss
    loss.backward()
    optimizer.step()
```
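The try/except iterator refill in the loop above is a common pattern when a labeled and an unlabeled loader have different lengths. A minimal self-contained sketch of that pattern (using a plain list as a stand-in for a DataLoader):

```python
# When an iterator is exhausted, StopIteration is caught
# and a fresh iterator is built so the loop can keep going.
data = list(range(5))  # stand-in for a DataLoader with 5 batches

it = iter(data)
batches = []
for _ in range(8):          # more steps than the loader has batches
    try:
        batch = next(it)
    except StopIteration:
        it = iter(data)     # refill and continue from the start
        batch = next(it)
    batches.append(batch)
print(batches)  # [0, 1, 2, 3, 4, 0, 1, 2]
```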
Hello, I think just using the output of the network as the input is okay. The failure may be due to the input "flag": the size of "flag" should match the batch size, rather than being a single number. It is used to denote, for every image in the batch, whether that image is labelled.
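In other words, a mixed batch needs one flag entry per image. A minimal sketch (the batch sizes here are made-up examples, not values from the repo):

```python
import torch

# Hypothetical mixed batch: 16 labeled images followed by 48 unlabeled ones.
bs_labeled, bs_unlabeled = 16, 48

# One flag entry per image: 1 = labeled, 0 = unlabeled.
flag = torch.cat([torch.ones(bs_labeled), torch.zeros(bs_unlabeled)])
print(flag.shape)  # torch.Size([64]) -- same length as the batch
```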
So, in fact, "flag" is a binary tensor marking each example as labeled/unlabeled?
Yes, exactly~
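For readers landing here: a hedged sketch of how such a per-image binary flag can gate a supervised loss term over a mixed batch. All names below (`preds`, `targets`, the per-image BCE) are illustrative stand-ins, not the repo's actual "MultiView_all_loss":

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
preds = torch.randn(4, 12)                    # batch of 4, 12 AU logits each
targets = torch.randint(0, 2, (4, 12)).float()
flag = torch.tensor([1., 1., 0., 0.])         # first two images are labeled

# Per-image BCE, then zero out the unlabeled images via the flag
# and average only over the labeled ones.
per_image_bce = F.binary_cross_entropy_with_logits(
    preds, targets, reduction='none').mean(dim=1)
sup_loss = (per_image_bce * flag).sum() / flag.sum().clamp(min=1)
```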
@nxsEdson Hello, I have another question about the loss function. What's the meaning of self.select_sample? I can't see its definition, but you use this variable directly in your code.
Same question. Have you solved this problem?
Sorry for the late reply. This is a leftover from an earlier experiment that I forgot to clean up. You can just use the default parameter.
Could you please share your training code?
Yes, exactly~
I was literally stuck on this for days, until I saw your comment. Thanks for helping.
What are the expected inputs and outputs? When training with unlabeled data, is the loss called twice? It is not totally clear from the code.
Thanks.