GuoxingYan opened this issue 2 years ago
During training, what is the factor in Equation 12 equal to?
When calculating the match loss, is the method simply to assign the nearest prediction to each label?
Hello Guoxing, do you know how to construct the permutation matrices for model training? It seems we need to first decide the order of the vertices and then build the matrix, right? But how do we decide the order?
@zorzi-s @GuoxingYan @hehongjie I saw that prediction.py, in lines 42, 46 and 53, calls
model.train()
Does this mean the model is in train mode rather than eval mode? Is that right? For evaluation, shouldn't the mode be switched to eval? And if the model is in training mode, how is it trained? Many thanks.
@hehongjie when I set the whole model to eval(), the results are very poor!!
Mean IoU: 0.0008893926959595389
Mean C-IoU: 0.0007304682105452664
As far as I know, if you want to evaluate, the model must be switched to eval mode; if you use train mode, the model can still update, so the results are not trustworthy. This is my limited knowledge; sorry if I am wrong.
@phamkhactu Yes, you are right. You may refer to this answer: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch
@hehongjie As you can see, the validation output differs a lot between eval and train mode, but if I use train mode the model can update, so can that cause wrong predictions? I have another question: with the model design published by your research team, can the model learn to classify the direction?
Like in the image, I expected the 2 polygons to be different. Does the R2U_Net model
learn this?
@phamkhactu @hehongjie Thank you for the interest in our method! We use model.train() during inference in order to force the batch normalization layers to use batch statistics instead of the mean and variance estimated during training. The model weights are not updated during inference. Please also refer to this conversation: link.
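For illustration, a minimal sketch (not the actual prediction.py code; `model` and `loader` are placeholder names) of how running inference in train mode still avoids any weight update:

```python
import torch

model.train()              # BatchNorm layers use the current batch statistics
with torch.no_grad():      # no gradients are computed, so no weights can change
    for images in loader:  # `loader` is a placeholder dataloader
        outputs = model(images)  # forward pass only, no optimizer step
```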
@zorzi-s thank you. As I mentioned in my question above: with the model design published by your research team, can the model learn to classify the direction?
Like in the image, I expected the 2 polygons to be different. Does the R2U_Net model
learn this?
Can your model do that? In my opinion it can, because the model learns the polygon points, so the direction leads to different feature extraction. What do you think? Many thanks.
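For what it's worth, "direction" here is just the winding order of the polygon vertices, which can also be checked directly from the predicted points with the shoelace formula. A minimal sketch, independent of the model, with a helper name of my own:

```python
import numpy as np

def is_counter_clockwise(vertices):
    """Return True if the (N, 2) array of polygon vertices is ordered
    counter-clockwise, using the signed area from the shoelace formula.
    Note: in image coordinates (y pointing down) the sign is inverted.
    """
    x, y = vertices[:, 0], vertices[:, 1]
    signed_area = 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)
    return signed_area > 0
```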
Has anyone figured out how to calculate the permutation matrix during training using the Sinkhorn algorithm?
@GuoxingYan have you figured out how to calculate the partial assignment during training using the Sinkhorn algorithm?
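In case it helps the discussion, here is a sketch of a SuperGlue-style log-domain Sinkhorn with dustbins, which is one common way to obtain a soft partial assignment during training. This is not PolyWorld's actual training code; the function name, marginals, and iteration count are assumptions.

```python
import torch

def log_sinkhorn(scores, alpha, iters=100):
    """Sinkhorn normalization in log space on a score matrix augmented
    with a dustbin row and column (SuperGlue-style).

    scores: (M, N) tensor of pairwise matching scores
    alpha:  scalar tensor, learnable dustbin score
    Returns an (M + 1, N + 1) log partial-assignment matrix; the dustbins
    absorb points that stay unmatched.
    """
    M, N = scores.shape
    # Augment the score matrix with a dustbin row and column.
    bins0 = alpha.expand(M, 1)
    bins1 = alpha.expand(1, N)
    corner = alpha.expand(1, 1)
    couplings = torch.cat([torch.cat([scores, bins0], dim=1),
                           torch.cat([bins1, corner], dim=1)], dim=0)

    # Target marginals: every real point carries mass 1, each dustbin
    # absorbs the points of the other set that remain unmatched.
    log_mu = torch.cat([torch.zeros(M), torch.tensor([N], dtype=torch.float).log()])
    log_nu = torch.cat([torch.zeros(N), torch.tensor([M], dtype=torch.float).log()])

    # Alternating row/column normalization in log space.
    u = torch.zeros_like(log_mu)
    v = torch.zeros_like(log_nu)
    for _ in range(iters):
        u = log_mu - torch.logsumexp(couplings + v[None, :], dim=1)
        v = log_nu - torch.logsumexp(couplings + u[:, None], dim=0)
    return couplings + u[:, None] + v[None, :]

# Example usage with random scores:
scores = torch.randn(30, 32)
alpha = torch.nn.Parameter(torch.tensor(1.))
P = log_sinkhorn(scores, alpha).exp()  # soft partial assignment matrix
```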
@zorzi-s How can we confirm the originality of your paper if you don't provide your training script?
Do we have the repo for the new paper, Re:PolyWorld, @zorzi-s ? https://openaccess.thecvf.com/content/ICCV2023/papers/Zorzi_RePolyWorld_-_A_Graph_Neural_Network_for_Polygonal_Scene_Parsing_ICCV_2023_paper.pdf
Hi @zorzi-s, thank you very much for sharing your interesting and insightful work! I have a few questions I'd like to ask you.
When calculating the match loss, does the ground truth need to be in one-to-one correspondence with the predictions? That is, is the dustbin from SuperGlue used? And can the training loss be obtained by referring to the following code?

```python
import torch
from scipy.spatial.distance import cdist

dists = cdist(kp1_projected, kp2_np)

loss = []
for i in range(len(all_matches[0])):
    x = all_matches[0][i][0]
    y = all_matches[0][i][1]
    # scores holds log-probabilities, so -log(exp(s)) is simply -s
    loss.append(-torch.log(scores[0][x][y].exp()))  # check batch size == 1?
```
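For completeness, a sketch of how a SuperGlue-style loss could also cover the dustbins for unmatched points; the function name and arguments are mine, not from the PolyWorld repository.

```python
import torch

def matching_nll_loss(log_assignment, matches, unmatched0, unmatched1):
    """Negative log-likelihood over an (M + 1, N + 1) log partial-assignment
    matrix. Matched pairs should land on their cell, unmatched points on
    the dustbin row/column (index -1).
    """
    terms = [-log_assignment[i, j] for i, j in matches]
    terms += [-log_assignment[i, -1] for i in unmatched0]   # dustbin column
    terms += [-log_assignment[-1, j] for j in unmatched1]   # dustbin row
    return torch.stack(terms).mean()
```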