Open rose-jinyang opened 3 years ago
That's right, it has a problem. And here's my code
Could you please explain the purpose/reasoning behind the for-loop in the code you showed in the screenshot? I think rose's solution makes more sense: the code you provided only works when the batch size is set to 2 and fails for other batch sizes, since the two tensors passed into moving_average() need to have the same dimensions. I think the correct solution should be something like the one below, which is essentially rose's version. Let me know what you think, thanks!
soft_parsing = soft_preds[0][-1]
soft_edges = soft_preds[1][-1]
soft_preds = soft_parsing
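To illustrate why this indexing is batch-size independent, here is a minimal sketch. It assumes soft_preds is a pair (parsing outputs, edge outputs), each a list whose last element is the final-stage prediction; that structure is inferred from this thread, not confirmed against the repository, and the shapes below are made up for the demo.

```python
import numpy as np

def extract_final_preds(soft_preds):
    """Pick the final parsing and edge predictions.

    Assumed structure (inferred from the thread, not confirmed):
    soft_preds = (parsing_outputs, edge_outputs), each a list of
    arrays whose last element is the final-stage output.
    """
    soft_parsing = soft_preds[0][-1]  # final parsing map
    soft_edges = soft_preds[1][-1]    # final edge map
    return soft_parsing, soft_edges

# The indexing touches only the list of stages, never the batch
# dimension, so any batch size works -- e.g. 4 here:
batch = 4
parsing_outputs = [np.zeros((batch, 20, 96, 96)), np.ones((batch, 20, 96, 96))]
edge_outputs = [np.zeros((batch, 2, 96, 96)), np.ones((batch, 2, 96, 96))]

sp, se = extract_final_preds((parsing_outputs, edge_outputs))
print(sp.shape)  # (4, 20, 96, 96)
print(se.shape)  # (4, 2, 96, 96)
```

Because no tensor of a fixed batch size is paired with another, nothing here breaks when the batch size changes, which is the failure mode the original for-loop ran into with moving_average().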
Thank you for the method you provided, but when I trained on the LIP data this way, the mIoU did not reach the 58.62 reported in the author's paper; I only got 57.98 mIoU. Do you have a better method?
Are your hyperparameter settings, such as batch size and learning rate, the same as the author's?
Hello, how are you? Thanks for contributing to this project. I am training a new model with your project, but I think there is an issue in the part of the "train.py" script below.
What about revising this part as follows?