GoGoDuck912 / Self-Correction-Human-Parsing

An out-of-box human parsing representation extractor.
MIT License

Using soft prediction #62

Open rose-jinyang opened 3 years ago

rose-jinyang commented 3 years ago

Hello, how are you? Thanks for contributing to this project. I am training a new model with your project, but I think there is an issue in the part of the "train.py" script shown below.

[screenshot: the current soft-prediction code in train.py]

What about revising this part as follows?

[screenshot: proposed revision]

KudoKhang commented 2 years ago
That's right, it has a problem. And here's my code:

[screenshot: KudoKhang's code, which loops over the predictions]

jackylu0124 commented 2 years ago
> [screenshot: KudoKhang's code]
>
> That's right, it has a problem. And here's my code

Could you please explain the purpose/reasoning behind the for-loop in the code you showed in the screenshot? I think rose's solution makes more sense: the code you provided only works when the batch size is set to 2 and fails for other batch sizes, since the two tensors passed into moving_average() need to have the same dimensions. I think the correct solution should be something like the one below, which is essentially rose's version. Let me know what you think, thanks!

```python
soft_parsing = soft_preds[0][-1]
soft_edges = soft_preds[1][-1]
soft_preds = soft_parsing
```
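As a sanity check, here is a minimal, framework-free sketch of that indexing. The nested structure of soft_preds (a pair of [parsing outputs, edge outputs], each a per-stage list whose last entry is the final prediction) is an assumption inferred from this thread, not verified against the repository, and extract_soft_predictions is a hypothetical helper name:

```python
# Assumed structure (inferred from the thread, not verified against the
# repo): soft_preds = [parsing_outputs, edge_outputs], where each element
# is a list of per-stage outputs and the last entry is the final one.
def extract_soft_predictions(soft_preds):
    """Pick the final-stage parsing and edge predictions."""
    soft_parsing = soft_preds[0][-1]   # final parsing prediction
    soft_edges = soft_preds[1][-1]     # final edge prediction
    return soft_parsing, soft_edges

# Stand-ins for the per-stage tensors; note that nothing here depends on
# the batch size, unlike the for-loop variant discussed above.
parsing_stages = ["parsing_stage_0", "parsing_stage_final"]
edge_stages = ["edge_stage_0", "edge_stage_final"]

p, e = extract_soft_predictions([parsing_stages, edge_stages])
print(p, e)  # parsing_stage_final edge_stage_final
```

Because the last-stage outputs are selected directly by index, this works for any batch size, which is the point jackylu0124 raises about the for-loop version.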
duACGN commented 1 year ago
> @jackylu0124's comment above, quoted in full

Thank you for the method you provided, but when I trained on the LIP dataset this way, the mIoU did not reach the 58.62 reported in the author's paper; I got only 57.98. Do you have a better method?

MrAriten commented 1 year ago
> @duACGN's comment above, quoted in full

Are your hyperparameter settings, such as batch size and learning rate, the same as the author's?