bermanmaxim / LovaszSoftmax

Code for the Lovász-Softmax loss (CVPR 2018)
http://bmax.im/LovaszSoftmax
MIT License

TensorFlow implementation is different from the PyTorch version #24

Open pdoublerainbow opened 5 years ago

pdoublerainbow commented 5 years ago

Hi, thanks for your work. While reading the code, I noticed a possible problem. This is the TensorFlow version: `tf.tensordot(errors_sorted, tf.stop_gradient(grad), 1, name="lossclass{}".format(c))`, and this is the PyTorch version: `loss = torch.dot(F.relu(errors_sorted), Variable(grad))`. The TensorFlow version has no nonlinearity, but the PyTorch version does. I have no idea which one is right, or which one is better. One more question: if I want to understand submodularity thoroughly, what should I do? Do you have any links or books to recommend? Thanks!
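For what it's worth, here is a minimal numerical sketch in PyTorch, with made-up values for `errors_sorted` and `grad`, that only illustrates what the extra `F.relu` changes in the two snippets quoted above; it is not taken from the repository and makes no claim about which version is the intended one.

```python
import torch
import torch.nn.functional as F

# Hypothetical sorted errors: negative entries would come from examples
# that are already "better than required" (e.g. hinge margin > 1).
errors_sorted = torch.tensor([2.3, 0.7, -0.1, -1.5])

# Hypothetical non-negative Lovasz gradient weights (as lovasz_grad would produce).
grad = torch.tensor([0.5, 0.3, 0.15, 0.05])

# PyTorch-snippet style: clamp the errors at zero before the weighted sum,
# so negative errors cannot push the loss down.
loss_with_relu = torch.dot(F.relu(errors_sorted), grad)

# TensorFlow-snippet style (no nonlinearity): negative errors are weighted too
# and subtract from the loss.
loss_without_relu = torch.dot(errors_sorted, grad)

print(loss_with_relu.item())     # 1.36
print(loss_without_relu.item())  # 1.27
```

So the only behavioral difference between the two quoted lines is whether negative sorted errors contribute (negatively) to the weighted sum or are clamped to zero first.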

pdoublerainbow commented 5 years ago

I have figured out the mistake I made, so you only need to answer the second question now. It was a silly mistake; I'm sorry about that.