chaos-moon / paper_daily

One paper a day, keep laziness away.

Flooding Trick #4

Open zc12345 opened 1 year ago

zc12345 commented 1 year ago

Do We Need Zero Training Loss After Achieving Zero Training Error?

arXiv: 2002.08709

Idea

(Figure from the paper illustrating the flooding idea: instead of driving the training loss all the way to zero, it is kept floating around a small constant flood level b, which the authors show improves test performance.)
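
Written out, the flooded objective from the paper is the original training loss J(θ) kept at or above the flood level b > 0:

\tilde{J}(\theta) = \lvert J(\theta) - b \rvert + b

When J(θ) > b this equals J(θ), so the update is ordinary gradient descent; when J(θ) < b it equals 2b - J(θ), so the update performs gradient ascent, which keeps the training loss floating around b.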

Implementation

Code:

outputs = model(inputs)
loss = criterion(outputs, labels)
flood = (loss - b).abs() + b  # This is it! b is the flood level, a small positive constant
optimizer.zero_grad()
flood.backward()  # gradient descent when loss > b, gradient ascent when loss < b
optimizer.step()
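
As a sanity check, here is a minimal self-contained training loop around that snippet; the toy model, random data, and the flood level b = 0.03 are placeholder assumptions, not values from the paper. The flooding line is the only change relative to a standard loop.

import torch
import torch.nn as nn

# toy setup: a linear classifier on random data, just to show where flooding goes
model = nn.Linear(20, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
b = 0.03  # flood level (hyperparameter; this value is only an example)

inputs = torch.randn(64, 20)
labels = torch.randint(0, 2, (64,))

for epoch in range(10):
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    flood = (loss - b).abs() + b  # flooding step
    optimizer.zero_grad()
    flood.backward()
    optimizer.step()
    print(epoch, loss.item())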

Choosing the value of b

The flood level b is a hyperparameter; in the paper it is chosen by trying a small grid of candidate values and keeping the one with the best validation performance. A sketch of that selection loop follows.
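
A minimal sketch of the selection procedure, assuming a train_and_evaluate(b) helper that trains with flooding at level b and returns validation accuracy; the helper stub and the candidate grid below are hypothetical, not taken from the paper.

# hypothetical stand-in for "train with flood level b, return validation accuracy";
# stubbed out here so the sketch runs, replace with real training + validation
def train_and_evaluate(b: float) -> float:
    return 0.0

candidate_bs = [0.00, 0.01, 0.02, 0.05, 0.10]  # example grid, not the paper's exact values
scores = {b: train_and_evaluate(b) for b in candidate_bs}
best_b = max(scores, key=scores.get)
print("best flood level:", best_b)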

yaoyz96 commented 1 year ago

👍 Will try this out in a real application when I get the chance and see how well it works.