zc12345 opened this issue 1 year ago
```python
outputs = model(inputs)
loss = criterion(outputs, labels)
flood = (loss - b).abs() + b  # This is it!
optimizer.zero_grad()
flood.backward()
optimizer.step()
```
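The snippet above assumes a standard PyTorch training step. The effect of the `(loss - b).abs() + b` trick can be shown without any framework: when the loss is above the flood level `b` the gradient is unchanged, and when it drops below `b` the gradient sign flips, pushing the loss back up toward `b`. A minimal toy sketch (the quadratic loss and the step count are illustrative choices, not from the paper):

```python
# Toy demonstration of flooding: gradient descent on loss(w) = w**2
# with flood level b. Without flooding the loss goes to ~0; with
# flooding it settles into a band around b instead.
def loss_fn(w):
    return w * w

def grad_loss(w):
    return 2 * w

def train(w, b, lr=0.1, steps=200):
    for _ in range(steps):
        loss = loss_fn(w)
        # Chain rule through flood = |loss - b| + b: the gradient
        # w.r.t. w keeps its sign above b and flips sign below b.
        sign = 1.0 if loss >= b else -1.0
        w = w - lr * sign * grad_loss(w)
    return w

w_plain = train(1.0, b=0.0)   # flooding off: loss driven to ~0
w_flood = train(1.0, b=0.05)  # flooding on: loss hovers near b
```

With `b=0.05`, the weight oscillates around `sqrt(0.05)`, so the training loss bounces in a band around the flood level rather than collapsing to zero, which is exactly the behavior the paper argues helps generalization.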
👍 Will try this out in practice when I get the chance and see how well it works.
Do We Need Zero Training Loss After Achieving Zero Training Error?
arXiv:2002.08709
Idea
Implementation
Code
Choosing the flood level b
In practice, we can search for the optimal flood level by performing the exhaustive search in parallel.
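The exhaustive search mentioned above can be sketched as a plain grid search: train one model per candidate flood level and keep the value with the lowest validation loss. Here `train_and_eval` is a hypothetical stand-in for a full training run (replaced by a toy surrogate so the sketch is self-contained):

```python
# Hypothetical sketch of picking b by exhaustive search.
# `train_and_eval` stands in for training a model with flood level b
# and returning its validation loss; this toy surrogate is minimised
# at b = 0.03 purely for illustration.
def train_and_eval(b):
    return (b - 0.03) ** 2 + 0.5  # toy validation loss

candidates = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05]  # flood levels to try
best_b = min(candidates, key=train_and_eval)
```

Since each candidate's training run is independent of the others, the calls to `train_and_eval` can be dispatched in parallel (e.g. one worker per candidate), which is what makes the exhaustive search practical.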