dtiny opened this issue 5 years ago
I ran into the same problem.
same problem +1
Have you found the inference code?
Same problem here +1
The code has problems:
- The loss is written incorrectly, in the hard example mining part.
- The gray regions are also never used in the loss.
Fix point 1; if that still doesn't work, also lower the initial learning rate. Point 2 is optional.
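For anyone trying to act on point 1: below is a minimal, generic sketch of what hard negative mining with an ignore ("gray") mask usually looks like in PyTorch. All names and shapes here are illustrative; this is not the repository's actual loss code.

import torch

def cls_loss_with_ohem(pred_logits, labels, gray_mask, neg_ratio=3):
    # pred_logits: (N, 2, H, W) raw scores; labels: (N, H, W) long,
    # 1 = positive, 0 = negative; gray_mask: (N, H, W) bool, True where
    # the pixel belongs to a "gray" region and must be excluded entirely.
    ce = torch.nn.functional.cross_entropy(pred_logits, labels, reduction="none")

    pos = (labels == 1) & ~gray_mask
    neg = (labels == 0) & ~gray_mask  # point 2: gray pixels never enter the loss

    num_pos = int(pos.sum().item())
    num_neg = min(int(neg.sum().item()), max(num_pos * neg_ratio, 1))

    # Point 1: mine only the highest-loss negatives.
    hard_neg, _ = ce[neg].topk(num_neg)

    # Guard the denominator so an empty batch cannot divide by zero (NaN).
    denom = max(num_pos + num_neg, 1)
    return (ce[pos].sum() + hard_neg.sum()) / denom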
Hello, could you explain exactly how to change it? Thanks!
Has anyone solved this problem?
Change it to:
torch.ones_like(pred_score_softmax[:, 1, :, :]).add(1))
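A note on where this one-liner might fit (a guess from context, not code taken from the repository): the unmatched trailing parenthesis suggests it is the fill value inside a torch.where call applied before negative mining. Raising the fill from 1 to 2 keeps masked-out positions strictly above any real softmax probability, so they can never be picked as hard negatives.

import torch

# Dummy tensors so the snippet runs standalone; shapes are illustrative.
pred_score_softmax = torch.rand(4, 2, 20, 20).softmax(dim=1)
negative_mask = torch.rand(4, 20, 20) > 0.5  # hypothetical: True at real negatives

class1_prob = pred_score_softmax[:, 1, :, :]
prob_for_mining = torch.where(
    negative_mask,
    class1_prob,
    torch.ones_like(class1_prob).add(1))  # fill value 2.0, per the fix above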
Making this change still produces NaN.
The same problem occurs during training.
@becauseofAI Any suggestions?
Did anyone find any solution?
Anyone found the solution to this problem?
I'm lost for words; it feels like this code was released specifically to trip people up.
Try reducing the learning rate (variable name: param_learning_rate) to 0.01 in the configuration file. If you are using V2, that file is configuration_10_320_20L_5scales_v2.py. This worked for me; training ran stably for 2,000,000 iterations. EDIT: I see that user 120276215 already gave the same advice, so credit to them.
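For reference, the change described above is a one-line edit to the configuration file. The excerpt below is reconstructed from this comment; nothing else in the file needs to change:

# configuration_10_320_20L_5scales_v2.py (illustrative excerpt)
param_learning_rate = 0.01  # lowered initial learning rate to avoid NaN divergence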
Provided code: python configuration_10_320_20L_5scales_v2.py
Provided data: widerface_train_data_gt_8.pkl
At the beginning, the training loss converges normally; at around iteration 3400, the loss diverges to NaN.
How can this problem be solved?
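When the loss diverges like this, a common first debugging step (a generic sketch, not part of this repository) is to stop as soon as the loss turns non-finite, before the optimizer step propagates the corruption into the weights:

import torch

def check_finite(loss, step):
    # Raise immediately if the loss is NaN/Inf so the offending batch
    # and iteration can be inspected before the weights are corrupted.
    if not torch.isfinite(loss):
        raise RuntimeError(f"loss became non-finite at iteration {step}: {loss.item()}")

# Inside the training loop (names are illustrative):
#   loss = criterion(outputs, targets)
#   check_finite(loss, iteration)
#   loss.backward()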