Open zygmcc opened 5 years ago
On resnet18 I fell sad
Hello, I don't think this is an error in visdom. First, I repeated training resnet18 with batch_size=16 three times and found the results unstable. So my guesses for the cause are: 1. The training set is small, and it is shuffled; with batch_size=16 each batch may be too small. 2. resnet18 is not robust enough. After I changed batch_size to 64, training became relatively stable. So your "bug" is just training noise.
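The point about small batches being noisy can be illustrated without any training at all: the standard deviation of a batch mean shrinks roughly as 1/sqrt(batch_size), so per-batch loss/accuracy estimates at batch_size=16 fluctuate more than at 64. A minimal sketch with synthetic per-sample losses (no PyTorch needed, the numbers are made up):

```python
import random
import statistics

random.seed(0)

# Synthetic per-sample "losses": larger batches average over more samples,
# so the spread of per-batch means is smaller (scales ~ 1/sqrt(batch_size)).
population = [random.gauss(0.5, 0.2) for _ in range(10000)]

def batch_means(data, bs):
    """Mean of each consecutive batch of size bs."""
    return [statistics.mean(data[i:i + bs]) for i in range(0, len(data), bs)]

std16 = statistics.stdev(batch_means(population, 16))
std64 = statistics.stdev(batch_means(population, 64))
# std64 comes out noticeably smaller than std16
```

This is consistent with the observation above that the curves look "unstable" at batch_size=16 but smooth out at 64.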
@zygmcc would you mind sharing how you solved the `tensor(acc=0.985, dtype=...) is not JSON serializable` problem?
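That error usually means a scalar `torch.Tensor` was passed where visdom expects a plain Python number: `json.dumps` cannot serialize tensors, so they must be converted with `.item()` (or `float()`) before plotting. A minimal sketch of the failure and the fix, using a tiny stand-in class so it runs without torch installed (the `.item()` interface mirrors a real tensor's):

```python
import json

class FakeTensor:
    """Stand-in for a scalar torch.Tensor; only mimics .item()."""
    def __init__(self, value):
        self.value = value

    def item(self):
        # A real tensor's .item() also returns a plain Python number.
        return self.value

acc = FakeTensor(0.985)

# Passing the tensor-like object directly fails, just like in the issue.
try:
    json.dumps({"acc": acc})
    serializable = True
except TypeError:
    serializable = False  # reproduces "is not JSON serializable"

# Converting to a Python float first makes the payload serializable.
payload = json.dumps({"acc": acc.item()})
```

So in `main.py`, plotting `acc.item()` instead of the raw tensor should avoid the error.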
About main.py, line 229, in this part:

```python
vis.plot_many_stack({'train_loss': train_loss.value()[0],
                     'val_loss': val_loss.value()[0]}, win_name="Loss")
vis.plot_many_stack({'train_acc': train_acc.value()[0],
                     'val_acc': val_acc.value()[0]}, win_name='Acc')
```
When training on GPU, the values added to `train_acc` and `val_acc` should be moved to host memory with `.cpu()` first. So I changed the source code like this:

```python
if opt.use_gpu:
    val_acc.add(v_accuracy.cpu())
else:
    val_acc.add(v_accuracy)
```

and

```python
if opt.use_gpu:
    train_acc.add(epoch_acc.cpu())
else:
    train_acc.add(epoch_acc)
```
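Since the same guard appears twice, one option is to factor it into a small helper. A minimal sketch, with stand-in classes for the meter and the GPU tensor so it runs without torch (assumption: the real code uses torchnet-style meters and CUDA tensors, as in main.py):

```python
class ListMeter:
    """Tiny stand-in for a torchnet AverageValueMeter: just records values."""
    def __init__(self):
        self.values = []

    def add(self, value):
        self.values.append(value)

class FakeGpuTensor:
    """Stand-in for a CUDA tensor; .cpu() returns a host-side copy."""
    def __init__(self, v, on_gpu=True):
        self.v = v
        self.on_gpu = on_gpu

    def cpu(self):
        return FakeGpuTensor(self.v, on_gpu=False)

def add_to_meter(meter, value, use_gpu):
    """Move a GPU tensor to host memory before feeding the meter."""
    if use_gpu:
        value = value.cpu()
    meter.add(value)
```

With this helper the two sites in main.py would collapse to `add_to_meter(val_acc, v_accuracy, opt.use_gpu)` and `add_to_meter(train_acc, epoch_acc, opt.use_gpu)` (hypothetical name `add_to_meter`, not part of the original code).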