Open kkkkkkkb opened 6 months ago
Hi, according to the graph you provided, the loss decreases over epochs 0-50 and increases over epochs 50-100; 100 epochs make up one cycle. The loss will decrease again over epochs 100-150, so this trend is normal: the scheduler is set to 'CosineAnnealingLR' by default.
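To see why a 100-epoch cycle falls out of this scheduler, here is a small sketch of the closed-form learning rate that PyTorch's `CosineAnnealingLR` follows (the base LR of 0.01, `eta_min` of 1e-5, and `T_max` of 50 are assumed values for illustration, not taken from the repo's config):

```python
import math

def cosine_lr(epoch, base_lr=0.01, eta_min=1e-5, T_max=50):
    """Closed form of CosineAnnealingLR: LR traces a cosine between
    base_lr and eta_min with a full period of 2 * T_max epochs."""
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / T_max))

lrs = [cosine_lr(e) for e in range(101)]
# With T_max=50 the LR falls from base_lr to eta_min over epochs 0-50,
# then rises back to base_lr over epochs 50-100 -- matching a loss curve
# that dips and then climbs within each 100-epoch cycle.
```

As the LR climbs back toward its maximum in the second half of the cycle, larger update steps can push the loss up again, which is the behavior seen in the graph.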
What are the results you are referring to?
Thanks, so this is a normal result after all. The loss jittered so much at the beginning that I couldn't make sense of it.
Some feedback for the author: on ISIC2017 I reproduced the same results as in your paper, excellent work! But on my own dataset, after running the full 300 epochs, the dice was only 0.02. Mine is a multi-class task, so there may still be problems in my code, including the dataset preprocessing.
How do I display tensorboard? The code in train doesn't call any related functions~ I modified it following train.py in vmunet, but the results don't look right. Here is what I changed:
train.py:
engine.py: added a writer parameter to the train_one_epoch function, and added the line writer.add_scalar('loss', loss, epoch) inside it
The final result is shown in the figure below.