Ma-Dan / keras-yolo4

A Keras implementation of YOLOv4 (Tensorflow backend)
MIT License
381 stars · 175 forks

What is the loss after training #7

Open · lanyufei opened this issue 4 years ago

lanyufei commented 4 years ago

I trained with my own dataset, and the loss seemed high and converged slowly.

Ma-Dan commented 4 years ago

If you are training from scratch without loading pretrained weights, please unfreeze the earlier layers.
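In Keras, "unfreezing" means setting `trainable = True` on each layer and recompiling. A minimal sketch of the pattern, using a tiny stand-in model (the actual model and training script in this repo may differ):

```python
from tensorflow import keras

# Tiny stand-in model purely for illustration; in practice this would be
# the YOLOv4 model built by the repo's training script.
model = keras.Sequential([
    keras.layers.Dense(8, input_shape=(4,)),
    keras.layers.Dense(2),
])

# Training scripts in this family of repos typically freeze the backbone
# layers by default. When training from scratch (no pretrained weights),
# make every layer trainable again:
for layer in model.layers:
    layer.trainable = True

# Recompile so the changed trainable flags take effect.
model.compile(optimizer="adam", loss="mse")
```

Note that freezing/unfreezing only takes effect after `compile()` is called again.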

lanyufei commented 4 years ago

I did load the pretrained weights. The final loss at early stopping was over 30, while on YOLOv3 the loss after training was around 17. Is this normal?

Augenstern-yzh commented 4 years ago

I ran into the same problem when training: the loss is very high, higher than the loss I got with YOLOv3, and it decreases very slowly. What could be the reason?

robisen1 commented 4 years ago

> I trained with my own data set, and loss felt high and converged slowly

@lanyufei did you have to use a very small batch size when training on your own dataset? I had to use 4 even though I am using an RTX 2080 Ti Founders Edition GPU with 11 GB of VRAM. Doing the same on qqwweee's implementation, which this borrows from, I could use 16 when training from scratch. Any idea what's using up all the memory?

iliask97 commented 4 years ago

> @lanyufei did you have to use a very small batch size when training on your own dataset? I had to use 4 even though I am using an RTX 2080 Ti Founders Edition GPU with 11 GB of VRAM. Doing the same on qqwweee's implementation, which this borrows from, I could use 16 when training from scratch. Any idea what's using up all the memory?

qqwweee's implementation is YOLOv3, not v4. YOLOv4 is a much bigger network and requires a lot more VRAM to run. In the GitHub repository of the original YOLO there is a section which tells you what changes to make, according to your GPU memory, to train it faster.
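For the darknet side, the memory knobs live in the `[net]` section of the `.cfg` file. A sketch with illustrative values (the exact numbers depend on your GPU):

```ini
[net]
# If training runs out of GPU memory: raise subdivisions (each batch of
# `batch` images is processed in batch/subdivisions mini-batches), or
# lower the input resolution.
batch=64
subdivisions=32   ; try 16/32/64 depending on available VRAM
width=416         ; smaller width/height also reduce memory use
height=416        ; must stay a multiple of 32
```

In a Keras port like this one there is no `subdivisions` setting, so the equivalent levers are reducing `batch_size` (as done above) or the input shape.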