cfzd / Ultra-Fast-Lane-Detection

Ultra Fast Structure-aware Deep Lane Detection (ECCV 2020)
MIT License

Fine-tuning model #53

Closed AlexKaravaev closed 4 years ago

AlexKaravaev commented 4 years ago

I'm trying to fine-tune on CULane data (just as a check), but it seems the model I'm loading for fine-tuning has essentially random weights: in the first epochs the loss is very large and the performance is very poor.

Config

# DATA
dataset='CULane'
data_root='/home/robot/Downloads/driver_193_90frame-002/driver_193_90frame/06042010_0511.MP4'

# TRAIN
epoch = 80
batch_size = 8
optimizer = 'SGD'  #['SGD','Adam']
learning_rate = 0.1
weight_decay = 1e-4
momentum = 0.9

scheduler = 'multi' #['multi', 'cos']
steps = [25,38]
gamma  = 0.1
warmup = 'linear'
warmup_iters = 695

# NETWORK
use_aux = False
griding_num = 200
backbone = '18'

# LOSS
sim_loss_w = 0.0
shp_loss_w = 0.0

# EXP
note = ''

log_path = './logs/'

# FINETUNE or RESUME MODEL PATH
finetune = '/home/robot/dev/DeepRC/weights/culane_18.pth'
resume = None

# TEST
test_model = None
test_work_dir = None
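
For reference, a config like this is run through the repo's training entry point; assuming the file above is saved as configs/culane.py, the launch would be something like:

python train.py configs/culane.py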

One more strange thing I can't explain: the .pth files saved locally by train.py are 356 MB, while the provided culane_18.pth is 178 MB. Why is it smaller by a factor of two?

cfzd commented 4 years ago

@AlexKaravaev Fine-tuning actually only takes the weights of the backbone into consideration; the weights of the classifier are still randomly initialized, so the loss can be large at first. If you want to fine-tune the model including the classifier weights as well, you can change the code here https://github.com/cfzd/Ultra-Fast-Lane-Detection/blob/dadb937b01ea58428172eaa479c528c187314cdc/train.py#L127-L128 to if 'model' in k or 'cls' in k:
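
For context, the loading step around the linked lines looks roughly like the sketch below. Only the if condition is quoted from the reply above; the surrounding names (state_all, state_clip, net) are assumptions about the linked file, shown here with the suggested change already applied:

import torch

def load_finetune_weights(net, finetune_path):
    # Hedged sketch of the fine-tune loading in train.py (the linked
    # L127-L128); everything except the 'if' condition is an assumption.
    state_all = torch.load(finetune_path, map_location='cpu')['model']
    state_clip = {}
    for k, v in state_all.items():
        # Original condition kept only backbone weights: if 'model' in k:
        # Suggested condition also keeps the classifier weights:
        if 'model' in k or 'cls' in k:
            state_clip[k] = v
    # strict=False so keys that were filtered out are simply skipped
    net.load_state_dict(state_clip, strict=False)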

For the size of the model: the saved model files include the weights of the backbone, the classification branch, the auxiliary segmentation branch, and the optimizer state. culane_18.pth has the auxiliary segmentation branch and the optimizer state removed, so it is smaller.
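
A minimal sketch of how such a slimmed checkpoint could be produced, assuming the auxiliary branch keys contain 'aux' (that key filter and the file names are hypothetical, for illustration only):

import torch

# Hypothetical: shrink a training checkpoint the same way the released
# culane_18.pth was shrunk, by dropping the auxiliary segmentation
# weights and not re-saving the optimizer state.
ckpt = torch.load('ep079.pth', map_location='cpu')
slim = {k: v for k, v in ckpt['model'].items() if 'aux' not in k}
torch.save({'model': slim}, 'culane_18_slim.pth')  # optimizer state dropped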

AlexKaravaev commented 4 years ago

Thank you! It works now, even on my own dataset, and the results are impressive. Cheers for the work you've done!