hunglc007 / tensorflow-yolov4-tflite

YOLOv4, YOLOv4-tiny, YOLOv3, YOLOv3-tiny Implemented in Tensorflow 2.0, Android. Convert YOLO v4 .weights tensorflow, tensorrt and tflite
https://github.com/hunglc007/tensorflow-yolov4-tflite
MIT License

Unused kernel regularizers during training #369

Open klekass opened 3 years ago

klekass commented 3 years ago

In the model, L2 kernel regularizers are defined:

conv = tf.keras.layers.Conv2D(filters=filters_shape[-1],
        kernel_size=filters_shape[0],
        strides=strides, padding=padding,
        use_bias=not bn,
        kernel_regularizer=tf.keras.regularizers.l2(0.0005),
        kernel_initializer=tf.random_normal_initializer(stddev=0.01),
        bias_initializer=tf.constant_initializer(0.))(input_layer)

However, during training the loss is computed manually with a gradient tape (instead of Keras's model.fit()), using only these three loss terms:

total_loss = giou_loss + conf_loss + prob_loss

Are we missing the regularizer loss here? I tested this by setting kernel_regularizer=None, which produced exactly the same total loss, so the penalty is apparently never applied. I suggest adding the regularizer loss manually:

def regularizer_loss(model):
    """Sum the kernel regularizer penalties of all layers that define one."""
    loss = 0.0
    for layer in model.layers:
        if getattr(layer, "kernel_regularizer", None) is not None:
            loss += layer.kernel_regularizer(layer.kernel)
    return loss

total_loss = giou_loss + conf_loss + prob_loss + regularizer_loss(model)
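As an alternative to the explicit loop, Keras already collects each layer's regularizer penalty in model.losses, so summing that list should give the same value. A minimal sketch on a hypothetical toy model (not the actual YOLOv4 graph), assuming the regularized layers were built with kernel_regularizer set as above:

```python
import tensorflow as tf

# Hypothetical toy model for illustration only.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(
    filters=8, kernel_size=3,
    kernel_regularizer=tf.keras.regularizers.l2(0.0005))(inputs)
model = tf.keras.Model(inputs, x)

# Manual sum, as in regularizer_loss() above.
manual = sum(
    layer.kernel_regularizer(layer.kernel)
    for layer in model.layers
    if getattr(layer, "kernel_regularizer", None) is not None)

# Keras-tracked penalties: one tensor per regularized layer.
tracked = tf.add_n(model.losses)

# The two agree up to floating-point rounding.
print(float(manual), float(tracked))
```

So inside the gradient tape one could equivalently write total_loss = giou_loss + conf_loss + prob_loss + tf.add_n(model.losses), which also picks up bias and activity regularizers if any are ever added.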