tensorflow / models

Models and examples built with TensorFlow

question of hard_example_mining_step in loss #8958

Open cchenzhou opened 4 years ago

cchenzhou commented 4 years ago

Hi, I am trying to train a new model for semantic segmentation using our own dataset. At first I used add_softmax_cross_entropy_loss_for_each_scale without hard_example_mining_step, top_k_percent_pixels, and loss_weight, and the result was good. To improve the model further I switched to the loss with hard_example_mining_step and top_k_percent_pixels, setting these two parameters to 300k (total training steps are 600k) and 0.25, but the result is much worse (declined by 20%) than with the former loss. I am curious about this bad result and cannot figure out what is wrong with the numbers I set.

Could anyone help me?

ravikyram commented 4 years ago

@cchenzhou

Please fill in the issue template.

Please let us know which pretrained model you are using and share the related code. Thanks!

cchenzhou commented 4 years ago

@ravikyram Thank you for your reply.

I use DeepLab (https://github.com/tensorflow/models/tree/master/research/deeplab) to train my model, with resnet-v1-50 as the pretrained model.

I set these two parameters to 200k (total training steps are 600k) and 0.25, but the result is not as good as before.

At first I used add_softmax_cross_entropy_loss_for_each_scale without hard_example_mining_step; under that setting MIoU reached around 80%. MIoU dropped by 20% when I switched to the version of add_softmax_cross_entropy_loss_for_each_scale that includes hard_example_mining_step.

System information

My training script is shown below.

#!/bin/bash

set -e

NUM_CLONES=2
GPUS=4,5
EVAL_GPU=0
FOLDER="samsung_segmentation"
echo "${FOLDER}"
XCEPTION='resnet_v1_50_beta_pop'
DATASET='humanseg'

# loss function choose
using_org_loss=True
last_layers_contain_logits_only=False
initialize_last_layer=True
using_rotate=True

# model structure choose
using_decoder=True
aspp_with_separable_conv=False
using_depth_channel=256
using_decoder_channel=256
using_rnn_depth_channel=512
using_psp=True
using_multi_rate_in_re=4

# parameters
TRAIN_SPLIT_STAGE1='train_14_person_71334_0521'
TRAIN_CROP_SIZE_STAGE1=513
LEARING_RATE_STAGE1=0.01
Gradient_multiplier=1
OUTPUT_STRIDE_STAGE1=16
DECODEA_OUTPUT_STRIDE_STAGE1=2
TRAIN_BATCH_STAGE1=16
FINE_TUNE_BATCH_NORM_STAGE1=True
NUM_ITERATIONS_STAGE1=600000
multi_grid1=1
multi_grid2=2
multi_grid3=4
TF_INITIAL_CHECKPOINT_STAGE1="/data/xiangyu.zhu/pingjun.li/1213/resnet_v1_50_beta/pretrain_human/model.ckpt"
TF_INITIAL_CHECKPOINT_STAGE1=False

cd ../"${FOLDER}"
CURRENT_DIR=$(pwd)
WORK_DIR="${CURRENT_DIR}"
cd "${CURRENT_DIR}"

EXP_FOLDER="../output_14_classes_new_fusion_0715/$0"
EVAL_LOGDIR="${WORK_DIR}/${EXP_FOLDER}/eval"
VIS_LOGDIR="${WORK_DIR}/${EXP_FOLDER}/vis/samsung_test_new"
mkdir -p "${EVAL_LOGDIR}"
mkdir -p "${VIS_LOGDIR}"
cd "${WORK_DIR}"
PASCAL_DATASET="/data/xiangyu.zhu/pingjun.li/samsung_ori/tfrecord/"
LOGDIR_STAGE1='train_coarse_stage1_bn_l2sp'
TRAIN_LOGDIR_STAGE1="${WORK_DIR}/${EXP_FOLDER}/train/${LOGDIR_STAGE1}"
mkdir -p "${TRAIN_LOGDIR_STAGE1}"

export CUDA_VISIBLE_DEVICES="${GPUS}"

python train_my_pretrained_ori.py \
  --logtostderr \
  --save_summaries_images=True \
  --base_learning_rate="${LEARING_RATE_STAGE1}" \
  --train_split="${TRAIN_SPLIT_STAGE1}" \
  --model_variant="${XCEPTION}" \
  --output_stride="${OUTPUT_STRIDE_STAGE1}" \
  --train_crop_size=${TRAIN_CROP_SIZE_STAGE1} \
  --train_crop_size=${TRAIN_CROP_SIZE_STAGE1} \
  --train_batch_size="${TRAIN_BATCH_STAGE1}" \
  --hard_example_mining_step='200000' \
  --top_k_percent_pixels='0.25' \
  --dataset="${DATASET}" \
  --training_number_of_steps="${NUM_ITERATIONS_STAGE1}" \
  --fine_tune_batch_norm="${FINE_TUNE_BATCH_NORM_STAGE1}" \
  --tf_initial_checkpoint="${TF_INITIAL_CHECKPOINT_STAGE1}" \
  --train_logdir="${TRAIN_LOGDIR_STAGE1}" \
  --dataset_dir="${PASCAL_DATASET}" \
  --num_clones="${NUM_CLONES}" \
  --using_org_loss="${using_org_loss}" \
  --decoder_output_stride="${DECODEA_OUTPUT_STRIDE_STAGE1}" \
  --using_decoder="${using_decoder}" \
  --eval_logdir="${TRAIN_LOGDIR_STAGE1}/eval" \
  --eval_batch_size=1 \
  --eval_crop_size=513 \
  --eval_crop_size=513 \
  --eval_output_stride="${OUTPUT_STRIDE_STAGE1}" \
  --eval_scales=1.0 \
  --eval_add_flipped_images=False \
  --eval_split='samsung_test_new_positive' \
  --using_eval_when_train=True \
  --eval_how_many_steps=8000 \
  --save_summaries_steps=500 \
  --save_model_steps=2000 \
  --last_layer_gradient_multiplier="${Gradient_multiplier}" \
  --using_depth_channel="${using_depth_channel}" \
  --aspp_with_separable_conv="${aspp_with_separable_conv}" \
  --using_rnn_depth_channel="${using_rnn_depth_channel}" \
  --last_layers_contain_logits_only="${last_layers_contain_logits_only}" \
  --initialize_last_layer="${initialize_last_layer}" \
  --max_resize_value=513 \
  --min_resize_value=513 \
  --using_psp="${using_psp}" \
  --using_decoder_channel="${using_decoder_channel}" \
  --using_rotate="${using_rotate}"

cchenzhou commented 4 years ago

import six
import tensorflow as tf
import common
from core import model_new_fuse as model
from datasets import segmentation_dataset
from utils import input_generator
from utils import train_utils
from utils import model_deploy
import numpy as np
from progressbar import *
import cv2
import pdb
import time  # needed by the timing code in the training loop below

slim = tf.contrib.slim

prefetch_queue = slim.prefetch_queue

flags = tf.app.flags

FLAGS = flags.FLAGS

# Settings for multi-GPUs/multi-replicas training.

flags.DEFINE_integer('num_clones', 1, 'Number of clones to deploy.')

flags.DEFINE_boolean('clone_on_cpu', False, 'Use CPUs to deploy clones.')

flags.DEFINE_integer('num_replicas', 1, 'Number of worker replicas.')

flags.DEFINE_integer('startup_delay_steps', 15, 'Number of training steps between replicas startup.')

flags.DEFINE_integer('num_ps_tasks', 0, 'The number of parameter servers. If the value is 0, then ' 'the parameters are handled locally by the worker.')

flags.DEFINE_string('master', '', 'BNS name of the tensorflow server')

flags.DEFINE_integer('task', 0, 'The task ID.')

# Settings for logging.

flags.DEFINE_string('train_logdir', None, 'Where the checkpoint and logs are stored.')

flags.DEFINE_integer('log_steps', 10, 'Display logging information at every log_steps.')

flags.DEFINE_integer('save_interval_secs', 5, 'How often, in seconds, we save the model to disk.')

flags.DEFINE_integer('save_summaries_secs', 5, 'How often, in seconds, we compute the summaries.')

flags.DEFINE_boolean('save_summaries_images', False, 'Save sample inputs, labels, and semantic predictions as ' 'images to summary.')

flags.DEFINE_string('data_location', None, 'where is the data location.')

# Settings for training strategy.

flags.DEFINE_enum('learning_policy', 'poly', ['poly', 'step'], 'Learning rate policy for training.')

# Use 0.007 when training on PASCAL augmented training set, train_aug. When
# fine-tuning on PASCAL trainval set, use learning rate=0.0001.

flags.DEFINE_float('base_learning_rate', .0001, 'The base learning rate for model training.')

flags.DEFINE_float('learning_rate_decay_factor', 0.1, 'The rate to decay the base learning rate.')

flags.DEFINE_integer('learning_rate_decay_step', 2000, 'Decay the base learning rate at a fixed step.')

flags.DEFINE_float('learning_power', 0.9, 'The power value used in the poly learning policy.')

flags.DEFINE_integer('training_number_of_steps', 30000, 'The number of steps used for training')

flags.DEFINE_float('momentum', 0.9, 'The momentum value to use')

# When fine_tune_batch_norm=True, use at least batch size larger than 12
# (batch size more than 16 is better). Otherwise, one could use smaller batch
# size and set fine_tune_batch_norm=False.

flags.DEFINE_integer('train_batch_size', 8, 'The number of images in each batch during training.')

flags.DEFINE_float('weight_decay', 0.00004, 'The value of the weight decay for training.')

flags.DEFINE_multi_integer('train_crop_size', [513, 513], 'Image crop size [height, width] during training.')

flags.DEFINE_float('last_layer_gradient_multiplier', 1.0, 'The gradient multiplier for last layers, which is used to ' 'boost the gradient of last layers if the value > 1.')

flags.DEFINE_boolean('upsample_logits', True, 'Upsample logits during training.')

# Settings for fine-tuning the network.

flags.DEFINE_string('tf_initial_checkpoint', None, 'The initial checkpoint in tensorflow format.')

# Set to False if one does not want to re-use the trained classifier weights.

flags.DEFINE_boolean('initialize_last_layer', True, 'Initialize the last layer.')

flags.DEFINE_boolean('last_layers_contain_logits_only', False, 'Only consider logits as last layers or not.')

flags.DEFINE_integer('slow_start_step', 0, 'Training model with small learning rate for few steps.')

flags.DEFINE_float('slow_start_learning_rate', 1e-4, 'Learning rate employed during slow start.')

# Set to True if one wants to fine-tune the batch norm parameters in DeepLabv3.
# Set to False and use small batch size to save GPU memory.

flags.DEFINE_boolean('fine_tune_batch_norm', True, 'Fine tune the batch norm parameters or not.')

flags.DEFINE_float('min_scale_factor', 0.5, 'Mininum scale factor for data augmentation.')

flags.DEFINE_float('max_scale_factor', 2., 'Maximum scale factor for data augmentation.')

flags.DEFINE_float('scale_factor_step_size', 0.25, 'Scale factor step size for data augmentation.')

# For xception_65, use atrous_rates = [12, 24, 36] if output_stride = 8, or
# rates = [6, 12, 18] if output_stride = 16. For mobilenet_v2, use None. Note
# one could use different atrous_rates/output_stride during training/evaluation.

flags.DEFINE_multi_integer('atrous_rates', None, 'Atrous rates for atrous spatial pyramid pooling.')

flags.DEFINE_integer('output_stride', 16, 'The ratio of input to output spatial resolution.')

# Hard example mining related flags.

flags.DEFINE_integer( 'hard_example_mining_step', 0, 'The training step in which exact hard example mining kicks off. Note we ' 'gradually reduce the mining percent to the specified ' 'top_k_percent_pixels. For example, if hard_example_mining_step=100K and ' 'top_k_percent_pixels=0.25, then mining percent will gradually reduce from ' '100% to 25% until 100K steps after which we only mine top 25% pixels.')

flags.DEFINE_float( 'top_k_percent_pixels', 1.0, 'The top k percent pixels (in terms of the loss values) used to compute ' 'loss during training. This is useful for hard pixel mining.')

# Dataset settings.

flags.DEFINE_string('dataset', 'pascal_voc_seg', 'Name of the segmentation dataset.')

flags.DEFINE_string('train_split', 'train', 'Which split of the dataset to be used for training')

flags.DEFINE_string('dataset_dir', None, 'Where the dataset reside.')

# My add

flags.DEFINE_integer('save_num', 1000, 'how many model to save')
flags.DEFINE_boolean('lovasz_loss', False, 'use lovasz_loss')
flags.DEFINE_boolean('focals_loss', False, 'Use focals loss')
flags.DEFINE_float('facols_loss_gamma', 2, 'facols loss gamma')
flags.DEFINE_float('facols_loss_alpha', 1, 'facols loss alpha')
flags.DEFINE_boolean('using_l2sp', False, 'use l2sp')
flags.DEFINE_boolean('using_all_l2sp', False, 'using_all_l2sp')
flags.DEFINE_boolean('using_regularize_depthwise', False, 'regularize_depthwise')
flags.DEFINE_string('init_imagenet_model_dir', "/users2/ml/jlong.yuan/new/deeplab/datasets/cityscapes/init_models/xception_65/model.ckpt", 'imagenet model dir for l2sp')
flags.DEFINE_boolean('using_save_memory', False, 'using save memory')
flags.DEFINE_boolean('using_org_loss', True, 'using_org_loss')
flags.DEFINE_boolean('using_edge_loss', False, 'using_edge_loss')
flags.DEFINE_boolean('using_aux_loss', False, 'using_aux_loss')
flags.DEFINE_boolean('using_different_pretrained_model', False, 'using_different_pretrained_model')
flags.DEFINE_string('tf_initial_checkpoint_1', None, 'tf_initial_checkpoint_1 for xception')
flags.DEFINE_string('tf_initial_checkpoint_2', None, 'tf_initial_checkpoint_2 for xception')
flags.DEFINE_boolean('using_shared_var', False, 'using_shared_var')

# eval

flags.DEFINE_string('eval_logdir', None, 'Where to write the event logs.')
flags.DEFINE_integer('eval_batch_size', 1, 'The number of images in each batch during evaluation.')
flags.DEFINE_integer('eval_interval_secs', 60 * 5, 'How often (in seconds) to run evaluation.')
flags.DEFINE_multi_integer('eval_crop_size', [513, 513], 'Image crop size [height, width] for evaluation.')
flags.DEFINE_multi_integer('eval_atrous_rates', None, 'Atrous rates for atrous spatial pyramid pooling.')
flags.DEFINE_integer('eval_output_stride', 16, 'The ratio of input to output spatial resolution.')
flags.DEFINE_multi_float('eval_scales', [1.0], 'The scales to resize images for evaluation.')
flags.DEFINE_bool('eval_add_flipped_images', False, 'Add flipped images for evaluation or not.')
flags.DEFINE_string('eval_split', 'val', 'Which split of the dataset used for evaluation')
flags.DEFINE_bool('using_eval_when_train', True, 'using_eval_when_train')
flags.DEFINE_integer('eval_how_many_steps', 1000, 'eval_how_many_steps')
flags.DEFINE_integer('save_summaries_steps', 100, 'save_summaries_steps')
flags.DEFINE_integer('save_model_steps', 100, 'save_model_steps')
flags.DEFINE_bool('using_flip', True, 'using_flip')
flags.DEFINE_bool('using_morph', False, 'using_morph')
flags.DEFINE_bool('using_sigmoid_edge_loss', False, 'using_sigmoid_edge_loss')
flags.DEFINE_bool('using_rotate', False, 'using_rotate')
flags.DEFINE_bool('using_l2_loss', False, 'using_l2_loss')

def _build_deeplab(inputs_queue, outputs_to_num_classes, ignore_label):
    samples = inputs_queue.dequeue()
    samples[common.IMAGE] = tf.identity(samples[common.IMAGE], name=common.IMAGE)
    samples[common.LABEL] = tf.identity(samples[common.LABEL], name=common.LABEL)

model_options = common.ModelOptions(
    outputs_to_num_classes=outputs_to_num_classes,
    crop_size=FLAGS.train_crop_size,
    atrous_rates=FLAGS.atrous_rates,
    output_stride=FLAGS.output_stride)
outputs_to_scales_to_logits = model.multi_scale_logits(
    samples[common.IMAGE],
    model_options=model_options,
    image_pyramid=FLAGS.image_pyramid,
    weight_decay=FLAGS.weight_decay,
    is_training=True,
    fine_tune_batch_norm=FLAGS.fine_tune_batch_norm)
print('model_options is:', model_options)
print('outputs_to_scales_to_logits is:', outputs_to_scales_to_logits)
#pdb.set_trace()
output_type_dict = outputs_to_scales_to_logits[common.OUTPUT_TYPE]
output_type_dict[model.get_merged_logits_scope()] = tf.identity(
    output_type_dict[model.get_merged_logits_scope()],
    name=common.OUTPUT_TYPE)

for output, num_classes in six.iteritems(outputs_to_num_classes):
    if FLAGS.using_org_loss:
        train_utils.add_softmax_cross_entropy_loss_for_each_scale(
            outputs_to_scales_to_logits[output],
            samples[common.LABEL],
            num_classes,
            ignore_label,
            loss_weight=model_options.label_weights,
            upsample_logits=FLAGS.upsample_logits,
            hard_example_mining_step=FLAGS.hard_example_mining_step,
            top_k_percent_pixels=FLAGS.top_k_percent_pixels,
            scope=output)
return outputs_to_scales_to_logits

def main(unused_argv):
    tf.logging.set_verbosity(tf.logging.INFO)

# Set up deployment (i.e., multi-GPUs and/or multi-replicas).
config = model_deploy.DeploymentConfig(
    num_clones=FLAGS.num_clones,
    clone_on_cpu=FLAGS.clone_on_cpu,
    replica_id=FLAGS.task,
    num_replicas=FLAGS.num_replicas,
    num_ps_tasks=FLAGS.num_ps_tasks)

# Split the batch across GPUs.
assert FLAGS.train_batch_size % config.num_clones == 0, (
    'Training batch size not divisble by number of clones (GPUs).')

clone_batch_size = FLAGS.train_batch_size // config.num_clones

# Get dataset-dependent information.
dataset = segmentation_dataset.get_dataset(
    FLAGS.dataset, FLAGS.train_split, dataset_dir=FLAGS.dataset_dir)

tf.gfile.MakeDirs(FLAGS.train_logdir)
tf.logging.info('Training on %s set', FLAGS.train_split)

with tf.Graph().as_default() as graph:
    with tf.device(config.inputs_device()):
        samples = input_generator.get(
            dataset,
            FLAGS.train_crop_size,
            clone_batch_size,
            min_resize_value=FLAGS.min_resize_value,
            max_resize_value=FLAGS.max_resize_value,
            resize_factor=FLAGS.resize_factor,
            min_scale_factor=FLAGS.min_scale_factor,
            max_scale_factor=FLAGS.max_scale_factor,
            scale_factor_step_size=FLAGS.scale_factor_step_size,
            dataset_split=FLAGS.train_split,
            is_training=True,
            model_variant=FLAGS.model_variant)
        inputs_queue = prefetch_queue.prefetch_queue(
            samples, capacity=128 * config.num_clones)

    # Create the global step on the device storing the variables.
    with tf.device(config.variables_device()):
        global_step = tf.train.get_or_create_global_step()

        # Define the model and create clones.
        model_fn = _build_deeplab
        model_args = (inputs_queue, {
            common.OUTPUT_TYPE: dataset.num_classes
        }, dataset.ignore_label)
        clones = model_deploy.create_clones(config, model_fn, args=model_args)

        # Gather update_ops from the first clone. These contain, for example,
        # the updates for the batch_norm variables created by model_fn.
        first_clone_scope = config.clone_scope(0)
        update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, first_clone_scope)

    # Gather initial summaries.
    summaries = set(tf.get_collection(tf.GraphKeys.SUMMARIES))

    # Add summaries for model variables.
    for model_var in slim.get_model_variables():
        summaries.add(tf.summary.histogram(model_var.op.name, model_var))

    # Add summaries for images, labels, semantic predictions
    if FLAGS.save_summaries_images:
        summary_image = graph.get_tensor_by_name(
            ('%s/%s:0' % (first_clone_scope, common.IMAGE)).strip('/'))
        summaries.add(
            tf.summary.image('samples/%s' % common.IMAGE, summary_image))

        first_clone_label = graph.get_tensor_by_name(
            ('%s/%s:0' % (first_clone_scope, common.LABEL)).strip('/'))
        # Scale up summary image pixel values for better visualization.
        pixel_scaling = max(1, 255 // dataset.num_classes)
        summary_label = tf.cast(first_clone_label * pixel_scaling, tf.uint8)
        summaries.add(
            tf.summary.image('samples/%s' % common.LABEL, summary_label))

        first_clone_output = graph.get_tensor_by_name(
            ('%s/%s:0' % (first_clone_scope, common.OUTPUT_TYPE)).strip('/'))
        if FLAGS.using_single_channel:
            predictions = first_clone_output
            summary_predictions = tf.cast(predictions * 255, tf.float32)
        else:
            predictions = tf.expand_dims(tf.argmax(first_clone_output, 3), -1)

            summary_predictions = tf.cast(predictions * pixel_scaling, tf.uint8)
        summaries.add(
            tf.summary.image(
                'samples/%s' % common.OUTPUT_TYPE, summary_predictions))

    #pdb.set_trace()
    # Add summaries for losses.
    for loss in tf.get_collection(tf.GraphKeys.LOSSES, first_clone_scope):
        summaries.add(tf.summary.scalar('losses/%s' % loss.op.name, loss))

    # Build the optimizer based on the device specification.
    with tf.device(config.optimizer_device()):
        learning_rate = train_utils.get_model_learning_rate(
            FLAGS.learning_policy, FLAGS.base_learning_rate,
            FLAGS.learning_rate_decay_step, FLAGS.learning_rate_decay_factor,
            FLAGS.training_number_of_steps, FLAGS.learning_power,
            FLAGS.slow_start_step, FLAGS.slow_start_learning_rate)
        optimizer = tf.train.MomentumOptimizer(learning_rate, FLAGS.momentum)
        summaries.add(tf.summary.scalar('learning_rate', learning_rate))

    startup_delay_steps = FLAGS.task * FLAGS.startup_delay_steps
    for variable in slim.get_model_variables():
        summaries.add(tf.summary.histogram(variable.op.name, variable))

    with tf.device(config.variables_device()):

        # Collect every non-depthwise weight variable for an explicit L2 penalty,
        # which is passed to optimize_clones() below as the regularization loss.
        l2_regular = []
        for v in tf.trainable_variables():
            if 'weights' in v.name:
                if 'depthwise' in v.name:
                    pass
                else:
                    l2_regular.append(v)

        for v in l2_regular:
            # print(v)
            tf.add_to_collection('my_l2',FLAGS.weight_decay * tf.nn.l2_loss(v))
        total_loss, grads_and_vars = model_deploy.optimize_clones( clones, optimizer, regularization_losses=tf.get_collection('my_l2'))

        total_loss = tf.check_numerics(total_loss, 'Loss is inf or nan.')
        summaries.add(tf.summary.scalar('total_loss', total_loss))

        # Modify the gradients for biases and last layer variables.
        last_layers = model.get_extra_layer_scopes(
            FLAGS.last_layers_contain_logits_only)
        grad_mult = train_utils.get_model_gradient_multipliers(
            last_layers, FLAGS.last_layer_gradient_multiplier)
        if grad_mult:
            grads_and_vars = slim.learning.multiply_gradients(
                grads_and_vars, grad_mult)

        # Create gradient update op.
        grad_updates = optimizer.apply_gradients(
            grads_and_vars, global_step=global_step)
        update_ops.append(grad_updates)
        update_op = tf.group(*update_ops)
        with tf.control_dependencies([update_op]):
            train_tensor = tf.identity(total_loss, name='train_op')

    # Add the summaries from the first clone. These contain the summaries
    # created by model_fn and either optimize_clones() or _gather_clone_loss().
    summaries |= set(
        tf.get_collection(tf.GraphKeys.SUMMARIES, first_clone_scope))

    # Merge all summaries together.
    summary_op = tf.summary.merge(list(summaries))

    # Soft placement allows placing on CPU ops without GPU implementation.
    session_config = tf.ConfigProto(
        allow_soft_placement=True, log_device_placement=False)
    session_config.gpu_options.allow_growth = True
    '''
    init_fn = train_utils.get_model_init_fn(
                FLAGS.train_logdir,
                FLAGS.tf_initial_checkpoint,
                FLAGS.initialize_last_layer,
                last_layers,
                ignore_missing_vars=True)
    '''
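    # Build a restore function that loads only the backbone variables (names
    # starting with 'resnet') from the initial checkpoint; it is applied after
    # the global initializers run below, so all other variables keep their
    # fresh initialization.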
    assign_fn = slim.assign_from_checkpoint_fn(FLAGS.tf_initial_checkpoint, [x for x in tf.global_variables() if x.name.startswith('resnet')], ignore_missing_vars=True)
    summary_writer=tf.summary.FileWriter(FLAGS.train_logdir+'/summary',graph=tf.get_default_graph())

    tf.logging.info('Starting Session.')
    sess=tf.Session(config=session_config)
    saver=tf.train.Saver(max_to_keep=FLAGS.save_num)
    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])

    print('**************using standard init function*******************')
    #if init_fn is not None:
    #    init_fn(sess)
    if assign_fn is not None:
        assign_fn(sess)
    else:
        slim.assign_from_checkpoint_fn(
            tf.train.latest_checkpoint(FLAGS.train_logdir),
            tf.global_variables(),
            ignore_missing_vars=False)(sess)

    tf.logging.info('Starting Queues.')
    coord=tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord, sess=sess)

    files = open(FLAGS.train_logdir+'/summary/mIoU.txt','a')

    tf.logging.info('Saving Model in '+ FLAGS.train_logdir+'/model.ckpt')
    tf.logging.info('the datalog is ' + FLAGS.train_split)
    saver.save(sess, FLAGS.train_logdir+'/model.ckpt' ,global_step=0) 
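    # Reset the global step to zero before entering the training loop.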
    sess.run(global_step.initializer)
    for i in range(FLAGS.training_number_of_steps):

        start_time                                = time.time()
        o_total_loss, o_global_step, o_summary_op = sess.run([train_tensor, global_step, summary_op])
        time_elapsed                              = time.time() - start_time
        o_global_step = sess.run(global_step)
        if o_global_step > FLAGS.training_number_of_steps:
            tf.logging.info('Stopping Training.')
            break

        if o_global_step % FLAGS.save_model_steps ==0:
            tf.logging.info('Saving Model in '+ FLAGS.train_logdir+'/model.ckpt')
            saver.save(sess, FLAGS.train_logdir+'/model.ckpt' ,global_step=o_global_step)

        if (i+1) == FLAGS.training_number_of_steps:
            tf.logging.info('Saving Model in '+ FLAGS.train_logdir+'/model.ckpt')
            saver.save(sess, FLAGS.train_logdir+'/model.ckpt' ,global_step=o_global_step)
            tf.logging.info('Finished training! Saving model to disk.')

        if o_global_step % FLAGS.log_steps ==0:
            tf.logging.info('global step %d: loss = %.4f (%.3f sec/step)', o_global_step, o_total_loss, time_elapsed)

        if o_global_step % FLAGS.save_summaries_steps == 0:
            summary_writer.add_summary(o_summary_op, global_step=o_global_step)

    tf.logging.info('Saving Model in '+ FLAGS.train_logdir+'/last_model.ckpt')
    saver.save(sess, FLAGS.train_logdir+'/last_model.ckpt' ,global_step=o_global_step)
    tf.logging.info('Finished training! Saving model to disk.')

    coord.request_stop()
    coord.join(threads)
    files.close()

if __name__ == '__main__':
    flags.mark_flag_as_required('train_logdir')
    flags.mark_flag_as_required('tf_initial_checkpoint')
    flags.mark_flag_as_required('dataset_dir')
    tf.app.run()

aquariusjay commented 4 years ago

Hi cchenzhou,

Thanks for the question. You could also try using hard_example_mining_step = 0 and tuning top_k_percent_pixels slightly (say 0.1, 0.15, 0.2, 0.25, and so on). Note that this hard pixel mining strategy heavily depends on the dataset annotation quality. If the annotation quality is not good enough, the hard pixels may all belong to wrong annotations (e.g., missing annotations) and thus hurt the training. Hope that helps.

Cheers,
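For readers following the discussion, the sketch below paraphrases (from memory, with illustrative function and argument names) how deeplab/utils/train_utils.py applies top_k_percent_pixels and hard_example_mining_step when computing the softmax loss; the modified train_utils used in this thread may differ in details.

import tensorflow as tf

def top_k_pixel_loss(logits, one_hot_labels, not_ignore_mask,
                     top_k_percent_pixels, hard_example_mining_step):
  """Top-k hard pixel mining loss (paraphrased sketch, not the exact repo code).

  logits: [num_pixels, num_classes] flattened logits.
  one_hot_labels: [num_pixels, num_classes] flattened one-hot labels.
  not_ignore_mask: [num_pixels] float weights, 0.0 for ignore_label pixels.
  """
  pixel_losses = tf.nn.softmax_cross_entropy_with_logits_v2(
      labels=one_hot_labels, logits=logits)
  weighted_pixel_losses = pixel_losses * not_ignore_mask
  num_pixels = tf.to_float(tf.shape(logits)[0])

  if hard_example_mining_step == 0:
    # Keep only the top k% highest-loss pixels from the very first step.
    top_k_pixels = tf.to_int32(top_k_percent_pixels * num_pixels)
  else:
    # Anneal the kept fraction linearly from 100% down to
    # top_k_percent_pixels over hard_example_mining_step steps.
    global_step = tf.to_float(tf.train.get_or_create_global_step())
    ratio = tf.minimum(1.0, global_step / hard_example_mining_step)
    top_k_pixels = tf.to_int32(
        (ratio * top_k_percent_pixels + (1.0 - ratio)) * num_pixels)

  top_k_losses, _ = tf.nn.top_k(weighted_pixel_losses, k=top_k_pixels,
                                sorted=True)
  # Average over the mined pixels that contribute a non-zero loss.
  num_present = tf.reduce_sum(tf.to_float(tf.not_equal(top_k_losses, 0.0)))
  return tf.reduce_sum(top_k_losses) / tf.maximum(num_present, 1.0)

The key point for this thread: hard_example_mining_step only controls the annealing schedule of the kept fraction, while the fraction itself comes from top_k_percent_pixels.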

cchenzhou commented 4 years ago

@aquariusjay

Thanks for your reply.

I'm a little curious about the meaning of hard_example_mining_step = 0 with top_k_percent_pixels = 0.1, 0.15, 0.2, 0.25, because in train_utils.py hard_example_mining_step is related to top_k_percent_pixels.

For example, if hard_example_mining_step = 100K and top_k_percent_pixels = 0.25, then the mining percent will gradually reduce from 100% to 25% until 100K steps, after which we only mine the top 25% of pixels.

Will top_k_percent_pixels still work if hard_example_mining_step = 0 and top_k_percent_pixels = 0.1? I'm confused: when would our model mine the top 10% of pixels?
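The schedule described in the flag's help text can be written out as a small plain-Python helper (a hypothetical illustration, not code from the repository); plugging in the values discussed in this thread:

def mining_percent(step, hard_example_mining_step, top_k_percent_pixels):
  """Fraction of pixels kept in the loss at a given training step."""
  if hard_example_mining_step == 0:
    # No annealing: the top-k selection applies from the very first step.
    return top_k_percent_pixels
  ratio = min(1.0, step / float(hard_example_mining_step))
  # Linear ramp from 100% of pixels down to top_k_percent_pixels.
  return ratio * top_k_percent_pixels + (1.0 - ratio)

print(mining_percent(0, 200000, 0.25))       # 1.0   -> all pixels used
print(mining_percent(100000, 200000, 0.25))  # 0.625 -> hardest 62.5% of pixels
print(mining_percent(200000, 200000, 0.25))  # 0.25  -> hardest 25% from here on
print(mining_percent(0, 0, 0.10))            # 0.1   -> hardest 10% from step 0

So with hard_example_mining_step = 0 and top_k_percent_pixels = 0.1, the model mines the top 10% of pixels from the very first step; there is no warm-up period during which all pixels are used.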

cchenzhou commented 4 years ago

Hi aquariusjay,

I tried hard_example_mining_step = 0 with top_k_percent_pixels = 0.1, but the loss does not converge.

Actually, the loss goes down very quickly at the beginning. Could you tell me how to apply it? Thank you.

I0730 09:40:54.456578 140454491997952 train_my_pretrained_ori.py:472] global step 460: loss = 2.4459 (0.912 sec/step)
I0730 09:41:04.359597 140454491997952 train_my_pretrained_ori.py:472] global step 470: loss = 2.4874 (1.077 sec/step)
I0730 09:41:13.771185 140454491997952 train_my_pretrained_ori.py:472] global step 480: loss = 2.5470 (0.949 sec/step)
I0730 09:41:23.672148 140454491997952 train_my_pretrained_ori.py:472] global step 490: loss = 2.5056 (1.135 sec/step)
I0730 09:41:33.911237 140454491997952 train_my_pretrained_ori.py:472] global step 500: loss = 2.3727 (0.872 sec/step)
I0730 09:41:44.241558 140454491997952 train_my_pretrained_ori.py:472] global step 510: loss = 3.0749 (1.057 sec/step)
I0730 09:41:54.300216 140454491997952 train_my_pretrained_ori.py:472] global step 520: loss = 2.5035 (0.963 sec/step)
I0730 09:42:04.581771 140454491997952 train_my_pretrained_ori.py:472] global step 530: loss = 2.3605 (1.240 sec/step)
I0730 09:42:15.084317 140454491997952 train_my_pretrained_ori.py:472] global step 540: loss = 2.3767 (1.140 sec/step)
I0730 09:42:25.308255 140454491997952 train_my_pretrained_ori.py:472] global step 550: loss = 2.7524 (0.965 sec/step)
I0730 09:42:35.728561 140454491997952 train_my_pretrained_ori.py:472] global step 560: loss = 2.4371 (0.940 sec/step)
I0730 09:42:46.015891 140454491997952 train_my_pretrained_ori.py:472] global step 570: loss = 2.5434 (1.061 sec/step)
I0730 09:42:56.563245 140454491997952 train_my_pretrained_ori.py:472] global step 580: loss = 2.7387 (1.028 sec/step)
I0730 09:43:06.237757 140454491997952 train_my_pretrained_ori.py:472] global step 590: loss = 2.5479 (0.946 sec/step)
I0730 09:43:16.966474 140454491997952 train_my_pretrained_ori.py:472] global step 600: loss = 2.2968 (0.855 sec/step)
I0730 09:43:27.513592 140454491997952 train_my_pretrained_ori.py:472] global step 610: loss = 2.4310 (0.989 sec/step)
I0730 09:43:37.385978 140454491997952 train_my_pretrained_ori.py:472] global step 620: loss = 3.1768 (1.015 sec/step)
I0730 09:43:46.980446 140454491997952 train_my_pretrained_ori.py:472] global step 630: loss = 2.5306 (1.034 sec/step)
I0730 09:43:57.254027 140454491997952 train_my_pretrained_ori.py:472] global step 640: loss = 2.6938 (0.971 sec/step)
I0730 09:44:07.569460 140454491997952 train_my_pretrained_ori.py:472] global step 650: loss = 2.3930 (1.202 sec/step)
I0730 09:44:18.076265 140454491997952 train_my_pretrained_ori.py:472] global step 660: loss = 2.6143 (0.993 sec/step)
I0730 09:44:28.173717 140454491997952 train_my_pretrained_ori.py:472] global step 670: loss = 2.3745 (1.008 sec/step)