I used a subset of the ADE20K images with their colored masks to train the network. After fine-tuning on this dataset, the accuracy I got was very poor.
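(If the label reader in train.py expects single-channel class-index masks rather than RGB color masks, which is how these training scripts usually read labels, the colored masks have to be mapped to index maps first. Below is only a rough sketch of that step; the color-to-index PALETTE and the colored_mask_to_index helper are made-up placeholders, not the official ADE20K mapping.)

import numpy as np
from PIL import Image

# hypothetical color -> class-index table; index 0 matches IGNORE_LABEL below
PALETTE = {
    (0, 0, 0): 0,
    (120, 120, 120): 1,
    (180, 120, 120): 2,
    # ... one entry per class, up to NUM_CLASSES - 1
}

def colored_mask_to_index(colored_path, index_path):
    rgb = np.array(Image.open(colored_path).convert('RGB'))
    index = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for color, cls in PALETTE.items():
        index[np.all(rgb == color, axis=-1)] = cls
    Image.fromarray(index).save(index_path)  # single-channel label map

Each line of ade20k_train_indoor_list.txt would then point to one image and its matching index mask (image path and label path separated by a space), assuming the usual list-file format for these repos.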
I don't know the reason for this result. I adjusted some lines in train.py, such as:
is_training = False       #"Whether to update the running means and variances during the training."
not_restore_last = False  # set to True when training with a different number of classes; "Whether to not restore last (FC) layers."
random_mirror = False     #"Whether to randomly mirror the inputs during the training."
random_scale = False      #"Whether to randomly scale the inputs during the training."
update_mean_var = False   #"Whether to get update_op from tf.GraphKeys."
train_beta_gamma = False  #"Whether to train beta & gamma in the BN layers."
BATCH_SIZE = 1 #"Number of images sent to the network in one step."
DATA_DIRECTORY = 'D:/mobileNetPSPNet/dataset/indoor/' #"Path to the directory containing the dataset."
DATA_LIST_PATH = './list/ade20k_train_indoor_list.txt' #"Path to the file listing the images in the dataset."
IGNORE_LABEL = 0 #"The index of the label to ignore during the training."
INPUT_SIZE = '473,473' #"Comma-separated string with height and width of images."
LEARNING_RATE = 1e-3 #"Base learning rate for training with polynomial decay."
MOMENTUM = 0.9 #"Momentum component of the optimiser."
NUM_CLASSES = 150 #"Number of classes to predict (including background)."
NUM_STEPS = 7522 #"Number of training steps."
POWER = 0.9 #"Decay parameter to compute the learning rate."
RANDOM_SEED = 1234 #"Random seed to have reproducible results."
WEIGHT_DECAY = 0.0001 #"Regularisation parameter for L2-loss."
RESTORE_FROM = './model/ade20k/x/' #"Where to restore model parameters from."
SNAPSHOT_DIR = './model/ade20k/x/' #"Where to save snapshots of the model."
SAVE_NUM_IMAGES = 4 #"How many images to save."
SAVE_PRED_EVERY = 50 #"How often (in steps) to save summaries and a checkpoint."
tf.reset_default_graph()
################
restore_var = [v for v in tf.global_variables() if 'conv6' not in v.name]
##############
opt_conv = tf.train.MomentumOptimizer(learning_rate, MOMENTUM) # gradients of the first (backbone) layers are multiplied by zero so their weights are not updated
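For reference, this is roughly how I understand these pieces fitting together. The snippet below is only a sketch: the toy two-layer graph stands in for the real network, the scope name 'conv6' for the final classifier comes from the restore_var line above, and the frozen/head variable names are my own.

import tensorflow as tf

LEARNING_RATE, MOMENTUM, NUM_STEPS, POWER = 1e-3, 0.9, 7522, 0.9
RESTORE_FROM = './model/ade20k/x/'

# toy two-layer graph standing in for the real network: a backbone and a 'conv6' head
x = tf.placeholder(tf.float32, [None, 8])
backbone = tf.layers.dense(x, 8, name='backbone')
logits = tf.layers.dense(backbone, 150, name='conv6')
loss = tf.reduce_mean(tf.square(logits))

# polynomial-decay learning rate, as the LEARNING_RATE / POWER comments describe
step_ph = tf.placeholder(tf.float32, shape=())
learning_rate = LEARNING_RATE * tf.pow(1.0 - step_ph / NUM_STEPS, POWER)
opt_conv = tf.train.MomentumOptimizer(learning_rate, MOMENTUM)

# restore every pretrained variable except the final classifier ('conv6'),
# which is the layer whose shape changes when NUM_CLASSES changes
restore_var = [v for v in tf.global_variables() if 'conv6' not in v.name]
loader = tf.train.Saver(var_list=restore_var)
# loader.restore(sess, tf.train.latest_checkpoint(RESTORE_FROM))

# the "multiply by zero" idea: scale the gradients of the frozen early layers
# by 0 before applying them, so only the new head actually gets updated
frozen = [v for v in tf.trainable_variables() if 'conv6' not in v.name]
head = [v for v in tf.trainable_variables() if 'conv6' in v.name]
grads = tf.gradients(loss, frozen + head)
grads = [g * 0.0 for g in grads[:len(frozen)]] + grads[len(frozen):]
train_op = opt_conv.apply_gradients(list(zip(grads, frozen + head)))

In a session, train_op would be run with step_ph fed the current step, so the learning rate decays from LEARNING_RATE towards zero over NUM_STEPS.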
but the segmented image produced by the downloaded model.ckpt