warmspringwinds / tf-image-segmentation

Image Segmentation framework based on Tensorflow and TF-Slim library
MIT License

Results very poor #31

Open rnunziata opened 6 years ago

rnunziata commented 6 years ago

My code runs and displays two images, but the segmentation is very poor, similar to issue #24. Can you check whether I am calling the routines incorrectly?

# Add a batch dimension: (H, W, 3) -> (1, H, W, 3)
image_batch_tensor = tf.expand_dims(image_tensor, axis=0)

with tf.Session() as sess:

    # Adapt the network so it accepts inputs of arbitrary size (factor 32)
    FCN_8s = adapt_network_for_any_size_input(FCN_8s, 32)

    pred, _ = FCN_8s(image_batch_tensor=image_batch_tensor,
                                          number_of_classes=number_of_classes,
                                          is_training=False)
    saver = tf.train.Saver()
    initializer = tf.global_variables_initializer()
    sess.run(initializer)
    saver = tf.train.import_meta_graph('./tf_image_segmentation/models/fcn_8s_checkpoint/model_fcn8s_final.ckpt.meta')
    saver.restore(sess, './tf_image_segmentation/models/fcn_8s_checkpoint/model_fcn8s_final.ckpt')    

    image_np, pred_np = sess.run([image_tensor, pred], feed_dict=feed_dict_to_use)

    io.imshow(image_np)
    io.show()

    io.imshow(pred_np.squeeze())
    io.show()
fastlater commented 6 years ago

@rnunziata what is your input image? Can you upload your image, so I can test it with my code too? Depending on the complexity of the image, the results may vary.

rnunziata commented 6 years ago

Image: living-room

import numpy as np
import skimage.io as io
import os, sys
from PIL import Image
import cv2
import tensorflow as tf

sys.path.append("./tf-image-segmentation/")
sys.path.append("./models/slim/")

os.environ["CUDA_VISIBLE_DEVICES"] = '1'

slim = tf.contrib.slim

from tf_image_segmentation.models.fcn_16s import FCN_16s
from tf_image_segmentation.utils.inference import adapt_network_for_any_size_input

number_of_classes = 21

image = cv2.imread("./Living-Room.jpg")  # dim = (415, 750, 3); OpenCV loads images as BGR
# Constant tensor with a batch dimension: (1, 415, 750, 3)
image_batch_tensor = tf.expand_dims(image, axis=0)
# Matching NumPy array to feed in at run time
image = np.reshape(image, (1, 415, 750, 3))
print(np.shape(image))
print(image_batch_tensor.shape)

with tf.Session() as sess:

    # Adapt the network so it accepts inputs of arbitrary size (factor 32)
    FCN_16s = adapt_network_for_any_size_input(FCN_16s, 32)

    pred, _ = FCN_16s(image_batch_tensor=image_batch_tensor,
                                          number_of_classes=number_of_classes,
                                          is_training=False)
    saver = tf.train.Saver()
    initializer = tf.global_variables_initializer()
    sess.run(initializer)
    # import_meta_graph() loads the graph stored in the .meta file and returns
    # its own Saver, which replaces the Saver created above before restoring.
    saver = tf.train.import_meta_graph('./fcn_16s_checkpoint/model_fcn16s_final.ckpt.meta') # placed locally
    saver.restore(sess, './fcn_16s_checkpoint/model_fcn16s_final.ckpt')

    # Feed the NumPy image in place of the constant tensor and fetch the prediction
    image_np, pred_np = sess.run([image_batch_tensor, pred], feed_dict={image_batch_tensor: image})
    print(np.shape(image_np))
    print(np.shape(pred_np))

    io.imshow(image_np.squeeze())
    io.show()
fastlater commented 6 years ago

figure_1

This is what I got. As you said, the result is very poor.

However, as you can see from the demo at http://warmspringwinds.github.io/tensorflow/tf-slim/2017/01/23/fully-convolutional-networks-(fcns)-for-image-segmentation/, this code is supposed to work well for simple cases. The code is a simple implementation of FCN, and for me it is good for understanding the concepts of the algorithm.
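
In case it helps with judging the output: a minimal sketch (assuming matplotlib is available, and reusing image_np / pred_np from the snippet above, with pred_np holding PASCAL class indices 0-20) of plotting the mask with a discrete colour map instead of a plain imshow:

import numpy as np
import matplotlib.pyplot as plt

# pred_np from sess.run(...) above: squeeze away batch/channel dims -> (H, W) class ids
mask = np.squeeze(pred_np).astype(np.int32)

plt.figure(figsize=(10, 5))

plt.subplot(1, 2, 1)
plt.title('input')
plt.imshow(image_np.squeeze()[..., ::-1])  # cv2 gives BGR; flip to RGB for display

plt.subplot(1, 2, 2)
plt.title('predicted classes (0-20)')
plt.imshow(mask, cmap='tab20', vmin=0, vmax=20)  # one colour per PASCAL VOC class
plt.colorbar(ticks=range(21))

plt.show()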

caxton commented 6 years ago

I got a better result by removing this line:

    saver = tf.train.import_meta_graph('./fcn_16s_checkpoint/model_fcn16s_final.ckpt.meta') # placed locally
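
For anyone who lands here: a minimal sketch of the session block above with that single line removed (variable names, paths, image_batch_tensor and image are all taken from the earlier snippet; this is the pattern, not a tested drop-in). Without import_meta_graph(), the one Saver built against the FCN_16s graph restores the checkpoint weights into the same variables that pred uses:

with tf.Session() as sess:

    FCN_16s = adapt_network_for_any_size_input(FCN_16s, 32)

    pred, _ = FCN_16s(image_batch_tensor=image_batch_tensor,
                      number_of_classes=number_of_classes,
                      is_training=False)

    # Single Saver over the variables created by FCN_16s above
    saver = tf.train.Saver()
    sess.run(tf.global_variables_initializer())

    # Restore directly into the graph that pred was built from
    saver.restore(sess, './fcn_16s_checkpoint/model_fcn16s_final.ckpt')

    pred_np = sess.run(pred, feed_dict={image_batch_tensor: image})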