faustomilletari / CFCM-2D

This repository contains a publicly available version of CFCM: segmentation via Coarse to Fine Context Memory, a paper accepted for presentation at MICCAI 2018.

Can you offer a test demo script? #4

Open Lvhhhh opened 6 years ago

Lvhhhh commented 6 years ago

I have trained the network and obtained a model, but I don't know how to visualize the results. I have csv_results.csv and the train and valid folders.

faustomilletari commented 6 years ago

You can use TensorBoard. Point it to the directory where you save the models; there should be summaries there.
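TensorBoard discovers runs by scanning a log directory for event files named `events.out.tfevents.*`, so a quick way to confirm the training run actually wrote summaries is to look for those files (the `output` path below is a placeholder; use whatever directory you passed to the trainer):

```python
import glob
import os

# Hypothetical output directory -- replace with the directory the
# trainer wrote its models and summaries to.
logdir = "output"

# TensorBoard looks for files named "events.out.tfevents.*" in logdir
# and its subdirectories; if this list is empty, TensorBoard will show
# no data for that directory.
event_files = glob.glob(
    os.path.join(logdir, "**", "events.out.tfevents.*"), recursive=True)
print(event_files)
```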

Lvhhhh commented 6 years ago

Maybe I did not make it clear: if I have a test video, how do I show the result using the model I trained?

Lvhhhh commented 6 years ago

Right now you only offer the training code. Can you offer the test code for running on other test videos? I want to see the results.
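In general, applying a trained model to a video means decoding it into frames and running the network on each frame in turn. A minimal sketch of that loop, with stand-ins for the parts this repository provides (everything here is hypothetical: in practice the frames would come from `cv2.VideoCapture(path).read()` and `segment_frame` would be a `sess.run` call on the loaded model):

```python
import numpy as np

def segment_frame(frame):
    # Stand-in for model inference: in practice this would normalize the
    # frame and feed it through the trained network via sess.run(...).
    return (frame.astype(np.float32) / 255.0 > 0.5).astype(np.float32)

# Stand-in for video decoding: in practice, read frames one at a time
# with cv2.VideoCapture(video_path).read() until it returns False.
frames = [np.full((64, 64), v, dtype=np.uint8) for v in (0, 128, 255)]

# Run the model on every frame; masks could then be displayed or
# written back out as a video.
masks = [segment_frame(f) for f in frames]
print(len(masks), masks[0].shape)
```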

tmathai commented 5 years ago

@Lvhhhh Use this script to load the model and run it on new data:

    import copy

    import cv2
    import matplotlib.pyplot as plt
    import numpy as np
    import tensorflow as tf

    with tf.Session() as sess:
        # Load the exported SavedModel and its serving meta graph.
        meta_graph_def = tf.saved_model.loader.load(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            path_to_model
        )

        # Keys into the serving signature of the exported model.
        signature = meta_graph_def.signature_def
        signature_key = 'prediction'  # or tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
        input_key = 'images'
        training_flag_key = 'is_training'
        output_key = 'sigmoid'

        # Resolve tensor names from the signature, then fetch the tensors.
        x_tensor_name = signature[signature_key].inputs[input_key].name
        training_flag_tensor_name = signature[signature_key].inputs[training_flag_key].name
        y_tensor_name = signature[signature_key].outputs[output_key].name

        x_inp = sess.graph.get_tensor_by_name(x_tensor_name)
        tflag_op = sess.graph.get_tensor_by_name(training_flag_tensor_name)
        y_op = sess.graph.get_tensor_by_name(y_tensor_name)

        # Read the test image and convert it to single-channel grayscale.
        x = cv2.imread(__full_path_wExt_img_fn__, cv2.IMREAD_UNCHANGED)
        if len(x.shape) != 2 and x.shape[2] > 1:
            x = cv2.cvtColor(x, cv2.COLOR_BGR2GRAY)

        # Normalize to [0, 1] and add batch and channel dimensions:
        # (height, width) -> (1, height, width, 1).
        y = copy.deepcopy(x) / 255.0
        y = np.reshape(y, newshape=(x.shape[0], x.shape[1], 1))
        input_list = np.asarray([y]).astype(np.float32)

        # Run inference with the training flag set to False.
        output = sess.run(y_op, {x_inp: input_list, tflag_op: False})
        output = np.asarray(output).astype(np.float32)
        print(output.shape, output.dtype)

        # Display the input next to the predicted probability map(s).
        plt.figure()
        plt.imshow(np.asarray(x), cmap='gray')

        plt.figure()
        plt.imshow(output[0, :, :, 0])

        if output.shape[3] > 1:
            plt.figure()
            plt.imshow(output[0, :, :, 1])

        plt.show()
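The `sigmoid` output is a probability map per pixel; to get a binary segmentation, one common follow-up (an assumption on my part, the repository does not prescribe a cutoff) is to threshold it, for example at 0.5:

```python
import numpy as np

# Hypothetical probability map standing in for output[0, :, :, 0] above;
# a fixed array so the example is reproducible.
probs = np.array([[0.1, 0.6],
                  [0.9, 0.4]], dtype=np.float32)

# Threshold at 0.5 (an arbitrary cutoff -- tune per task) to get a
# binary 8-bit mask suitable for saving with cv2.imwrite.
mask = (probs > 0.5).astype(np.uint8) * 255

print(mask)
```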