GitBoSun / roomnet

TensorFlow implementation of the room layout paper RoomNet: End-to-End Room Layout Estimation.

Predictions reproducibility #10

Open alex-kravets opened 5 years ago

alex-kravets commented 5 years ago

Hi! I have a problem with the reproducibility of predictions at inference time. Below is the code I use to make predictions on samples from the file you provided, sample.npz.

I added one more function, inference, alongside your train and test:

# Imports this function needs (RoomnetVanilla, RcnnNet and get_im are
# from this repo's own modules):
import os
import time
from pathlib import Path

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

def inference(args):
    out_path = Path(args.out_path)

    outdir = out_path / 'inference'
    model_dir = out_path / 'model'

    if not outdir.exists():
        os.makedirs(outdir)

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    config.allow_soft_placement = True

    sess = tf.Session(config=config)
    device = '/gpu:0'
    if args.gpu == 1:
        device = '/gpu:1'
    with tf.device(device):
        if args.net == 'vanilla':
            net = RoomnetVanilla()
        elif args.net == 'rcnn':
            net = RcnnNet()
        net.build_model()

        i = 0

        net.restore_model(sess, model_dir)
        print('restored')

        start_time = time.time()

        npz = np.load('sample.npz')
        print('sample.npz is now loaded')

        im_in, lay_gt, label_gt, names = npz['im'], npz['gt_lay'], npz['gt_label'], npz['names']
        net.set_feed(im_in, lay_gt, label_gt, i)

        pred_class, pred_lay = net.run_result(sess)
        c_out = np.argmax(pred_class, axis=1)
        c_gt = np.argmax(label_gt, axis=1)
        acc = np.mean(np.array(np.equal(c_out, c_gt), np.float32))

        for j in range(im_in.shape[0]):  # one figure per sample in the batch
            img = im_in[j]
            outim = get_im(img, pred_lay[j], c_out[j], str(j))
            outim2 = get_im(img, lay_gt[j], c_gt[j], str(j))
            outpath = outdir / str(i)

            if not outpath.exists():
                os.makedirs(outpath)

            f, ax = plt.subplots(1, 2)
            ax = iter(ax.flatten())
            plt.sca(next(ax))
            plt.imshow(outim)
            plt.title(f'predicted {j}')
            plt.xlabel(f'class: {c_out[j]}')
            plt.sca(next(ax))
            plt.imshow(outim2)
            plt.title(f'ground truth')
            plt.xlabel(f'class: {c_gt[j]}')
            plt.show()

        print('[step: %d] [time: %s] [acc: %s]' % (i, time.time() - start_time, acc))
        net.print_loss_acc(sess)

and one more change in main:

    if args.train != -1:
        train(args)
    if args.test != -1:
        test(args)
    if args.inference != -1:
        inference(args)

Then I noticed that I get different results each time I run inference. For example, here are a couple of predictions for the image with index 11 in sample.npz: (screenshots attached)

P.S. I used the following command to run inference: python main.py --inference 0 --net vanilla --out_path pretrained_model --gpu 1.

So, the question is: how do I make inference results reproducible?

alex-kravets commented 5 years ago

I finally found the relevant parameter: I replaced net.set_feed(im_in, lay_gt, label_gt, i) with net.set_feed(im_in, lay_gt, label_gt, i, is_training=False).

Now the results are stable from run to run, but the quality of the predictions is very low, so I think there is still a problem somewhere. Am I doing something wrong with your code? Maybe I need to add some preprocessing step before feeding images into the net, or something like that? Do you have any idea?
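My understanding of why is_training=False fixes the randomness (a minimal NumPy sketch of the usual mechanism, not this repo's actual code, and assuming the network uses dropout gated on that flag):

```python
import numpy as np

def dropout(x, rate, training, rng=np.random):
    """Minimal inverted dropout: random mask in training, identity at inference."""
    if not training:
        return x
    keep = (rng.uniform(size=x.shape) >= rate).astype(x.dtype)
    return x * keep / (1.0 - rate)

x = np.ones((1, 4), dtype=np.float32)
# Training mode draws a fresh random mask on every call, so two forward
# passes over the same input can differ; inference mode is the identity,
# so repeated runs agree exactly.
assert np.array_equal(dropout(x, 0.5, training=False),
                      dropout(x, 0.5, training=False))
```

So if the flag is left at its training default, every inference run samples a different dropout mask, which would explain the run-to-run differences above.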

BTW, could you please provide the sample names for the images shown on the repo page (in Readme.md)? It would be nice to have those for both the vanilla and rcnn architectures. Thanks!

lakshmankanakala commented 4 years ago

Hi @alex-kravets, can you please tell me how to run inference on a single image of my own? I am not able to understand it from the readme file; I need to run the net on my own images. There are sample.npz and *.mat files; can we create these from our own data?

Thanks.
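For context, a minimal sketch of packing your own image into a sample.npz-style file. The key names match what the inference snippet above loads; the array shapes and the dummy zero labels are guesses that need checking against the repo's config:

```python
import numpy as np

# Assumed sizes: adjust to whatever the network actually expects.
n, h, w = 1, 320, 320                # one image at a guessed input resolution
n_keypoint_maps, n_classes = 48, 11  # guessed from the RoomNet paper; verify

im = np.zeros((n, h, w, 3), dtype=np.float32)  # put your image pixels here
gt_lay = np.zeros((n, h, w, n_keypoint_maps), dtype=np.float32)  # dummy layout GT
gt_label = np.zeros((n, n_classes), dtype=np.float32)            # dummy room type
names = np.array(['my_image_0'])

np.savez('my_sample.npz', im=im, gt_lay=gt_lay, gt_label=gt_label, names=names)

# Load it the same way the inference code loads sample.npz:
npz = np.load('my_sample.npz')
assert set(npz.files) == {'im', 'gt_lay', 'gt_label', 'names'}
```

With dummy ground truth like this, only the predicted layout/class is meaningful; the reported accuracy against gt_label will be garbage.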