JiahuiYu / generative_inpainting

DeepFill v1/v2 with Contextual Attention and Gated Convolution, CVPR 2018, and ICCV 2019 Oral
http://jiahuiyu.com/deepfill/

AttributeError: 'Tensor' object has no attribute 'guided' #410

Closed. Facial-Micro-ExpressionGC closed this issue 4 years ago.

Facial-Micro-ExpressionGC commented 4 years ago

Hi JiahuiYu, I am currently following your paper, and your inpainting algorithm is great. I have been trying to use the test method, but I get the error "AttributeError: 'Tensor' object has no attribute 'guided'" when testing with the code you posted (included below). Please help; I am trying to test a folder of images.

import argparse
import os
import time

import cv2
import numpy as np
import tensorflow as tf
import neuralgym as ng

from inpaint_model import InpaintCAModel


def dir_path(string):
    if os.path.isdir(string):
        return string
    else:
        raise NotADirectoryError(string)


parser = argparse.ArgumentParser()
parser.add_argument('--image', default='', type=str,
                    help='The filename of image to be completed.')
parser.add_argument('--mask', default='', type=str,
                    help='The filename of mask, value 255 indicates mask.')
parser.add_argument('--image_width', default=256, type=int, help='Image width.')
parser.add_argument('--image_height', default=256, type=int, help='Image height.')
parser.add_argument('--flist', default='', type=str,
                    help='File list; each line holds image, mask and output paths.')
parser.add_argument('--out', default='output.png', type=str,
                    help='Where to write output.')
parser.add_argument('--checkpoint_dir', default='', type=str,
                    help='The directory of tensorflow checkpoint.')

if __name__ == "__main__":
    FLAGS = ng.Config('inpaint.yml')
    ng.get_gpus(1)
    args, unknown = parser.parse_known_args()

    sess_config = tf.ConfigProto()
    sess_config.gpu_options.allow_growth = True
    sess = tf.Session(config=sess_config)

    model = InpaintCAModel()
    input_image_ph = tf.placeholder(
        tf.float32, shape=(1, args.image_height, args.image_width, 3))
    output = model.build_server_graph(input_image_ph, 1)
    output = (output + 1.) * 127.5
    output = tf.reverse(output, [-1])
    output = tf.saturate_cast(output, tf.uint8)

    # load pretrained model
    vars_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
    assign_ops = []
    for var in vars_list:
        vname = var.name
        from_name = vname
        var_value = tf.contrib.framework.load_variable(
            args.checkpoint_dir, from_name)
        assign_ops.append(tf.assign(var, var_value))
    sess.run(assign_ops)
    print('Model loaded.')

    with open(args.flist, 'r') as f:
        lines = f.read().splitlines()
    t = time.time()
    for line in lines:
        image, mask, out = line.split()
        base = os.path.basename(mask)

        image = cv2.imread(image)
        mask = cv2.imread(mask)
        image = cv2.resize(image, (args.image_width, args.image_height))
        mask = cv2.resize(mask, (args.image_width, args.image_height))
        # cv2.imwrite(out, image*(1-mask/255.) + mask)
        # continue
        # image = np.zeros((128, 256, 3))
        # mask = np.zeros((128, 256, 3))

        assert image.shape == mask.shape

        h, w, _ = image.shape
        grid = 4
        image = image[:h//grid*grid, :w//grid*grid, :]
        mask = mask[:h//grid*grid, :w//grid*grid, :]
        print('Shape of image: {}'.format(image.shape))

        image = np.expand_dims(image, 0)
        mask = np.expand_dims(mask, 0)
        input_image = np.concatenate([image, mask], axis=2)

        result = sess.run(output, feed_dict={input_image_ph: input_image})
        print('Processed: {}'.format(out))
        # cv2.imwrite(args.output+str(i)+'.jpg', result[0][:, :, ::-1])
        cv2.imwrite(out, result[0][:, :, ::-1])

    print('Time total: {}'.format(time.time() - t))
JiahuiYu commented 4 years ago

It seems you are using the wrong code (deepfill v1 vs. deepfill v2?). Please follow our instructions and test code exactly. It should run without errors.
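
A note on the mismatch: the AttributeError about guided is consistent with mixing v1-style calling code with the v2 model. In the v2 test.py quoted later in this thread, the config object is the first argument to build_server_graph and the input tensor is the second, so passing the tensor first makes the model look up .guided on a Tensor. A minimal sketch of the v2-style call, assuming 256x256 inputs with image and mask concatenated along the width axis as in the scripts in this thread; this is not the repository's exact file:

import tensorflow as tf
import neuralgym as ng
from inpaint_model import InpaintCAModel

FLAGS = ng.Config('inpaint.yml')  # config object; v2 reads options such as guided from here
model = InpaintCAModel()
# image and mask side by side: shape (1, H, 2*W, 3); 256x256 is an assumed size
input_image = tf.placeholder(tf.float32, shape=(1, 256, 512, 3))
output = model.build_server_graph(FLAGS, input_image)  # v2: config first, tensor second

With the arguments in this order the guided lookup lands on FLAGS; a v1-era call such as model.build_server_graph(input_image, 1) hands the Tensor to code that expects the config, which matches the AttributeError reported above.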

Facial-Micro-ExpressionGC commented 4 years ago

Hi JiahuiYu, thanks for your quick response. I am using the pretrained model from the link, and test.py says Release v2.0.0. I cloned the repository from GitHub. Which version is which now?

Facial-Micro-ExpressionGC commented 4 years ago

Hi JiahuiYu, sorry for bothering you. When running the test script you have here, I get the following error:

Traceback (most recent call last):
  File "test.py", line 66, in <module>
    output = model.build_server_graph(FLAGS, input_image)
  File "/home/staff/jireh/Year1-Ph.D/RD2/generative_inpainting-master/inpaint_model.py", line 293, in build_server_graph
    xin, masks, reuse=reuse, training=is_training)
  File "/home/staff/jireh/Year1-Ph.D/RD2/generative_inpainting-master/inpaint_model.py", line 49, in build_inpaint_net
    x = gen_conv(x, cnum, 5, 1, name='conv1')
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
    return func(*args, **current_args)
  File "/home/staff/jireh/Year1-Ph.D/RD2/generative_inpainting-master/inpaint_ops.py", line 48, in gen_conv
    activation=None, padding=padding, name=name)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/layers/convolutional.py", line 417, in conv2d
    return layer.apply(inputs)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 817, in apply
    return self.__call__(inputs, *args, **kwargs)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 374, in __call__
    outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 746, in __call__
    self.build(input_shapes)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/keras/layers/convolutional.py", line 165, in build
    dtype=self.dtype)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 288, in add_weight
    getter=vs.get_variable)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 609, in add_weight
    aggregation=aggregation)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/training/checkpointable/base.py", line 639, in _add_variable_with_custom_getter
    **kwargs_for_getter)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1487, in get_variable
    aggregation=aggregation)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1237, in get_variable
    aggregation=aggregation)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 540, in get_variable
    aggregation=aggregation)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 492, in _true_getter
    aggregation=aggregation)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 861, in _get_single_variable
    name, "".join(traceback.format_list(tb))))
ValueError: Variable inpaint_net/conv1/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope? Originally defined at:

  File "/home/staff/jireh/Year1-Ph.D/RD2/generative_inpainting-master/inpaint_ops.py", line 48, in gen_conv
    activation=None, padding=padding, name=name)
  File "/home/staff/jireh/anaconda3/envs/deeplearning/lib/python3.6/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
    return func(*args, **current_args)
  File "/home/staff/jireh/Year1-Ph.D/RD2/generative_inpainting-master/inpaint_model.py", line 49, in build_inpaint_net
    x = gen_conv(x, cnum, 5, 1, name='conv1')

Below is my code:

import argparse
import os

import cv2
import numpy as np
import tensorflow as tf
import neuralgym as ng

from inpaint_model import InpaintCAModel


def dir_path(string):
    if os.path.isdir(string):
        return string
    else:
        raise NotADirectoryError(string)


parser = argparse.ArgumentParser()
parser.add_argument('--image', default='', type=str,
                    help='The filename of image to be completed.')
parser.add_argument('--mask', default='', type=str,
                    help='The filename of mask, value 255 indicates mask.')
parser.add_argument('--output', default='output.png', type=str,
                    help='Where to write output.')
parser.add_argument('--checkpoint_dir', default='', type=str,
                    help='The directory of tensorflow checkpoint.')

if __name__ == "__main__":
    FLAGS = ng.Config('inpaint.yml')
    ng.get_gpus(1)
    args, unknown = parser.parse_known_args()
    # args = vars(parser.parse_args())
    model = InpaintCAModel()
    mask_paths = os.listdir(args.mask)
    print(len(mask_paths))
    img_paths = os.listdir(args.image)
    for i in range(len(mask_paths)):
        image = cv2.imread(args.image + img_paths[i])
        # image = cv2.imread(args["image"])
        print("Hi there {}, it's nice to meet you!".format(args.image))
        # print(image.shape[1])
        # mask = cv2.imread(args["mask"])
        mask = cv2.imread(args.mask + img_paths[i])

        # mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2RGB)
        # mask = mask[:,:,0:3]
        print(mask.shape)
        # mask = cv2.resize(mask, (0,0), fx=0.5, fy=0.5)

        assert image.shape == mask.shape

        h, w, _ = image.shape
        grid = 8
        image = image[:h//grid*grid, :w//grid*grid, :]
        mask = mask[:h//grid*grid, :w//grid*grid, :]
        print('Shape of image: {}'.format(image.shape))

        image = np.expand_dims(image, 0)
        mask = np.expand_dims(mask, 0)
        input_image = np.concatenate([image, mask], axis=2)

        sess_config = tf.ConfigProto()
        sess_config.gpu_options.allow_growth = True
        with tf.Session(config=sess_config) as sess:
            input_image = tf.constant(input_image, dtype=tf.float32)
            output = model.build_server_graph(FLAGS, input_image)
            output = (output + 1.) * 127.5
            output = tf.reverse(output, [-1])
            output = tf.saturate_cast(output, tf.uint8)
            # load pretrained model
            vars_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
            assign_ops = []
            for var in vars_list:
                vname = var.name
                from_name = vname
                var_value = tf.contrib.framework.load_variable(args.checkpoint_dir, from_name)
                assign_ops.append(tf.assign(var, var_value))
            sess.run(assign_ops)
            print('Model loaded.')
            result = sess.run(output)
            # cv2.imwrite('./places2_256/'+str(i)+'.jpg', ((imgs[0]+1)*127.5).astype("uint8"))
            cv2.imwrite(args.output + str(i) + '.jpg', result[0][:, :, ::-1])
JiahuiYu commented 4 years ago

Could you please stop creating new issues as they are not necessary?

Facial-Micro-ExpressionGC commented 4 years ago

I’m sorry! Thanks

JiahuiYu commented 4 years ago

ValueError: Variable inpaint_net/conv1/kernel already exists, disallowed. Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?

Please check this error carefully. It seems you are not using the right pretrained checkpoint.
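
A plausible cause of that ValueError, going by the script posted above, is that build_server_graph is called inside the per-image loop, so on the second iteration TensorFlow tries to create inpaint_net/conv1/kernel a second time in the same default graph. Below is a minimal sketch (not the repository's official batch script) that builds the graph once and feeds every image through a placeholder; the 256x256 size, the checkpoint path, and the list of image/mask/output triples are illustrative assumptions.

import cv2
import numpy as np
import tensorflow as tf
import neuralgym as ng

from inpaint_model import InpaintCAModel

FLAGS = ng.Config('inpaint.yml')
checkpoint_dir = 'model_logs/release_places2_256'  # hypothetical checkpoint directory
triples = [('case1.png', 'case1_mask.png', 'case1_out.png')]  # hypothetical image/mask/output paths

model = InpaintCAModel()
sess_config = tf.ConfigProto()
sess_config.gpu_options.allow_growth = True
sess = tf.Session(config=sess_config)

# Build the graph exactly once; image and mask are concatenated along the width axis.
input_image_ph = tf.placeholder(tf.float32, shape=(1, 256, 256 * 2, 3))
output = model.build_server_graph(FLAGS, input_image_ph)
output = (output + 1.) * 127.5
output = tf.reverse(output, [-1])
output = tf.saturate_cast(output, tf.uint8)

# Restore the pretrained weights once as well.
assign_ops = []
for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
    var_value = tf.contrib.framework.load_variable(checkpoint_dir, var.name)
    assign_ops.append(tf.assign(var, var_value))
sess.run(assign_ops)

# Reuse the same graph for every image, so no variable is ever defined twice.
for image_path, mask_path, out_path in triples:
    image = cv2.resize(cv2.imread(image_path), (256, 256))
    mask = cv2.resize(cv2.imread(mask_path), (256, 256))
    input_image = np.concatenate([image[None], mask[None]], axis=2).astype(np.float32)
    result = sess.run(output, feed_dict={input_image_ph: input_image})
    cv2.imwrite(out_path, result[0][:, :, ::-1])

Keeping the graph construction inside the loop would instead require resetting the default graph (tf.reset_default_graph()) on every iteration, which rebuilds and reloads the model for each image and is much slower.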