vinthony / ghost-free-shadow-removal

[AAAI 2020] Towards Ghost-free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN
https://arxiv.org/abs/1911.08718

pretrained vgg19 #10

Closed aliericcantona closed 4 years ago

aliericcantona commented 4 years ago

Hi,

Can you point me to the pretrained VGG model to download? It looks like that model is no longer available at the URL you provided.

Thanks

vinthony commented 4 years ago

Thanks for your attention.

The link is still on that page.

However, you can directly download the vgg19 here.

aliericcantona commented 4 years ago

The code does not work with TF version 2.0, so I downgraded TensorFlow to 1.14.0 ...

I ran this command: python3 train_sr.py --task Models/srdplus-pretrained --data_dir Samples/ --use_gpu 0 --is_training 0

Am I missing something here?

Error:

Traceback (most recent call last):
  File "train_sr.py", line 52, in
    shadow_free_image,predicted_mask=build_aggasatt_joint(input,channel,vgg_19_path=vgg_19_path)

It failed on this line, right after printing "[i] Hypercolumn ON, building hypercolumn features ...", with:

TypeError: expected str, bytes or os.PathLike object, not dict

aliericcantona commented 4 years ago

I fixed the error. There is a bug in the network code, on line 34 of train_sr.py.

Change it to: vgg_19_path = './Models/imagenet-vgg-verydeep-19.mat'
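A minimal sanity check for this fix might look like the following. The path and the error message come from this thread; the assertion itself is an illustrative addition, not part of the repo's code.

```python
import os

# Hedged sketch of the fix: vgg_19_path must be a plain string path to the
# .mat weights file, not a dict. Passing a dict is what produced
# "TypeError: expected str, bytes or os.PathLike object, not dict".
vgg_19_path = './Models/imagenet-vgg-verydeep-19.mat'

# A quick guard before building the network (this check is an assumption,
# not something train_sr.py actually does):
assert isinstance(vgg_19_path, (str, bytes, os.PathLike)), \
    'vgg_19_path must be a filesystem path, not %r' % type(vgg_19_path)
```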

aliericcantona commented 4 years ago

The next error is:

[i] contain checkpoint: None
Traceback (most recent call last):
  File "train_sr.py", line 97, in
    print('loaded '+ckpt.model_checkpoint_path)
AttributeError: 'NoneType' object has no attribute 'model_checkpoint_path'

aliericcantona commented 4 years ago

Fixed that too ... you should copy Models/srdplus-pretrained to logs/Models/srdplus-pretrained.
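The underlying cause is that tf.train.get_checkpoint_state returns None when the log directory contains no checkpoint. Besides copying the folder, a defensive check avoids the crash; here is a pure-Python sketch of that guard (restore_or_warn and its messages are illustrative, not the repo's code):

```python
# Hedged sketch: tf.train.get_checkpoint_state() returns None when the
# directory (here logs/Models/srdplus-pretrained) has no checkpoint file,
# so guard before touching model_checkpoint_path. This stand-in mimics
# that behavior without importing TensorFlow.
def restore_or_warn(ckpt):
    if ckpt is not None and getattr(ckpt, 'model_checkpoint_path', None):
        return 'loaded ' + ckpt.model_checkpoint_path
    return 'no checkpoint found; copy Models/srdplus-pretrained into logs/'
```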

aliericcantona commented 4 years ago

I think the code needs a certain folder structure for the data_dir passed on the command line. Can you tell me how to set it up based on your Samples/ directory?

python3 train_sr.py --task Models/srdplus-pretrained --data_dir Samples/ --use_gpu 0 --is_training 0 (this is not the correct data_dir for the code)

aliericcantona commented 4 years ago

I think data_dir needs a certain structure to produce proper output. Can you elaborate on that? What I need is simple: an input folder ---> your code ---> an output folder with shadows removed. I think the code needs to be modified a bit.

aliericcantona commented 4 years ago

I made test_A, test_B, and test_T identical, copying the Samples images into each. Why do I need these three folders? I can finally get something out of the network.

Can you possibly explain more? I can help you simplify your code; it is not easy to use, to be honest.

Also, the network is super slow without a GPU: around 52 seconds per 640x480 image.
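The workaround above can be scripted. This sketch just mirrors what was done by hand: the test_A/test_B/test_T names come from this thread, while make_test_dirs itself is a hypothetical helper, not part of the repo.

```python
import os
import shutil

# Hedged sketch: duplicate one folder of sample images into the three
# subfolders the data loader expects (test_A, test_B, test_T), as the
# workaround above does by hand.
def make_test_dirs(data_dir, sample_dir):
    imgs = [f for f in os.listdir(sample_dir)
            if f.lower().endswith(('.jpg', '.png'))]
    for sub in ('test_A', 'test_B', 'test_T'):
        dst = os.path.join(data_dir, sub)
        os.makedirs(dst, exist_ok=True)
        for f in imgs:
            shutil.copy(os.path.join(sample_dir, f), dst)
    return imgs
```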

aliericcantona commented 4 years ago

I could generate the images but did not understand why you need three sets of folders to generate the output. If you elaborate, I will help you simplify the code and make it more user-friendly.

vinthony commented 4 years ago

Sorry for the inconvenience.

  1. Sure, you need to use TensorFlow 1.x.

  2. About testing: I have made an online demo and a local Jupyter notebook (demo.ipynb) for testing by just clicking the run button. Testing directly from the command line is not recommended currently; the command line is meant for training the code and then testing.

  3. I don't need three folders for testing; however, my data loader loads them all at once. If you only want to run inference with the network, you can try to reimplement the prepare_image function in util.py.

  4. Still, thanks for your feedback. I will try to make the code smoother.

  5. Our network is pretty heavy because of the hyper-features from VGG19 and the dilated convolutions. A much better algorithm is in the works and will be open-sourced upon publication.
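For point 3, a single-folder replacement could start from something as simple as the following; list_test_images is a hypothetical name (not the repo's prepare_image signature), and the extension filter matches the demo code in this thread.

```python
import os

# Hedged sketch: enumerate test images from a single folder, as a starting
# point for a one-folder reimplementation of prepare_image in util.py.
def list_test_images(folder):
    return sorted(
        os.path.join(folder, f) for f in os.listdir(folder)
        if f.lower().endswith(('.jpg', '.png'))
    )
```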

aliericcantona commented 4 years ago
  1. Your version of prepare_image relies on the three folders. I tried to use only one, but I had a bit of a problem creating the tf image.
  2. No problem. I am here to help: once a Ph.D. student, now more of a product software engineer and startup founder.
  3. Got it.
vinthony commented 4 years ago

Or, for testing, you can just loop over the folder and read the images with OpenCV.

The following code is directly from the demo:

import os, cv2
import numpy as np
import matplotlib.pyplot as plt

# sess, input, shadow_free_image and sample_path are defined earlier
# in the notebook (graph construction and session setup).

# some sample results.
plt.rcParams["figure.figsize"] = (24, 6)

for img_path in [os.path.join(sample_path, x) for x in os.listdir(sample_path) if '.jpg' in x or '.png' in x]:

    plt.figure()
    plt.axis('off')

    iminput = cv2.imread(img_path, -1)  # read the image as-is (BGR order)
    imoutput = sess.run(shadow_free_image, feed_dict={input: np.expand_dims(iminput / 255., axis=0)})
    imoutput = np.uint8(np.squeeze(np.minimum(np.maximum(imoutput[0], 0.0), 1.0)) * 255.0)  # clip to [0,1], back to uint8
    imcompare = np.concatenate([iminput, imoutput], axis=1)  # input | output side by side

    # bgr->rgb for matplotlib
    plt.imshow(imcompare[..., ::-1])
    plt.show()
aliericcantona commented 4 years ago

Awesome, I will add that to my code tomorrow.