Closed aliericcantona closed 4 years ago
Thanks for your attention.
The link is still on that page.
However, you can directly download the vgg19 here.
The code does not work with TF version 2.0.
Then I downgraded TensorFlow to 1.14.0.
I ran this command: `python3 train_sr.py --task Models/srdplus-pretrained --data_dir Samples/ --use_gpu 0 --is_training 0`
Am I missing something here?
**Error:**

```
Traceback (most recent call last):
  File "train_sr.py", line 52, in
    shadow_free_image, predicted_mask = build_aggasatt_joint(input, channel, vgg_19_path=vgg_19_path)
(Pdb)
```

It failed on this line:

```
[i] Hypercolumn ON, building hypercolumn features ...
*** TypeError: expected str, bytes or os.PathLike object, not dict
```
I fixed the error. You have a bug in the network code, on line 34 of train_sr.py:
change it to `vgg_19_path = './Models/imagenet-vgg-verydeep-19.mat'`
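In other words, the fix is just this (a sketch; the path is the one the downloaded VGG weights land in under the repo's Models/ folder):

```python
# Sketch of the fix around line 34 of train_sr.py: the script passed a
# dict as vgg_19_path, but the model-building code expects a filesystem
# path (hence "expected str, bytes or os.PathLike object, not dict").
# Point it at the downloaded weights file instead:
vgg_19_path = './Models/imagenet-vgg-verydeep-19.mat'
```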
The next error is:

```
[i] contain checkpoint: None
Traceback (most recent call last):
  File "train_sr.py", line 97, in
```
Fixed that too ... you should copy Models/srdplus-pretrained ---> logs/Models/srdplus-pretrained
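Roughly like this (a sketch, not tested against the repo; the paths are the ones from the command above, and the assumption is that the script resolves checkpoints under logs/):

```python
import os
import shutil

# Mirror the pretrained weights into logs/ so the script's checkpoint
# lookup ("contain checkpoint: None") can find them.
src = 'Models/srdplus-pretrained'
dst = 'logs/Models/srdplus-pretrained'
os.makedirs('logs/Models', exist_ok=True)
if os.path.isdir(src) and not os.path.isdir(dst):
    shutil.copytree(src, dst)
```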
I think there is a folder structure for data_dir that the code expects on the command line. Can you tell me how to set it up based on your Samples/?
`python3 train_sr.py --task Models/srdplus-pretrained --data_dir Samples/ --use_gpu 0 --is_training 0` (this is not the correct data_dir for the code)
I think you need a certain structure for the data_dir to get the proper output. Can you elaborate on that? I need: a simple data_dir (folder) ---> your code ---> an output_dir with shadows removed. I think the code needs to be modified a bit.
I made test_A, test_B and test_T all the same and copied the Samples images inside. Why do I need these three folders? I can finally get something out of the network.
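For reference, this is roughly what I did (a sketch; the folder names are the ones described above, and the data dir is assumed to be Samples/):

```python
import glob
import os
import shutil

# Create test_A, test_B and test_T under the data dir and copy the same
# sample images into each, since only inference is needed here.
data_dir = 'Samples'
images = [p for ext in ('*.jpg', '*.png')
          for p in glob.glob(os.path.join(data_dir, ext))]
for sub in ('test_A', 'test_B', 'test_T'):
    sub_dir = os.path.join(data_dir, sub)
    os.makedirs(sub_dir, exist_ok=True)
    for p in images:
        shutil.copy(p, sub_dir)
```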
Can you possibly explain more? I can help you simplify your code. It is not easy to use, to be honest.
And the network is super slow without a GPU: around 52 seconds per 640x480 image.
I could generate the images but did not understand why you need three folders to generate the output. If you elaborate more, I will help you simplify the code and make it more user friendly.
Sorry for the inconvenience.
Sure, you need to use TensorFlow 1.x.
About testing: I have made an online demo and a local Jupyter notebook (demo.ipynb) for testing by just clicking the run button. Testing directly from the command line is not recommended currently; the command line is meant for training the model and then testing it.
I don't need three folders for testing; however, my data loader loads them all at once. So if you only want to run inference with the network, you can try to reimplement the prepare_image
function in util.py.
Still, thanks for your feedback; I will try to make the code smoother.
Our network is pretty heavy because of the hyper-features from VGG19 and the dilated convolutions. A much better algorithm is in the works and will be open-sourced upon publication.
Or, for testing, you can just loop over the folder and read the images with OpenCV.
The following code is directly from the demo:
```python
import os, cv2
import numpy as np
import matplotlib.pyplot as plt

# `sess`, `input` and `shadow_free_image` come from the notebook cells
# above (the loaded TensorFlow graph); `sample_path` is the image folder.
plt.rcParams["figure.figsize"] = (24, 6)
for img_path in [os.path.join(sample_path, x) for x in os.listdir(sample_path)
                 if '.jpg' in x or '.png' in x]:
    plt.figure()
    plt.axis('off')
    iminput = cv2.imread(img_path, -1)  # BGR, bit depth unchanged
    imoutput = sess.run(shadow_free_image,
                        feed_dict={input: np.expand_dims(iminput / 255., axis=0)})
    # clip to [0, 1] and convert back to uint8
    imoutput = np.uint8(np.squeeze(np.minimum(np.maximum(imoutput[0], 0.0), 1.0)) * 255.0)
    imcompare = np.concatenate([iminput, imoutput], axis=1)  # side by side
    plt.imshow(imcompare[..., ::-1])  # bgr -> rgb
    plt.show()
```
Awesome, I will add that to my code tomorrow.
Hi,
Can you point me to the vgg pretrained model to download? It looks like that model is no longer available at the URL you provided.
Thanks