gwxie / Dewarping-Document-Image-By-Displacement-Flow-Estimation

Dewarping Document Image By Displacement Flow Estimation with Fully Convolutional Network

Cannot run python test.py #2

Open KakaVlasic opened 3 years ago

KakaVlasic commented 3 years ago

Hi, thanks for sharing this excellent work on document rectification! But I can't run the released code with the data structure arranged as you describe. Please help me!

gwxie commented 3 years ago

Hi @KakaVlasic, thank you for testing the code. Could you list or briefly describe the errors and warnings you are seeing?

KakaVlasic commented 3 years ago

@gwxie, thanks for the quick reply! I think FlatImg.validateOrTestModelV2GreyC1 isn't loading the test data. The errors look like this: [screenshot] And I arranged the code and data structure like this: [screenshot]

gwxie commented 3 years ago

Hi @KakaVlasic, thanks for reporting the bug! I had not tested the function "def resize_image(origin_img):". If you are using the test images I uploaded in "dataset", please delete the call to "resize_image(...)". Also, please check that the path is correct.
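
A minimal sketch of that workaround, assuming test.py resizes each input via resize_image(...) before inference (the exact call site may differ):

```python
# Hypothetical call site in test.py -- bypass the untested helper:
# img = resize_image(origin_img)   # untested helper; can break on the sample images
img = origin_img                    # pass the test image through unchanged
```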

Best!

KakaVlasic commented 3 years ago

Hi @gwxie, thanks for pinpointing the bug. Now the test runs, but the results look strange. Could this be because background segmentation is not applied? [screenshot] Any suggestions would be appreciated.

gwxie commented 3 years ago

Hi @KakaVlasic, thanks for the question. You are right, the output shows that the background has not been removed. I suspect you have set “schema=test”. Are there any other modifications in the function "utils.py/flatByRegressWithClassiy_triangular_v2_RGB(...)"? Please check that part of the code; I added a mark there.

[screenshot of the expected dewarped result]

If it runs normally, the result should look like the picture above.
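
For readers hitting the same symptom, background removal in this pipeline relies on the per-pixel classification branch; a generic sketch of the idea (function and tensor names are illustrative, not the repo's actual API):

```python
import torch

def mask_background(img, class_logits):
    # img:          (B, 3, H, W) input batch
    # class_logits: (B, 2, H, W) per-pixel background/foreground scores
    # argmax over the class channel -> 1 where a pixel is document, 0 where background
    fg_mask = class_logits.argmax(dim=1, keepdim=True).float()
    return img * fg_mask  # background pixels are zeroed out
```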

Best!

KakaVlasic commented 3 years ago

Hi @gwxie, sorry to bother you again. Now I set “schema=test” and use the default utils.py (changing only the data path and the save_image flag), but some files raise errors and no dewarped results appear in ./flat. [screenshot] Could you please hint at what's going wrong? Thanks again!

gwxie commented 3 years ago

Hi @KakaVlasic, are only these files failing, while the others are normal? Could you locate the error by debugging, or by removing the "try: ... except:" block? Thanks.
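
One way to surface the per-file error without deleting the block outright (process_one_image is a stand-in for whatever the test loop actually calls):

```python
import traceback

try:
    process_one_image(path)    # stand-in for the per-file call inside the loop
except Exception:
    traceback.print_exc()      # print the full stack for the failing file
    raise                      # re-raise so the faulty file is not silently skipped
```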

timfu248 commented 3 years ago

'Namespace' object has no attribute 'img_shrink'

```
------load DilatedResnetForFlatByClassifyWithRgressV2v6v4c1GN------

Loading model and optimizer from checkpoint './2019-06-25 11:52:54/49/2019-06-25 11_52_54flat_img_classifyAndRegress_grey-data1024_greyV2.pkl'
Loaded checkpoint './2019-06-25 11:52:54/49/2019-06-25 11_52_54flat_img_classifyAndRegress_grey-data1024_greyV2.pkl' (epoch 49)
Traceback (most recent call last):
  File "test.py", line 146, in <module>
    train(args)
  File "test.py", line 75, in train
    FlatImg.loadTestData()
  File "/workspace/HOSTDIR/Dewarping/Dewarping-Document-Image-By-Displacement-Flow-Estimation/Source/utils.py", line 425, in loadTestData
    t1_loader = self.data_loader(self.data_path_test, split='test', img_shrink=self.args.img_shrink, is_return_img_name=True)
AttributeError: 'Namespace' object has no attribute 'img_shrink'
```
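
A plausible local fix, assuming test.py builds its options with argparse (the flag type and default below are guesses, not the repo's actual definition):

```python
# Alongside the other parser.add_argument(...) calls in test.py:
parser.add_argument('--img_shrink', type=int, default=None,
                    help='optional shrink size forwarded to the data loader')
```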

timfu248 commented 3 years ago

isfile or isdir ?

```python
def getDatasets(dir):
    if not os.path.isfile(dir):
        raise Exception(dir + ' -- path no find')
    return os.listdir(dir)
```

Morton9 commented 2 years ago

@timfu248 Hello, I think it should be 'isdir', because an error will be raised if 'isfile' is used.
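
Applying that fix (and tidying the error message), the helper would read:

```python
import os

def getDatasets(dir):
    # the argument is a directory of test images, so check isdir, not isfile
    if not os.path.isdir(dir):
        raise Exception(dir + ' -- path not found')
    return os.listdir(dir)
```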

learning09 commented 1 year ago

I tried running the code without the given pretrained model and trained from scratch for around 30 epochs, but the folds and curves are not removed from the image; only the background is cleaned, as shown in the figure below, using the following command:

```
!python train.py --data_path_train=dataset/train/data1024_greyV2/color/ --data_path_validate=dataset/train/data1024_greyV2/color/ --data_path_test=dataset/shrink_1024_960/crop/ --parallel 0 --batch_size 2 --schema train --n_epoch 30
```

[screenshot of the training output]

Could you please suggest why the folds and curves are not being removed, and how to fix it?
Also, can I train on a small number of images instead of a huge dataset, given GPU limitations?
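
For reference, one generic way to limit the number of training samples in PyTorch, assuming the script builds a standard Dataset (the dataset below is an illustrative stand-in, not the repo's actual class):

```python
import torch
from torch.utils.data import TensorDataset, Subset, DataLoader

# Stand-in for the repo's real training Dataset.
full_dataset = TensorDataset(torch.randn(1000, 3, 64, 64))

# Keep only the first 100 samples to fit limited GPU memory.
small_dataset = Subset(full_dataset, indices=range(100))
loader = DataLoader(small_dataset, batch_size=2, shuffle=True)
```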