OCR-D / ocrd_anybaseocr

DFKI Layout Detection for OCR-D
Apache License 2.0

#72: Error executing dewarp on page level (version 1.0.1, ocrd/core 2.17.0)

Closed: VolkerHartmann closed this issue 2 years ago

VolkerHartmann commented 4 years ago

I got the following error message while executing dewarping with the model listed in the docs:

```
ocrd-anybaseocr-dewarp --mets /test/data/mets.xml --working-dir /test/data --input-file-grp OCR-D-PAGE-DESKEW --output-file-grp OCR-D-PAGE-DEWARP --parameter {"model_path":"/test/python/pix2pixHD/models/latest_net_G.pth"} --log-level ERROR
```

```
06:57:13.179 CRITICAL root - getLogger was called before initLogging. Source of the call:
06:57:13.179 CRITICAL root -   File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd_anybaseocr/cli/ocrd_anybaseocr_dewarp.py", line 35, in <module>
06:57:13.179 CRITICAL root -     LOG = getLogger('OcrdAnybaseocrDewarper')
06:57:13.179 INFO root - Overriding log level globally to ERROR
06:57:13.180 CRITICAL root - initLogging was called multiple times. Source of latest call:
06:57:13.180 CRITICAL root -   File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd/decorators/__init__.py", line 40, in ocrd_cli_wrap_processor
06:57:13.180 CRITICAL root -     initLogging()
/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/torchvision/transforms/transforms.py:257: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  "please use transforms.Resize instead.")
/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd_anybaseocr/pix2pixhd/models/pix2pixHD_model.py:128: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  input_label = Variable(input_label, volatile=infer)
Traceback (most recent call last):
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/bin/ocrd-anybaseocr-dewarp", line 8, in <module>
    sys.exit(cli())
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd_anybaseocr/cli/ocrd_anybaseocr_dewarp.py", line 189, in cli
    return ocrd_cli_wrap_processor(OcrdAnybaseocrDewarper, *args, **kwargs)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd/decorators/__init__.py", line 81, in ocrd_cli_wrap_processor
    run_processor(processorClass, ocrd_tool, mets, workspace=workspace, **kwargs)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd/processor/helpers.py", line 68, in run_processor
    processor.process()
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd_anybaseocr/cli/ocrd_anybaseocr_dewarp.py", line 139, in process
    model, dataset, page, page_xywh, page_id, input_file, orig_img_size, n)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd_anybaseocr/cli/ocrd_anybaseocr_dewarp.py", line 169, in _process_segment
    data['label'], data['inst'], data['image'])
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd_anybaseocr/pix2pixhd/models/pix2pixHD_model.py", line 216, in inference
    fake_image = self.netG.forward(input_concat)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/ocrd_anybaseocr/pix2pixhd/models/networks.py", line 211, in forward
    return self.model(input)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 419, in forward
    return self._conv_forward(input, self.weight)
  File "/ocrd_all/venv/local/sub-venv/headless-tf22/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 416, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
```
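For context on the final `RuntimeError`: it signals a device mismatch — the input tensor is on the GPU (`torch.cuda.FloatTensor`) while the generator's weights are still on the CPU (`torch.FloatTensor`). A minimal sketch reproducing this class of error with a toy convolution (this is illustrative only, not the processor's actual code, and says nothing about how #89 fixes it in ocrd_anybaseocr):

```python
import torch
import torch.nn as nn

# Toy stand-in for the pix2pixHD generator; nn.Module weights live on the CPU by default.
model = nn.Conv2d(3, 8, kernel_size=3)
x = torch.randn(1, 3, 32, 32)

if torch.cuda.is_available():
    try:
        # Input moved to the GPU while the weights stay on the CPU:
        # this raises the same "Input type ... and weight type ... should be
        # the same" RuntimeError seen in the traceback above.
        model(x.cuda())
    except RuntimeError as err:
        print(err)

# General remedy: keep model and input on the same device.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
out = model(x.to(device))
print(out.shape)  # torch.Size([1, 8, 30, 30])
```

In the CLI scenario this means the loaded `latest_net_G.pth` weights and the page image tensor must be moved to the same device before inference.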

Processors used beforehand:

| Processor | Version | Parameter |
| --- | --- | --- |
| ocrd-cis-ocropy-binarize | 0.1.2 | `{"method":"ocropy","level-of-operation":"page"}` |
| ocrd-anybaseocr-crop | 1.0.1 | `{"operation_level":"page"}` |
| ocrd-skimage-binarize | 0.1.0 | `{"method":"li","level-of-operation":"page"}` |
| ocrd-skimage-denoise | 0.1.0 | `{"level-of-operation":"page"}` |
| ocrd-tesserocr-deskew | 0.9.3 | `{"operation_level":"page"}` |

 

kba commented 2 years ago

Fixed by #89.