Belval / CRNN

A TensorFlow implementation of https://github.com/bgshih/crnn
MIT License

Testing problem #28

Closed · yeLer closed this issue 6 years ago

yeLer commented 6 years ago

When I test with python3 run.py -ex ../out --test --restore, I can't get any results. The out folder contains the pictures generated by the code from your repo (TextRecognitionDataGenerator). The console just outputs this:

```
Restoring Checkpoint is valid 0 Loading data Testing

Process finished with exit code 0
```

Thank you for helping me!

LJXLJXLJX commented 6 years ago

I'm running into the same problem. Have you solved it?

Belval commented 6 years ago

I was able to test. Download the file I attached and unzip it.

Then call python3 run.py -ex UNZIP_PATH --test --restore where UNZIP_PATH should be the unzipped folder path.

test.zip

It works on my end; please tell me if this solves your issue.
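
If it runs but still prints no results with your own folder, it is worth double-checking that the path passed to -ex actually contains image files the script can pick up. A quick sanity check in plain Python (nothing repo-specific; ../out is just the path from the earlier command):

```python
import os

examples_path = "../out"  # the folder passed to -ex
images = [f for f in os.listdir(examples_path)
          if f.lower().endswith((".jpg", ".jpeg", ".png"))]
print("Found {} candidate image files in {}".format(len(images), examples_path))
```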

Adarsh-sophos commented 5 years ago

Hey @Belval, can you tell me how to run this code on just one image? I tried to run this command: python run.py -ex ../samples/ --test --restore -bs 1

but it gives this error:

```
Traceback (most recent call last):
  File "run.py", line 118, in <module>
    main()
  File "run.py", line 111, in main
    args.restore
  File "C:\Users\adars\Downloads\CRNN\CRNN\crnn.py", line 53, in __init__
    self.__data_manager = DataManager(batch_size, model_path, examples_path, max_image_width, train_test_ratio, self.__max_char_count)
  File "C:\Users\adars\Downloads\CRNN\CRNN\data_manager.py", line 28, in __init__
    self.test_batches = self.__generate_all_test_batches()
  File "C:\Users\adars\Downloads\CRNN\CRNN\data_manager.py", line 112, in __generate_all_test_batches
    (-1)
  File "C:\Users\adars\Downloads\CRNN\CRNN\utils.py", line 17, in sparse_tuple_from
    indices.extend(zip([n]*len(seq), [i for i in range(len(seq))]))
TypeError: object of type 'numpy.int32' has no len()
```
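
The last line of that traceback is just len() being called on a NumPy scalar, which can be reproduced in isolation; it suggests that with -bs 1 the label reaches sparse_tuple_from as a single value instead of a list:

```python
import numpy as np

label = np.int32(7)  # a lone label arriving as a bare NumPy scalar
len(label)           # TypeError: object of type 'numpy.int32' has no len()
```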

Belval commented 5 years ago

Another issue (which I can't find right now) highlighted that this is not currently possible because of how the data manager class handles batches.

I'll try to find time to fix this. In the meantime, feel free to attempt a fix yourself; I'd accept a PR for this.
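
For anyone who wants to attempt that PR: the crash happens at the len(seq) call in sparse_tuple_from (utils.py, line 17 of the traceback above), which receives a bare NumPy scalar when the batch holds a single label. One possible approach is to coerce each sequence to a 1-D array before taking its length. The sketch below reconstructs the rest of the function from the usual CTC sparse_tuple_from pattern, so it may not match the repo's file exactly; treat it as an untested suggestion rather than the official fix.

```python
import numpy as np

def sparse_tuple_from(sequences, dtype=np.int32):
    """Build the (indices, values, shape) triple used to feed a TensorFlow
    sparse tensor from a list of label sequences."""
    indices = []
    values = []

    for n, seq in enumerate(sequences):
        # With a batch size of 1 the label can arrive as a bare NumPy scalar,
        # which has no len(); coerce it to a 1-D array first.
        seq = np.atleast_1d(seq)
        indices.extend(zip([n] * len(seq), range(len(seq))))
        values.extend(seq)

    indices = np.asarray(indices, dtype=np.int64)
    values = np.asarray(values, dtype=dtype)
    shape = np.asarray([len(sequences), indices.max(axis=0)[1] + 1], dtype=np.int64)

    return indices, values, shape
```

The same guard could just as well live in __generate_all_test_batches in data_manager.py, wrapping the single label in a list before it ever reaches this function.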