Living-with-machines / DeezyMatch

A Flexible Deep Learning Approach to Fuzzy String Matching
https://living-with-machines.github.io/DeezyMatch/

Read and use vocab when fine-tuning / testing a model on new data to deal with missing chars #15

Closed · fedenanni closed this 4 years ago

fedenanni commented 4 years ago

@kasra-hosseini @mcollardanuy when fine-tuning across datasets I get a bug, and it might be the one we were expecting.

[Screenshot: error traceback, 2020-06-04 16:57]

To reproduce it in the VM you need to:

1) be on 15-check-vocab, or on develop after the new PR is merged

2) have a model trained on the gb1900 test set

3) run:

    python DeezyMatch.py -i input_dfm.yaml -d /home/mariona/githubCode/DeezyMatch/dataset/ocr_test.txt -f gb1900 -n 100 -m finetuned_model
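
For context, the fix this issue asks for is roughly: when fine-tuning or testing on new data, read the vocab saved with the pretrained model and map characters that are missing from it to an unknown token, instead of rebuilding the vocab from the new dataset. A minimal sketch of that logic (the vocab.pkl file name, the dict layout, and the helper names are assumptions for illustration, not the actual DeezyMatch internals):

    import pickle

    # Hypothetical layout: the model directory stores a char-to-index
    # dict, e.g. {"a": 0, "b": 1, ...}, including an "<unk>" entry.
    def load_vocab(path):
        with open(path, "rb") as f:
            return pickle.load(f)

    def encode(string, vocab, unk_token="<unk>"):
        # Map chars missing from the pretrained vocab to the <unk> index
        # instead of failing on them during fine-tuning / testing.
        unk_idx = vocab[unk_token]
        return [vocab.get(ch, unk_idx) for ch in string]

    vocab = load_vocab("./models/gb1900/vocab.pkl")  # assumed path / file name
    print(encode("Ωxford", vocab))  # "Ω" unseen by gb1900 -> mapped to <unk>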

kasra-hosseini commented 4 years ago

Was the gb1900 model created using the latest version of DeezyMatch? My guess is that there is an inconsistency between the gb1900 model's architecture and the current GRU net. I could reproduce the error (on the gb1900 model), but the following commands work:

1. Create a new model:

    python DeezyMatch.py -i input_dfm.yaml -d dataset/dataset-string-similarity_test.txt -m test001_for_finetune

2. Fine-tune the model:

    python DeezyMatch.py -i ./models/test001_for_finetune/input_dfm.yaml -d /home/mariona/githubCode/DeezyMatch/dataset/ocr_test.txt -f test001_for_finetune -m FT_test001_for_finetune
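
One quick way to check the architecture-mismatch hypothesis is to diff the saved checkpoint's state_dict against a freshly built net. A generic PyTorch sketch, not a DeezyMatch command (the checkpoint path and whether a full model object or a state_dict was saved are assumptions):

    import torch

    def diff_state_dicts(ckpt_path, current_model):
        """Diff a saved checkpoint against a freshly built model; missing
        keys or shape mismatches point to an architecture change between
        DeezyMatch versions."""
        old_state = torch.load(ckpt_path, map_location="cpu")
        if not isinstance(old_state, dict):  # a full model object was saved
            old_state = old_state.state_dict()
        new_state = current_model.state_dict()

        old_keys, new_keys = set(old_state), set(new_state)
        print("only in checkpoint: ", sorted(old_keys - new_keys))
        print("only in current net:", sorted(new_keys - old_keys))
        for k in sorted(old_keys & new_keys):
            if old_state[k].shape != new_state[k].shape:
                print(f"{k}: {tuple(old_state[k].shape)} "
                      f"vs {tuple(new_state[k].shape)}")
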
kasra-hosseini commented 4 years ago

btw, would it be possible to change the -f flag such that it also accepts absolute paths?

e.g., the following command does not work at the moment (see the -f flag):

    python DeezyMatch.py -i ./models/test001_for_finetune/input_dfm.yaml -d /home/mariona/githubCode/DeezyMatch/dataset/ocr_test.txt -f /datadrive/khosseini/DeezyMatch/models/test001_for_finetune -m FT_test001_for_finetune
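
A possible shape for that fix, keeping the current name-based lookup backward compatible (an os.path sketch, not the actual DeezyMatch code; the function name and models_root default are assumptions):

    import os

    def resolve_model_dir(f_arg, models_root="./models"):
        """Resolve the -f argument: accept either a bare model name
        (looked up under ./models, the current behaviour) or an
        absolute/relative path to the model directory."""
        if os.path.isdir(f_arg):
            return os.path.abspath(f_arg)
        return os.path.join(models_root, f_arg)

    # Both forms would then work:
    # resolve_model_dir("test001_for_finetune")
    # resolve_model_dir("/datadrive/khosseini/DeezyMatch/models/test001_for_finetune")
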
mcollardanuy commented 4 years ago

Hi! I have also just tried it (on the 15-check-vocab branch) with Kasra's command from the previous comment.

It all worked with no errors, but is it expected that we get such high numbers on the training set and such a gap on the validation set?

[Attachment: wikigazgr training/validation results]
fedenanni commented 4 years ago

@kasra-hosseini no problem, I'll take care of the -f flag for absolute paths now.

@mcollardanuy I think it's because it overfits like crazy on the training set with only 100 training instances.

I'll fix the absolute paths, but it would be good to test this across resources that do not fully share a vocabulary. I'll retrain the gb1900 model, then try to fine-tune it on the ocr data and let you know how it goes.

kasra-hosseini commented 4 years ago

@fedenanni Can we close this?

fedenanni commented 4 years ago

Yes, I think so.