google-research / tapas

End-to-end neural table-text understanding models.

Finetuning WTQ #104

Open · sophgit opened this issue 3 years ago

sophgit commented 3 years ago

Hello,

I am trying to fine-tune tapas_wtq_wikisql_sqa_masklm_medium_reset. Just to see whether it works in general, I wanted to fine-tune it on the same data it was already trained on, WTQ. Creating the training data worked fine. However, the following command:

```
!cd tapas && python3 tapas/run_task_main.py \
  --task="WTQ" \
  --output_dir="/content/gdrive/MyDrive/data/tapas1/output1" \
  --init_checkpoint="/content/tapas_wtq_wikisql_sqa_masklm_medium_reset/model.ckpt" \
  --bert_config_file="/content/tapas_wtq_wikisql_sqa_masklm_medium_reset/bert_config.json" \
  --mode="train" \
  --use_tpu
```

at first produced errors saying that no TPU was found, even though Colab had assigned me a TPU runtime. After removing the --use_tpu flag it ran, but training took only about a minute and created a model folder that does not contain a vocab.txt. So I suspect something went wrong. Any idea what I did wrong?
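In case it is relevant: the Colab TPU runtime does expose the TPU worker address through an environment variable, so the runtime itself seems fine. This is a minimal check I ran (assuming a TF 1.x Colab TPU runtime; COLAB_TPU_ADDR and the grpc:// prefix are the standard Colab conventions):

```python
import os

# In a Colab TPU runtime (TF 1.x era), the TPU worker address is
# exposed via the COLAB_TPU_ADDR environment variable.
tpu_address = "grpc://" + os.environ["COLAB_TPU_ADDR"]
print("TPU address:", tpu_address)
```

If run_task_main.py follows the usual BERT-style TPU flags, this address presumably has to be passed to the script explicitly rather than relying on --use_tpu alone, but I am not sure which flag it expects.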

Thank you!

eisenjulian commented 3 years ago

Hello @sophgit, in order to help you better, could you share the logs of both of the errors you describe? Even better, if you can share a Colab that reproduces the problem, that would help a lot.

sophgit commented 3 years ago

@eisenjulian Sure, this is the link to the Colab: https://colab.research.google.com/drive/13r6bewT80vG1LaVyILuo9rTZPqa8STM0?usp=sharing I also attached a screenshot of the model folder that the code produced: [screenshot of the model folder]
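One more data point: since the checkpoint I initialize from is already fine-tuned on WTQ, I wondered whether training stopped so quickly because the restored global step was already at the configured number of train steps. Here is a minimal sketch I used to inspect the checkpoint (the path is from my setup, and "global_step" is the usual TF 1.x Estimator variable name, so treat both as assumptions):

```python
import tensorflow.compat.v1 as tf

# Checkpoint path from my setup; adjust as needed.
ckpt_path = "/content/tapas_wtq_wikisql_sqa_masklm_medium_reset/model.ckpt"

# Load the checkpoint and read the step counter, if present.
reader = tf.train.load_checkpoint(ckpt_path)
if reader.has_tensor("global_step"):
    print("global_step:", reader.get_tensor("global_step"))
else:
    print("No global_step variable in this checkpoint.")
```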

I'd like to point out again that I was just trying to see whether I could get the fine-tuning to work before annotating my own data. Thank you for your support!