google-research / tapas

End-to-end neural table-text understanding models.
Apache License 2.0

The Result of WTQ on tapas_wtq_wikisql_sqa_masklm_large_reset model is only 39.04% #46

Closed lairikeqiA closed 4 years ago

lairikeqiA commented 4 years ago

Thanks for releasing the new pre-trained model yesterday. I tested WTQ with the tapas_wtq_wikisql_sqa_masklm_large_reset model, but the result is only 39.04% on the WTQ test set. What could be the reason for that? My command line is as follows:

```shell
python3 run_task_main.py \
  --task=WTQ \
  --output_dir=/home/cjc/tapas-master/WTQ \
  --model_dir=/home/cjc/tapas-master/tapas_wtq_wikisql_sqa_masklm_large_reset \
  --init_checkpoint=/home/cjc/tapas-master/tapas_wtq_wikisql_sqa_masklm_large_reset/model.ckpt \
  --bert_config_file=/home/cjc/tapas-master/tapas_wtq_wikisql_sqa_masklm_large_reset/bert_config.json \
  --mode="predict_and_evaluate" \
  --use_tpu=False \
  --iterations_per_loop=5 \
  --test_batch_size=4 \
  --max_seq_length=512
```

ghost commented 4 years ago

You need to set the reset parameter, and I'm just realizing that this isn't possible with the current run_task_main.py. I'll push a fix for that.

In the meantime, can you check whether the tapas_wtq_wikisql_sqa_masklm_large model produces the reported accuracy?

ghost commented 4 years ago

I pushed the fix. You can now run run_task_main.py with `--reset_position_index_per_cell` for the reset models.
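For the reporter's setup, the corrected invocation would presumably just add the new flag to the original command. This is a sketch, not verified against the repo; the paths are the reporter's local paths from the original command, and the flag placement is assumed to work anywhere in the argument list:

```shell
# Assumed corrected command: same as before, plus --reset_position_index_per_cell,
# which is required when evaluating the *_reset checkpoints.
python3 run_task_main.py \
  --task=WTQ \
  --output_dir=/home/cjc/tapas-master/WTQ \
  --model_dir=/home/cjc/tapas-master/tapas_wtq_wikisql_sqa_masklm_large_reset \
  --init_checkpoint=/home/cjc/tapas-master/tapas_wtq_wikisql_sqa_masklm_large_reset/model.ckpt \
  --bert_config_file=/home/cjc/tapas-master/tapas_wtq_wikisql_sqa_masklm_large_reset/bert_config.json \
  --mode="predict_and_evaluate" \
  --use_tpu=False \
  --iterations_per_loop=5 \
  --test_batch_size=4 \
  --max_seq_length=512 \
  --reset_position_index_per_cell
```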

lairikeqiA commented 4 years ago

Thanks for your reply. I tested WTQ with the tapas_wtq_wikisql_sqa_masklm_large model just now and got the accuracy reported in the README.

ghost commented 4 years ago

Nice! Thanks for verifying.