awslabs / gap-text2sql

GAP-text2SQL: Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training
https://arxiv.org/abs/2012.10309
Apache License 2.0

How to comprehend the evaluation results? #7

Open yaoyiyao-yao opened 3 years ago

yaoyiyao-yao commented 3 years ago

Hello, when I look at the evaluation results, I am not sure about the meaning of "predicted_parse_error" and "exact". I guess that when "predicted_parse_error" is true, it means the model couldn't produce a predicted SQL query, is that right? I also found that "exact" has three possible values: true, 0, and false. I guess that when "exact" is true, the predicted SQL is correct, but what do 0 and false mean? Thank you.

Impavidity commented 2 years ago

You can use the official evaluation script at https://github.com/taoyds/spider to evaluate the outputs. For the customized evaluation in this codebase, "predicted_parse_error": true means the model could not produce a SQL query, and both 0 and false mean the predicted SQL is wrong.
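For reference, here is a minimal sketch (not part of the repo) of how you could summarize the per-example fields described above. It assumes the evaluation output is a JSON file containing a list of per-example records with the "predicted_parse_error" and "exact" keys mentioned in this thread; the file name and the exact layout are assumptions, so adjust them to match your actual output.

```python
# Minimal sketch: summarize GAP-text2SQL-style eval output.
# Assumptions (not confirmed by the repo): the file "eval_output.json"
# holds a list of per-example dicts with "predicted_parse_error" and "exact".
import json


def summarize(path: str) -> None:
    with open(path) as f:
        items = json.load(f)

    # Per the answer above: "predicted_parse_error": true means no SQL was produced.
    parse_errors = sum(1 for r in items if r.get("predicted_parse_error") is True)

    # Per the answer above: only true counts as correct; both 0 and false are wrong.
    # `is True` distinguishes the boolean true from the integer 0 (0 == False in Python).
    exact_correct = sum(1 for r in items if r.get("exact") is True)

    total = len(items)
    print(f"examples:            {total}")
    print(f"parse errors:        {parse_errors}")
    print(f"exact-match correct: {exact_correct}")
    if total:
        print(f"exact-match acc:     {exact_correct / total:.3f}")


if __name__ == "__main__":
    summarize("eval_output.json")  # hypothetical file name
```

Using `is True` rather than `== True` avoids counting any stray integer values as correct, since in Python `0 == False` and `1 == True`. For official Spider numbers, run the evaluation script from the taoyds/spider repository on the gold and predicted SQL files instead.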