Closed: PunitShah1988 closed this issue 4 years ago
Hello @PunitShah1988, this is the current data preparation pipeline:
The raw data is first converted into Interactions (see tapas/utils/task_utils.py for reference), which contain the questions, answers and table information. From there we convert into tf_examples, which are the numeric features (after the text is tokenized and mapped to indices) that are fed into the model and used for training. This happens in the module tapas/utils/tf_example_utils.py.
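As a rough illustration of that second step (a sketch following the usage in the repo's prediction colab, not verbatim code from the repo), the conversion can be driven like this; treat the vocab path and the sequence/column/row limits as assumptions, and `interaction` is an `interaction_pb2.Interaction` like the one built in the next snippet:

```python
# Sketch: converting an Interaction proto into model features with
# tf_example_utils, following the pattern used in the repo's prediction colab.
# `interaction` is an interaction_pb2.Interaction (see the next snippet);
# the vocab path and the length limits below are assumptions.
from tapas.utils import tf_example_utils

config = tf_example_utils.ClassifierConversionConfig(
    vocab_file="tapas_model/vocab.txt",  # vocab shipped with the checkpoint (assumed path)
    max_seq_length=512,
    max_column_id=512,
    max_row_id=512,
    strip_column_names=False,
    add_aggregation_candidates=False,
)
converter = tf_example_utils.ToClassifierTensorflowExample(config)

# Each question of an interaction is converted separately; the second argument
# is the index of the question within the interaction.
examples = [
    converter.convert(interaction, i)
    for i in range(len(interaction.questions))
]
```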
The colab we added to the repo shows how to create an Interaction, and save it directly as a tf_record. If you do that for all of your data then you can run the train job directly.
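If it helps, here is a minimal sketch of that idea, assuming the field names in tapas/protos/interaction.proto (an `Interaction` with `id`, a `table` holding `columns`/`rows`, and `questions` with `original_text` and `answer`); the table contents, ids and output filename are made up for illustration:

```python
# Sketch: building one Interaction (a table plus its questions) and writing the
# serialized proto to a TFRecord file that the training job can consume.
# Table contents, ids and the output filename are illustrative only.
import tensorflow.compat.v1 as tf
from tapas.protos import interaction_pb2

interaction = interaction_pb2.Interaction()
interaction.id = "my-data-0"

# Header cells go into `columns`, each data row into `rows`.
table = interaction.table
table.table_id = "my_table_0"
for header in ["Name", "Age"]:
    table.columns.add().text = header
for values in [["Alice", "30"], ["Bob", "25"]]:
    row = table.rows.add()
    for value in values:
        row.cells.add().text = value

# One question with its answer text; add answer coordinates too if you have them.
question = interaction.questions.add()
question.id = "my-data-0_0"
question.original_text = "How old is Alice?"
question.answer.answer_texts.append("30")

# Serialize and write; repeat for every interaction in your dataset.
with tf.io.TFRecordWriter("interactions.tfrecord") as writer:
    writer.write(interaction.SerializeToString())
```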
Alternatively, you can try to get your data into the same format as the SQA TSVs, or add your own conversion code to task_utils.py. You may also want to think about which of the three datasets your problem is most similar to: is it conversational (SQA), does it require aggregations (WTQ, WikiSQL), do you have supervision for which cells to aggregate (WikiSQL), etc.
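For reference, here is a hedged sketch of writing a question/answer row in an SQA-style TSV; the column names and the list-of-strings encoding of coordinates/answers reflect my reading of the SQA files shipped with the repo, and the table_file path is hypothetical, so compare against an actual SQA TSV before relying on it:

```python
# Sketch: writing one SQA-style TSV row with the standard library. Column names
# and the string encoding of answer_coordinates/answer_text are my reading of
# the SQA data; the table_file path is a hypothetical example.
import csv

rows = [{
    "id": "my-data-0",                         # interaction id
    "annotator": "0",
    "position": "0",                           # index of the question in the sequence
    "question": "How old is Alice?",
    "table_file": "table_csv/my_table_0.csv",  # CSV file holding the table
    "answer_coordinates": "['(0, 1)']",        # (row, column) of the answer cell
    "answer_text": "['30']",
}]

with open("my_data.tsv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```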
Hope this helps, good luck!
Hi,
I am curious if you can guide or confirm how I can use the TAPAS framework on my custom data. I couldn't find any resources that talk about this.
Thank you.