This repository contains source code for the TaBERT model, a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and can be used as a drop-in replacement for a semantic parser's original encoder to compute representations of utterances and table schemas (columns).
Hi - this is great work! I work at OmniSci, where we have access to several large datasets and contexts that we would like to try with tabert.
Can you provide a minimal example of serving/inference using TaBERT? We use Ray Serve at OmniSci - even taking the basic example here and being able to serve it would be a great pointer to using this.