facebookresearch / TaBERT

This repository contains the source code for the TaBERT model, a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and can be used as a drop-in replacement for a semantic parser's original encoder to compute representations for utterances and table schemas (columns).
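As a rough illustration of that usage, encoding an utterance together with a table looks like the following sketch, which follows the style of the repository README (the checkpoint path is a placeholder):

```python
from table_bert import TableBertModel, Table, Column

# Load a pre-trained TaBERT checkpoint (path is a placeholder).
model = TableBertModel.from_pretrained('path/to/pretrained/model/checkpoint.bin')

# Describe the table schema plus a few rows of content.
table = Table(
    id='List of countries by GDP (PPP)',
    header=[
        Column('Nation', 'text', sample_value='United States'),
        Column('Gross Domestic Product', 'real', sample_value='21,439,453'),
    ],
    data=[
        ['United States', '21,439,453'],
        ['China', '27,308,857'],
    ],
).tokenize(model.tokenizer)

# Natural language utterance paired with the table.
context = 'show me countries ranked by GDP'

# Returns contextualized encodings for the utterance tokens and for each column.
context_encoding, column_encoding, info_dict = model.encode(
    contexts=[model.tokenizer.tokenize(context)],
    tables=[table],
)
```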

Prediction on large tables #9

Open Sharathmk99 opened 3 years ago

Sharathmk99 commented 3 years ago

Hi, thank you for the amazing source code. I have a table with 20,000 rows and 40 columns; can I still use TaBERT for prediction? Is there any limitation on sequence length?

Thank you!!
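For reference, TaBERT is pre-trained on small "content snapshots" of each table (typically one to three rows) rather than on full tables, and the linearized utterance-plus-schema input still has to fit within the underlying BERT encoder's 512-token limit. Under those assumptions, one workaround for a very large table is to select a handful of representative rows before building the `Table` object; the sketch below does this with a purely illustrative random-sampling helper (not part of the TaBERT API), and with 40 columns some column pruning may also be needed to stay under the token limit:

```python
import random
from table_bert import TableBertModel, Table, Column

model = TableBertModel.from_pretrained('path/to/pretrained/model/checkpoint.bin')

def sample_rows(rows, k=3, seed=0):
    """Illustrative helper: pick k rows from a large table as a content snapshot."""
    rng = random.Random(seed)
    return rng.sample(rows, k) if len(rows) > k else rows

# Stand-in for the 20,000 x 40 table and its schema.
all_rows = [[f'value_{r}_{c}' for c in range(40)] for r in range(20000)]
columns = [Column(f'col_{c}', 'text', sample_value=all_rows[0][c]) for c in range(40)]

table = Table(
    id='large_table',
    header=columns,
    data=sample_rows(all_rows),  # encode only a small snapshot, not all 20,000 rows
).tokenize(model.tokenizer)

question = 'which rows match my query?'
context_encoding, column_encoding, info_dict = model.encode(
    contexts=[model.tokenizer.tokenize(question)],
    tables=[table],
)
```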