facebookresearch / TaBERT

This repository contains source code for the TaBERT model, a pre-trained language model for learning joint representations of natural language utterances and (semi-)structured tables for semantic parsing. TaBERT is pre-trained on a massive corpus of 26M Web tables and their associated natural language context, and could be used as a drop-in replacement of a semantic parsers original encoder to compute representations for utterances and table schemas (columns).

What is the next step after I downloaded the pretrained model? #28

Open Tizzzzy opened 8 months ago

Tizzzzy commented 8 months ago

Hi, which file should I put this code into after downloading the pretrained model?

from table_bert import TableBertModel

# Use a raw string (r'...') so backslash sequences like '\t' in a Windows-style
# path are not interpreted as escape characters
model = TableBertModel.from_pretrained(r'\Path to \TaBERT\tabert_base_k1.tar.gz')
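As for the next step: per the project description above, the loaded model is used to encode an utterance together with a table schema. A sketch of that usage is below; it assumes the `Table` and `Column` classes exported by `table_bert`, and the table contents and utterance are made-up illustrative values, so treat this as a non-authoritative outline rather than a verified recipe.

```python
from table_bert import TableBertModel, Table, Column

# Load the downloaded checkpoint (path is a placeholder)
model = TableBertModel.from_pretrained(r'\Path to \TaBERT\tabert_base_k1.tar.gz')

# Build a toy table: a header of typed columns plus cell data,
# then tokenize it with the model's own tokenizer
table = Table(
    id='List of countries by GDP',
    header=[
        Column('Nation', 'text', sample_value='United States'),
        Column('Gross Domestic Product', 'real', sample_value='21,439,453'),
    ],
    data=[
        ['United States', '21,439,453'],
        ['China', '27,308,857'],
    ],
).tokenize(model.tokenizer)

# Encode a natural language utterance jointly with the table;
# this yields contextualized representations for the utterance
# tokens and for each column of the schema
context = 'show me countries ranked by GDP'
context_encoding, column_encoding, info_dict = model.encode(
    contexts=[model.tokenizer.tokenize(context)],
    tables=[table],
)
```

So the snippet does not need to go into any particular file of the repository; you can run it from your own script or notebook, as long as `table_bert` is importable (e.g. the repo root is on `PYTHONPATH`) and the checkpoint path points at the downloaded archive.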