[Open] emtropyml opened this issue 5 years ago
Just set the model_type parameter to 'distilbert' and the pretrained weights name to 'distilbert-base-uncased'.
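A minimal sketch of the configuration described above. The parameter names (model_type, the tokenizer/pretrained-weights name) come from this thread; the fast-bert BertDataBunch call shown in the comments follows the project's README and is not verified against a specific release, so treat it as an assumption.

```python
# Configuration values taken from the answer above.
config = {
    "model_type": "distilbert",
    "pretrained_weights": "distilbert-base-uncased",
}

# With fast-bert installed, the databunch would be built roughly like this
# (assumed API, per the fast-bert README -- adjust paths and file names):
#
# from fast_bert.data_cls import BertDataBunch
# databunch = BertDataBunch(DATA_PATH, LABEL_PATH,
#                           tokenizer=config["pretrained_weights"],
#                           train_file="train.csv",
#                           val_file="val.csv",
#                           label_file="labels.csv",
#                           model_type=config["model_type"])

print(config["model_type"], config["pretrained_weights"])
```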
@kaushaltrivedi How about the multilingual model, "DistilmBERT"? (I want to run this for different languages.)
Is the following config correct: model_type='DistilmBERT' and tokenizer='distilbert-base-multilingual-cased'?
When I check databunch.tokenizer, it shows transformers.tokenization_distilbert.DistilBertTokenizer.
Thanks!
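The tokenizer class observed above (DistilBertTokenizer) is consistent with model_type being 'distilbert': "DistilmBERT" is the informal name of the multilingual checkpoint, not a separate model type, and the multilingual variant is selected purely through the pretrained weights name. A hedged sketch of that config (the transformers call in the comment assumes network access and is illustrative only):

```python
# The multilingual variant uses the same model type as the English model;
# only the pretrained weights name changes.
multilingual_config = {
    "model_type": "distilbert",  # not "DistilmBERT"
    "pretrained_weights": "distilbert-base-multilingual-cased",
}

# With transformers installed, the tokenizer reported by databunch.tokenizer
# could be loaded directly:
#
# from transformers import DistilBertTokenizer
# tok = DistilBertTokenizer.from_pretrained(
#     multilingual_config["pretrained_weights"])

print(multilingual_config["pretrained_weights"])
```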
How can I use DistilBERT for multi-label classification to build a fast and deployable model?