Based on the PyTorch-Transformers library by Hugging Face. Intended as a starting point for applying Transformer models to text classification tasks. Contains code to easily train BERT, XLNet, RoBERTa, and XLM models for text classification.
Where is the positional embedding in the BERT model inputs? #32
First, thanks for sharing the code, it's really helpful!
I have a question about using the pretrained BERT model on my dataset for sentence classification. As I understand it, the BERT input representation should consist of token embeddings, segment embeddings, and position embeddings, but I don't see the positional embeddings anywhere in your code. In run_model:
inputs = {'input_ids':      batch[0],
          'attention_mask': batch[1],
          'token_type_ids': batch[2] if args['model_type'] in ['bert', 'xlnet'] else None,  # XLM don't use segment_ids
          'labels':         batch[3]}
outputs = model(**inputs)
Or maybe I missed this detail; could you please tell me whether this is implemented, and if so, where exactly?
Thanks again and looking forward to your reply!
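For context on why no positional entry appears in the inputs dict above: in pytorch-transformers the position embeddings are created inside the model itself (in its BertEmbeddings layer), so run_model never has to build or pass them. Below is a minimal, simplified sketch of that internal logic; the class and parameter names mirror the library, but the code is a paraphrase under that assumption, not the exact library source.

    import torch
    import torch.nn as nn

    class SimplifiedBertEmbeddings(nn.Module):
        """Sketch of how BERT combines token, position, and segment embeddings internally."""

        def __init__(self, vocab_size=30522, hidden_size=768,
                     max_position_embeddings=512, type_vocab_size=2):
            super().__init__()
            self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
            self.position_embeddings = nn.Embedding(max_position_embeddings, hidden_size)
            self.token_type_embeddings = nn.Embedding(type_vocab_size, hidden_size)
            self.LayerNorm = nn.LayerNorm(hidden_size)
            self.dropout = nn.Dropout(0.1)

        def forward(self, input_ids, token_type_ids=None, position_ids=None):
            seq_length = input_ids.size(1)
            if position_ids is None:
                # Position ids are derived from the sequence length when the caller
                # does not pass them -- which is why run_model never constructs them.
                position_ids = torch.arange(seq_length, dtype=torch.long,
                                            device=input_ids.device)
                position_ids = position_ids.unsqueeze(0).expand_as(input_ids)
            if token_type_ids is None:
                token_type_ids = torch.zeros_like(input_ids)

            # The three embeddings are summed, normalized, and passed to the encoder.
            embeddings = (self.word_embeddings(input_ids)
                          + self.position_embeddings(position_ids)
                          + self.token_type_embeddings(token_type_ids))
            return self.dropout(self.LayerNorm(embeddings))

In other words, BertForSequenceClassification applies an embedding layer like this to input_ids before the encoder stack, so only input_ids, attention_mask, token_type_ids, and labels need to be supplied, as in the run_model snippet above.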