-
The project I'm working on requires RoBERTa. Could anyone tell me whether I can simply put the chinese-roberta-wwm-ext model weights and vocab into the bert_pretain directory and start training directly?
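Generally yes: chinese-roberta-wwm-ext is trained with the BERT architecture (its Hugging Face card recommends loading it with BERT classes, not RoBERTa ones), so its weights and vocab are usually drop-in compatible with BERT-style pretrain directories. A minimal sketch for sanity-checking the directory before training; the file names assume the usual Hugging Face BERT layout, which may differ from what your specific training script expects:

```python
from pathlib import Path

# chinese-roberta-wwm-ext uses the BERT architecture, so its checkpoint and
# vocab load via BertTokenizer/BertModel. The file names below are the common
# Hugging Face BERT layout -- an assumption, adjust for your repo if needed.
REQUIRED_FILES = ("config.json", "pytorch_model.bin", "vocab.txt")

def check_pretrain_dir(path):
    """Return the list of required files missing from a pretrain directory."""
    root = Path(path)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]

missing = check_pretrain_dir("bert_pretain")
if missing:
    print("missing files:", missing)
else:
    print("directory looks complete; BERT loading code should accept it")
```

Note that `RobertaTokenizer` would fail on this model because it expects BPE `merges.txt`; `BertTokenizer` with `vocab.txt` is the right combination here.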
-
Run in the Nightly FD suite:
`pytest models/experimental/roberta/tests/test_roberta_for_token_classification.py::test_roberta_for_token_classification`
This is a GS-only test, currently failing with th…
-
The code breaks when using a model other than BERT. I debugged the code and found that it is written for the BERT tokenizer only, while the tokenizers of other transformer models ar…
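One concrete way this kind of breakage shows up is in special-token handling: BERT wraps inputs in `[CLS]`/`[SEP]` while RoBERTa uses `<s>`/`</s>`, so preprocessing that hard-codes BERT's tokens produces wrong inputs for other families. A small illustrative sketch (the token table covers two well-known families and is not exhaustive; in real code, reading `tokenizer.cls_token` and `tokenizer.sep_token` from the loaded Hugging Face tokenizer is more robust than any hard-coded table):

```python
# Special-token conventions differ across transformer families -- a common
# reason BERT-only preprocessing breaks on other models. Illustrative table:
SPECIAL_TOKENS = {
    "bert":    {"cls": "[CLS]", "sep": "[SEP]", "pad": "[PAD]"},
    "roberta": {"cls": "<s>",   "sep": "</s>",  "pad": "<pad>"},
}

def wrap_sentence(tokens, model_type):
    """Add the model-specific boundary tokens around a token list."""
    t = SPECIAL_TOKENS[model_type]
    return [t["cls"], *tokens, t["sep"]]

print(wrap_sentence(["hello", "world"], "bert"))     # ['[CLS]', 'hello', 'world', '[SEP]']
print(wrap_sentence(["hello", "world"], "roberta"))  # ['<s>', 'hello', 'world', '</s>']
```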
-
System Info
GPU: NVIDIA RTX 4090
TensorRT-LLM 0.13
Question 1: How can I use the OpenAPI to perform inference on a TensorRT engine model?
root@docker-desktop:/llm/tensorrt-llm-0.13.0/examples/apps# pyt…
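If the server from `examples/apps` exposes an OpenAI-compatible HTTP API, inference is a plain JSON POST to a completions endpoint. A stdlib-only sketch; the URL, port, and model name are placeholders and not taken from any specific TensorRT-LLM setup:

```python
import json
import urllib.request

# Placeholder endpoint -- adjust host/port to wherever your server listens.
BASE_URL = "http://localhost:8000/v1/completions"

def build_completion_request(model, prompt, max_tokens=64):
    """Build an OpenAI-style /v1/completions payload."""
    return {"model": model, "prompt": prompt, "max_tokens": max_tokens}

def complete(model, prompt):
    """POST the payload and return the first generated text choice."""
    payload = json.dumps(build_completion_request(model, prompt)).encode()
    req = urllib.request.Request(
        BASE_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

The same payload works with `curl -d` against the endpoint, since the request body is just the standard OpenAI completions schema.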
-
Hi, has anybody looked into training a version of udify with XLM-RoBERTa? It seems like it could help with the low-resource languages in multilingual BERT, so I'm planning to give it a go if nobody else…
-
## Description
Since the **INormalization** layer was added in TRT 8.6, I ran some tests on FP16 accuracy:
1. First, I used Hugging Face's bert-base-cased and exported it to ONNX (opset 17). Then …
-
I'm getting an error when trying to preprocess with `bash preprocess.sh`:
/usr/bin/python3: Error while finding module specification for 'examples.roberta.multiprocessing_bpe_en…
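"Error while finding module specification" from `python -m pkg.module` means the package is not importable from the current working directory; for fairseq's preprocessing scripts the usual fix is to run them from the fairseq repo root, or to point `PYTHONPATH` there. A stdlib-only demonstration of the mechanism, using a throwaway package name (`examples_demo_pkg` is a stand-in, not a fairseq module):

```python
import importlib.util
import os
import sys
import tempfile

# Create a throwaway package in a temp directory that is NOT on sys.path yet.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "examples_demo_pkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()

# Not importable yet -- this is the situation behind the error above.
print(importlib.util.find_spec("examples_demo_pkg") is None)       # True

# Equivalent to running with PYTHONPATH=$root (or cd-ing into $root).
sys.path.insert(0, root)
print(importlib.util.find_spec("examples_demo_pkg") is not None)   # True
```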
-
Can someone train it with an unfrozen RoBERTa and upload a checkpoint?
-
RuntimeError: mat1 and mat2 shapes cannot be multiplied (11264x1024 and 768x6)
when
```
'roberta': (
    'transformers.BertTokenizer',
    'transformers.RobertaModel',
    'transforme…
```
-
Tried to load a finetuned roberta model with
```
import mii
pipe = mii.pipeline("roberta fine-tuned model path")
```
But it raises an error:
```
ValueError: Unsupported model type roberta
```
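DeepSpeed-MII's pipeline supports only a limited set of architectures, so an encoder model like RoBERTa is rejected before loading. One pragmatic guard is to inspect the checkpoint's `config.json` before choosing a backend; the supported set below is an illustrative assumption, not MII's actual list:

```python
import json
from pathlib import Path

# Illustrative assumption of MII-supported decoder families -- check the
# DeepSpeed-MII docs for the real list for your version.
ASSUMED_MII_SUPPORTED = {"llama", "opt", "mistral", "falcon"}

def pick_backend(model_path):
    """Return 'mii' or 'transformers' based on the checkpoint's model_type."""
    config = json.loads((Path(model_path) / "config.json").read_text())
    return "mii" if config.get("model_type") in ASSUMED_MII_SUPPORTED else "transformers"
```

For a fine-tuned RoBERTa this returns `"transformers"`, where a standard fallback such as `transformers.pipeline("fill-mask", model=model_path)` can serve the model instead of MII.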