-
Hi,
I am trying to run mBERT on my machine, but I get the following error:
cmd@zkti:/mnt/disk1/cmd/mBERT$ ./mBERT.sh -in=/mnt/disk1/cmd/defects4j_fixed/Chart/Chart_1_fixed/source/org/jfree/chart/annotation…
-
Continuing my previous post: [How do I do model chain processing and batch processing for analyzing text data?](https://github.com/pytorch/serve/issues/1055)
Can I create two workflows using the same…
-
It would be helpful to see these models from HF on the leaderboard:
* `xlm-roberta-base` (base of HerBERT)
* `xlm-roberta-large` (base of HerBERT)
* `facebook/xlm-roberta-xl` - needs more VRAM
* `…
-
Hi there,
When I run finetune-exp2.sh, it crashes at the inference part:
------
python3 inference.py --cktpath checkpoints/exp2/pflen5_iter5_loss1_1_2_lr0.0001_bsz2_seed128/checkpoint_best.pt
…
-
## 🐛 Bug
**Describe the bug**
Hi,
I've scripted a RoBERTa model, and when I make two inference calls on it, the second call returns a result only after several minutes (up to 16 minutes).
I see t…
-
I have fine-tuned a RoBERTa model (`TFAutoModelForSequenceClassification`-based) saved to a local path.
How can I perform text-classification inference with that model using DeepSparse?
I am using a GPU and …
-
I don't know why this problem occurs; thank you for reading.
OSError: Model name '../pretrained_models/RoBERTa-zh-Large/' was not found in tokenizers model name list (roberta-base, roberta-large, rob…
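The `OSError` above is raised when an older transformers release fails to resolve the given path as a local model directory and falls back to its hard-coded model-name list. A minimal stdlib sketch of a pre-flight check (the helper name and the file list are my assumptions for illustration; this is not a transformers API):

```python
import tempfile
from pathlib import Path

def has_roberta_tokenizer_files(model_dir: str) -> bool:
    """Check that a local directory contains the vocab files a
    RoBERTa-style BPE tokenizer expects before trying to load it."""
    d = Path(model_dir)
    required = ("vocab.json", "merges.txt")
    return d.is_dir() and all((d / name).is_file() for name in required)

# Example: an empty directory is rejected; one with the vocab files passes.
with tempfile.TemporaryDirectory() as tmp:
    print(has_roberta_tokenizer_files(tmp))   # missing files -> False
    for name in ("vocab.json", "merges.txt"):
        (Path(tmp) / name).write_text("")
    print(has_roberta_tokenizer_files(tmp))   # files present -> True
```

If the check passes and the error persists, the transformers version is likely too old to accept arbitrary local paths for that tokenizer class.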
-
## Description
I was experimenting with the `sentence-transformers/msmarco-roberta-base-ance-firstp` model and observed some discrepancies between the outputs of the tokenizer depending on how the …
-
After training on a separate machine we got some promising results, and we are now looking to move our model into production. However, we encountered an issue. Downloading missing files and verifying the…
-
Script for training the ViT-H/14 model on a single A6000 machine, and for deployment after changing parameters
## Training script
#!/usr/bin/env bash
# Guide:
# This script supports distributed training on multi-gpu workers (as well as single-worker training).
# Please set the options …