-
I am doing multi-label classification and am getting this error!
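The error itself is cut off above, so for context only: a minimal, hypothetical multi-label setup with transformers. The model name, label count, and labels below are placeholders, not from the original post; mismatched label formats are a frequent source of errors in this setup.
```python
# Hypothetical sketch of a common multi-label configuration.
# problem_type switches the model's loss to BCEWithLogitsLoss.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=4,  # placeholder label count
    problem_type="multi_label_classification",
)

inputs = tokenizer("example text", return_tensors="pt")
# Multi-label targets must be float multi-hot vectors, not class indices
labels = torch.tensor([[1.0, 0.0, 1.0, 0.0]])
outputs = model(**inputs, labels=labels)
print(outputs.loss)
```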
-
Hi Guys,
Is there any way to interpret BERT using shap?
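For what it's worth, the shap documentation shows `shap.Explainer` accepting a Hugging Face text-classification pipeline directly; a sketch along those lines (the model name is an illustrative choice, not from the question):
```python
# Sketch based on shap's documented transformers integration: wrap a
# text-classification pipeline in shap.Explainer and plot per-token
# attributions.
import shap
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    return_all_scores=True,  # shap's docs use this; newer transformers prefer top_k=None
)
explainer = shap.Explainer(classifier)
shap_values = explainer(["The movie was surprisingly good!"])
shap.plots.text(shap_values)  # token-level attribution visualization
```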
-
```
bash-4.4$ python main.py --data_dir data/VUA20 --task_name vua --model_type MELBERT --class_weight 3 --bert_model roberta-base --num_train_epoch 3 --train_batch_size 32 --learning_rate 3e-5 --warmup_e…
```
-
**Version**
PyABSA 2.4.1.post1 (see the console output for the Torch and Transformers versions)
**Describe the bug**
Traceback (most recent call last)…
-
Hello,
First of all, thank you for your spectacular work on this repo! It has helped me in a lot of projects.
Back story: I would like to scale the training process (training on multiple GPUs) using…
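The message is truncated, so the intended tooling is unknown; as one illustration, a minimal multi-GPU training sketch with Hugging Face Accelerate (the model, data, and hyperparameters here are placeholders):
```python
# Hypothetical sketch: scaling a training loop across GPUs with
# Hugging Face Accelerate. Everything model/data-related is a placeholder.
import torch
from accelerate import Accelerator

accelerator = Accelerator()  # detects available GPUs / processes

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
dataloader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(
        torch.randn(64, 10), torch.randint(0, 2, (64,))
    ),
    batch_size=8,
)

# Accelerate wraps model, optimizer, and dataloader for distributed training
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, labels in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward() under distribution
    optimizer.step()
```
Run with `accelerate launch train.py`; the same loop then executes on a single GPU or across several.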
b5y updated
10 months ago
-
It ranks at the top of the Hugging Face MTEB leaderboard:
https://huggingface.co/spaces/mteb/leaderboard
Here is more info:
https://huggingface.co/hkunlp/instructor-xl
It is a little different…
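The difference, per the model card, is that INSTRUCTOR embeds each text together with a task instruction; a sketch of the documented usage (the instruction/sentence pair is adapted from the model card's example):
```python
# Sketch of the usage documented on the hkunlp/instructor-xl model card:
# each input is an [instruction, text] pair, so the same model can produce
# task-specific embeddings.
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-xl")
embeddings = model.encode([
    ["Represent the Science title:",
     "3D ActionSLAM: wearable person tracking in multi-floor environments"],
])
print(embeddings.shape)
```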
-
### System Info
Ubuntu 22.04 on AWS running on an m5.4xlarge instance. The code is running in the context of a Laravel application (specifically a mixin for `Illuminate\Support\Str`), being tested v…
-
Thanks for providing this code. I'd love to use it, but am getting the following error when running the trainer.
```
(py38_test) [richier@reslnapollo02 transformers_ner]$ python bert_crf_trainer.p…
```
-
Bad documentation, and the error messages are terse.
-
Detecting toxicity in outputs generated by Large Language Models (LLMs) is crucial for ensuring that these models produce safe, respectful, and appropriate con…
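As a concrete starting point, one hedged sketch: scoring LLM outputs with an off-the-shelf toxicity classifier via the transformers pipeline. The model choice (unitary/toxic-bert) is an assumption; the snippet above does not name one.
```python
# Hypothetical sketch: screening LLM outputs with a public toxicity
# classifier. The model choice is an assumption, not from the original post.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

llm_output = "You are a wonderful person."  # placeholder LLM output
print(toxicity(llm_output))  # e.g. [{'label': 'toxic', 'score': ...}]
```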
-
The best finetune result (finetuned from the pretrained model you published) I get is 56.62, 83.96, 90.56, which is still 1.6 lower than your reported result; furthermore, the zero-shot evaluation result fro…