-
transformers version: 4.31.0
After QLoRA fine-tuning the llama-7b model with the train_llm.sh script, running `bash eval_checkpoints.sh llama-7b-lora` to test the results fails with the following error:
```
llama-7b-lora/checkpoint-100
/data4/caoqian/pretrained_models/llama-2-…
```
-
Hi, I'm trying to reproduce the repo's supervised-training results and am using the following environment:
V100 (p3.2xlarge)
Transformers 4.2.1
PyTorch 1.12
CUDA Version: 11.3
Wh…
-
For a lot of configs in https://huggingface.co/datasets/sil-ai/bloom-speech, we get PreviousStepFormatError.
-
Hi guys. I'm just trying to follow training_stsbenchmark_continue_training.py to build my own model, but both my model and these examples show the error. I did not make any changes to the training_stsb…
-
I'm getting an error when I try to run [training_stsbenchmark_bilstm.py](https://github.com/UKPLab/sentence-transformers/blob/46a149433fe9af0851f7fa6f9bf37b5ffa2c891c/examples/training/avg_word_embedd…
-
This is not a question about tf_hub but about the Universal Sentence Encoder. If this is not the right place, let me know the appropriate forum to post this.
I noticed that the transformer model (U…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Some op failed, reporting 'IndentationError: expected an indented block'.
```
2023-02-08T15:…
```
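For context (not something the truncated log above shows): Python raises `IndentationError: expected an indented block` when a compound statement's header line (`def`, `if`, `for`, …) is not followed by an indented body. A minimal stdlib reproduction:

```python
# A `def` header whose body is not indented fails to compile with
# IndentationError; the message begins "expected an indented block".
bad = "def f():\nreturn 1\n"        # body at column 0 -> invalid
good = "def f():\n    return 1\n"   # body indented -> valid

try:
    compile(bad, "<snippet>", "exec")
except IndentationError as err:
    print("caught:", err.msg)

compile(good, "<snippet>", "exec")  # compiles cleanly once the body is indented
print("fixed version compiles")
```

In generated-code pipelines this usually means an emitted snippet lost its indentation before being executed, not that the op itself is broken.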
-
I have used BERT's NextSentencePredictor to find similar sentences or similar news, but it's super slow, even on a Tesla V100, currently the fastest GPU available. It takes around 10 seconds for a query tit…
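A common workaround for this (my suggestion, not something the snippet states): encode each sentence once into a fixed-size vector and rank candidates per query with cosine similarity, so the model runs once per sentence instead of once per query/candidate pair as NSP scoring requires. A stdlib-only sketch, where the precomputed vectors stand in for a real encoder's output:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical precomputed embeddings: in practice, one cached forward pass
# per sentence, rather than a full BERT NSP pass for every pair.
corpus = {
    "news A": [0.9, 0.1, 0.0],
    "news B": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]

best = max(corpus, key=lambda title: cosine(query, corpus[title]))
print(best)  # "news A" is closest to the query vector
```

With embeddings cached, each query costs only vector comparisons (microseconds), which is why sentence-embedding models are the usual tool for similarity search at scale.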
-
### System Info
- torch==1.8.1+cu101
- transformers==4.10.1
- Python 3.8
- "Ubuntu 18.04.6 LTS"
I am training on parallel GPUs without pretrained weights. However, during train…