-
```
The `seen_tokens` attribute is deprecated and will be removed in v4.41. Use the `cache_position` model input instead.
Traceback (most recent call last):
File "/home/iiau-vln/miniconda3/envs/M…
```
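A hedged sketch of the migration the warning asks for: instead of reading the deprecated `seen_tokens` attribute of the cache, newer transformers versions expect an explicit `cache_position` input. The `past_len`/`new_tokens` values below are hypothetical stand-ins; in real code they come from the KV cache and the current input chunk.

```python
# Hypothetical values standing in for the real KV-cache state:
past_len = 10    # tokens already cached (what seen_tokens used to report)
new_tokens = 3   # tokens in the current forward pass

# cache_position holds the absolute positions of the new tokens; with torch
# this would be torch.arange(past_len, past_len + new_tokens), passed as:
# model(input_ids, past_key_values=cache, cache_position=cache_position)
cache_position = list(range(past_len, past_len + new_tokens))
print(cache_position)  # [10, 11, 12]
```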
-
Traceback (most recent call last):
File "/content/zero_nlp/chatglm_v2_6b_lora/main.py", line 470, in <module>
main()
File "/content/zero_nlp/chatglm_v2_6b_lora/main.py", line 133, in main
mode…
-
### Model Series
Qwen2.5
### What are the models used?
Qwen2.5-0.5B-Instruct
### What is the scenario where the problem happened?
inference with transformers, deployment with vllm/PeftModelForCau…
-
### Describe the bug
I am trying to fine-tune llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks…
-
Hey Philipp, thank you for your amazing work!
In the `train-deploy-llm.ipynb` notebook, when running `huggingface_estimator.fit(data, wait=True)`
I just noticed that if we don't enforce dataset…
-
Facing the following error while trying to fine-tune Starcoder2 with the given script.
### Description:
For `transformers.AutoModelForCausalLM` to recognize Starcoder2 `transformers>4.39.0` is re…
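Since the script only fails on older installs, a quick version guard can make the requirement explicit. A sketch; the `installed` string is a placeholder for the real `transformers.__version__`:

```python
def version_tuple(v: str) -> tuple:
    # "4.40.1" -> (4, 40, 1); enough for plain x.y.z version strings
    return tuple(int(p) for p in v.split("."))

required = "4.39.0"
installed = "4.40.1"  # placeholder; in practice use transformers.__version__

if not version_tuple(installed) > version_tuple(required):
    raise RuntimeError(f"Starcoder2 needs transformers>{required}, got {installed}")
```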
-
Environment:
accelerate 0.34.2
aiohappyeyeballs 2.4.2
aiohttp 3.10.8
aiosignal 1.3.1
annotated-types 0.7.0
anyio 4.6.0
a…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The error is raised when execution reaches `peft_model = get_peft_model(model, peft_config)`:
ValueError: Target module Qu…
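This ValueError typically means that a name in `LoraConfig.target_modules` matches no module in the loaded model. Below is a minimal sketch of the suffix matching peft performs; the module names are hypothetical stand-ins for what `model.named_modules()` would return:

```python
# Hypothetical names; in real code: [n for n, _ in model.named_modules()]
named_modules = [
    "transformer.encoder.layers.0.self_attention.query_key_value",
    "transformer.encoder.layers.0.mlp.dense_h_to_4h",
]
target_modules = ["query_key_value"]  # what LoraConfig was given

matched = [n for n in named_modules
           if any(n == t or n.endswith("." + t) for t in target_modules)]
# An empty `matched` here is the condition that triggers the ValueError.
print(matched)
```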
-
### Describe the bug
When using `enable_model_cpu_offload` on `StableDiffusionXLPipeline`, each consecutive `__call__` takes more and more RAM. Also, after deleting the pipe, not all memory is freed.
### Re…
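As a workaround sketch (not a fix for the growth itself), fully releasing a pipeline usually needs an explicit garbage-collection pass after `del`; the `FakePipe` class below is a stand-in so the snippet runs without diffusers or a GPU:

```python
import gc

class FakePipe:  # stand-in for StableDiffusionXLPipeline
    pass

pipe = FakePipe()
del pipe      # drop the last strong reference
gc.collect()  # break any reference cycles still holding tensors
# With a real pipeline, also release the CUDA caching allocator afterwards:
# torch.cuda.empty_cache()
```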
-
### Describe the bug
Exception raised: ValueError
Pipeline expected {'tokenizer', 'text_encoder', 'unet', 'vae', 'feature_extractor', 'safety_checker', 'scheduler'}, but only {'tokenizer', 'text_encoder', 'une…
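The error text itself names the missing components; a quick set difference pinpoints them. The `expected` names come from the error message; the `provided` set is hypothetical, since the message is truncated:

```python
expected = {"tokenizer", "text_encoder", "unet", "vae",
            "feature_extractor", "safety_checker", "scheduler"}
# Hypothetical: the components that were actually passed to the pipeline
provided = {"tokenizer", "text_encoder", "unet", "vae", "scheduler"}

missing = expected - provided
print(sorted(missing))  # ['feature_extractor', 'safety_checker']
```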