-
Using the latest SHARK-Platform, I ran into this error when trying to use it in other projects:
![image](https://github.com/user-attachments/assets/0d2e1002-4ae2-4fda-8860-6803099d75bd)
Might be w…
-
Hello!
I downloaded this model and used it with the LLaMA-Factory framework; when deploying it as an API, the following error was reported:
```
[2024-10-01 00:15:35,483] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[INFO|configuration_utils.py:670]…
```
-
Hello,
I finally managed to use a local embedding model (mxbai-embed-large-v1) with the new SentenceTransformerEmbeddingModel class (thanks to the developer team for this work!!! ;-)).
```
sparse_em…
-
Hi. Thanks for the great work. I tried to prepend and just add the following:
```
import transformers
from llava.cca_utils.cca import llamaforcausallm_forward, cca_forward
transformers.models.llama.LlamaFo…
-
### Feature request
This request aims to introduce functionality to delete specific adapter layers integrated with PEFT (Parameter-Efficient Fine-Tuning) within the Hugging Face Transformers librar…
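
For context, PEFT can already drop an entire adapter by name; a minimal sketch of that existing behaviour (the model id, adapter names, and LoRA settings below are placeholders, not part of the request) might look like:
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Small base model, chosen only for illustration
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Attach two named LoRA adapters
model = get_peft_model(
    base_model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]),
    adapter_name="adapter_a",
)
model.add_adapter("adapter_b", LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]))

# Today PEFT removes an adapter as a whole, by name; the request above
# is about finer-grained removal of specific adapter layers.
model.base_model.delete_adapter("adapter_b")
print(list(model.peft_config.keys()))  # remaining adapters, e.g. ["adapter_a"]
```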
-
The code from https://huggingface.co/Salesforce/codegen25-7b-multi_P#causal-sampling-code-autocompletion and https://github.com/salesforce/CodeGen/tree/main/codegen25#sampling does not work currently.…
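
For reference, a sketch of the kind of sampling code those pages show; the exact model id, the `trust_remote_code` flag, and the generation settings here are assumptions rather than a verified reproduction:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Salesforce/codegen25-7b-multi_P"  # id taken from the linked model card

# The CodeGen2.5 tokenizer is custom code shipped in the repo (it depends on tiktoken),
# so loading it requires trusting remote code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("def hello_world():", return_tensors="pt")
sample = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(sample[0], skip_special_tokens=True))
```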
-
Currently, Chonkie uses sentence-transformers for generating embeddings in semantic chunking. While this works well, FastEmbed offers several advantages that could enhance Chonkie's capabilities:
1…
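
For illustration only, a minimal FastEmbed sketch (the model name and call pattern below are assumptions about typical FastEmbed usage, not Chonkie's actual integration):
```python
from fastembed import TextEmbedding

# ONNX-based embedding model; runs without a PyTorch dependency
model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")

chunks = [
    "Chonkie splits documents into semantically coherent chunks.",
    "Embeddings are used to compare neighbouring sentences during semantic chunking.",
]

# embed() returns a generator of numpy arrays, one vector per input text
embeddings = list(model.embed(chunks))
print(len(embeddings), embeddings[0].shape)
```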
-
```
Model loaded -> id("mlx-community/phi-2-hf-4bit-mlx")
Error: chatTemplate("No chat template was specified")
```
For models that have a chat template this is fine, but for those that do not:
…
-
### System Info
not relevant here
### Who can help?
@stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially suppo…
-
@sayakpaul
Hello,
I am looking for support for saving and loading the flux1.schnell model from Black Forest Labs.
Following your code from the "Bonus" [here](https://github.com/huggingface/blog/blo…
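
For what it's worth, a minimal sketch of saving and reloading the pipeline locally with diffusers; the local path, dtype, and prompt are assumptions, and this is not the quantized workflow from the blog post:
```python
import torch
from diffusers import FluxPipeline

# Download the pipeline once and save a local copy
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.save_pretrained("./flux1-schnell-local")

# Later, reload from the local directory instead of the Hub
pipe = FluxPipeline.from_pretrained(
    "./flux1-schnell-local", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # optional, reduces peak VRAM usage

image = pipe(
    "a tiny astronaut hatching from an egg on the moon",
    num_inference_steps=4,   # schnell is tuned for very few steps
    guidance_scale=0.0,
).images[0]
image.save("flux-schnell.png")
```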