-
Currently, huggingface.co is down, and OpenChat cannot be started anymore, even though the model has already been downloaded. Is there a way to start OpenChat without the huggingface check?
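If the model is already in the local cache, one workaround is to put the Hugging Face libraries into offline mode so no network check is attempted. A minimal sketch (the env vars `HF_HUB_OFFLINE` and `TRANSFORMERS_OFFLINE` are real; whether OpenChat honors them depends on how it loads the model):

```python
import os

# Assumption: the model files are already cached locally.
# These env vars tell huggingface_hub / transformers to skip
# all network calls and use only the local cache.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# Per-call alternative: from_pretrained(..., local_files_only=True)
```

Setting the variables before the process starts (in the shell) is safest, since libraries may read them at import time.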
-
Dear guys,
I found that the position embeddings are concatenated with the word embeddings in the embedding layer.
https://github.com/openai/finetune-transformer-lm/blob/bd1cf7d678926041e6d19193ca…
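As I read the linked code, the position rows are stored in the same embedding matrix as the word rows (concatenated along the *row* dimension), but the actual combination at lookup time is a sum, not a concatenation along the feature dimension. A minimal numpy sketch of that scheme (all sizes made up):

```python
import numpy as np

vocab, n_ctx, d = 100, 16, 8
# One matrix holds both tables: rows [0, vocab) are word embeddings,
# rows [vocab, vocab + n_ctx) are position embeddings.
we = np.random.randn(vocab + n_ctx, d)

tokens = np.array([5, 7, 9])
positions = vocab + np.arange(len(tokens))  # position ids offset past the vocab

# The gathered word and position rows are summed per token:
h = we[tokens] + we[positions]
print(h.shape)  # (3, 8)
```

So each hidden vector stays `d`-dimensional; only the storage is "concatenated".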
-
- https://github.com/huggingface/transformers/issues/13213
-
Hello,
Would you like to support multimodal LLMs (MLLMs) such as LLaVA?
```[tasklist]
### Tasks
```
-
Hi. I am getting different embeddings when encoding the same text two ways:
1. Python, sentence encoder
```
from sentence_transformers import SentenceTransformer

model_standard = SentenceTransformer("all-MiniLM-L6-v2")
print(model_standard.encode("Hello World"))
```
Outpu…
0110G updated 15 hours ago
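A common cause of such mismatches, assuming the second (truncated) measurement used the raw `transformers` model: sentence-transformers applies mean pooling over non-padding tokens plus L2 normalization on top of the token embeddings, while raw model outputs skip both steps. A numpy sketch of that post-processing:

```python
import numpy as np

def mean_pool(token_embs, attention_mask):
    """Sentence-Transformers-style mean pooling over non-padding tokens,
    followed by L2 normalization. token_embs: (batch, seq, dim),
    attention_mask: (batch, seq) of 0/1."""
    mask = attention_mask[:, :, None].astype(float)
    summed = (token_embs * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)  # avoid divide-by-zero
    sent = summed / counts
    return sent / np.linalg.norm(sent, axis=1, keepdims=True)
```

Applying this to the last hidden state of the raw model usually reproduces the sentence-encoder output for this model family.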
-
Hi, thanks for the interesting project!
I created a Gemma 7B-based model, [webbigdata/C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter).
This model is in Hugging Face transformers format and …
-
### System Info
- `transformers` version: 4.42.2
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.14
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.2
- Accelerate ver…
-
[X] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
I have a locally hosted LLM which I am intending to use as a jud…
-
InstructLab 0.13 supports hardware acceleration for Apple Silicon (via `mlx`) and CUDA-like GPUs (NVIDIA CUDA and AMD ROCm via `torch.cuda`). I would like to add support for Intel Gaudi 2 hardware and…
tiran updated 2 months ago
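One starting point for adding another backend is probing for the relevant packages at startup rather than hard-coding a device. A minimal sketch (the package names are the real ones — `habana_frameworks` for Gaudi, `mlx` for Apple Silicon — but the priority order and the `"gaudi"` label are assumptions, not InstructLab's actual mechanism):

```python
import importlib.util

def detect_backend() -> str:
    """Pick an acceleration backend by checking which packages
    are importable, without importing any of them."""
    if importlib.util.find_spec("habana_frameworks") is not None:
        return "gaudi"  # Intel Gaudi via habana_frameworks.torch
    if importlib.util.find_spec("mlx") is not None:
        return "mlx"    # Apple Silicon
    if importlib.util.find_spec("torch") is not None:
        return "torch"  # NVIDIA CUDA and AMD ROCm both surface as torch.cuda
    return "cpu"
```

`find_spec` avoids the cost (and possible hard failures) of actually importing a heavyweight framework just to test for it.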
-
I would like to be able to load a LED model in Hugging Face Transformers via e.g.
```
led = LEDForConditionalGeneration.from_pretrained('PATH/longformer-encdec-large-16384', gradient_checkpointing=True, use_cac…
```