-
I am trying to build `retry`, and I ended up with a test failure:
```
policy transformers
always produces positive delay with positive constants (no rollover): FAIL (0.69s)
…
-
I am trying to convert my local model to CoreML and encountered an error after converting the TextEncoder:
```
ERROR - converting 'scaled_dot_product_attention' op (located at: 'text_encoder/text_…
-
After preprocessing the PG19 data and starting training, I keep running into a problem:
[WARNING|logging.py:329] 2024-05-14 16:24:22,784 >> LlamaModel is using LlamaSdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_…
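The warning reflects how transformers dispatches attention: `torch.nn.functional.scaled_dot_product_attention` is a fused kernel that never materializes the attention-weight matrix, so a request for `output_attentions` has to fall back to the manual ("eager") path. A minimal sketch of that dispatch decision, assuming an illustrative function name rather than transformers' internals:

```python
def pick_attn_implementation(output_attentions: bool,
                             sdpa_available: bool = True) -> str:
    """Choose between the fused SDPA kernel and manual attention.

    SDPA computes softmax(QK^T / sqrt(d)) @ V in one fused kernel and
    never materializes the attention-weight matrix, so any request for
    the weights forces the manual ("eager") implementation.
    """
    if output_attentions or not sdpa_available:
        return "eager"  # manual path: attention weights can be returned
    return "sdpa"       # fused path: faster, but no weights
```

If the warning is a problem, recent transformers versions accept `attn_implementation="eager"` in `from_pretrained`, which requests the manual path up front instead of falling back at runtime.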
-
### System Info
- transformers version: 4.43.3
- Platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
- Python version: 3.11.7
- Huggingface_hub version: 0.24.5
- Safetensors version: 0.4.3…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
At the moment, importing llama-index-postprocessor-colbert-rerank requires torch and its nv…
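A common way to keep a package importable when a heavy dependency such as torch is absent is to defer the import until the feature is actually used. A minimal sketch of that pattern, with a hypothetical helper name (this is not llama-index's actual code):

```python
import importlib


def optional_import(name: str, hint: str = ""):
    """Import a module lazily, raising a descriptive error if it is missing.

    Deferring heavy optional dependencies (torch and its CUDA wheels,
    for example) to first use keeps the top-level package import cheap
    and lets torch-free environments still load everything else.
    """
    try:
        return importlib.import_module(name)
    except ImportError as exc:
        msg = f"{name!r} is required for this feature"
        if hint:
            msg += f"; {hint}"
        raise ImportError(msg) from exc
```

With this pattern, a reranker would call something like `torch = optional_import("torch", "pip install torch")` inside its method instead of importing torch at module top level.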
-
### System Info
- `transformers` version: 4.44.2
- Platform: Linux-5.19.0-0_fbk12_hardened_11583_g0bef9520ca2b-x86_64-with-glibc2.34
- Python version: 3.10.14
- Huggingface_hub version: 0.24.6
- …
-
pip install -r requirements.txt
Collecting accelerate==0.24.1 (from -r requirements.txt (line 1))
Downloading accelerate-0.24.1-py3-none-any.whl.metadata (18 kB)
Collecting aiohttp==3.9.0 (fr…
-
bash scripts/forward.sh in Llama-2-7b-chat-hf
Loading checkpoint shards: 0%| | 0/2 [00:00
-
I have an obfuscated jar. I've been able to successfully deobfuscate a lot of it, but I am running into an issue: when I run the deobfuscator without any transformers, the jar still stops working. I've …
-
**The bug**
Could you add support, or provide some guidance (pun intended) so I can add it myself, for the T5 family of models?
**To Reproduce**
```python
import guidance
model_id = 'google/flan-t5-…