-
### Your current environment
```text
The output of `python collect_env.py`
```
### 🐛 Describe the bug
My expectation is that the model should properly load the language portion of the model int…
-
### System Info
- `transformers` version: 4.32.0
- Platform: Linux-5.19.0-38-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
…
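Version lists like the one above are typically produced by a helper script. A minimal, self-contained sketch of how such a report can be assembled (the package names passed in are illustrative examples, not the exact set the official environment collector queries):

```python
# Sketch: build a "System Info"-style report from installed package metadata.
import platform
from importlib import metadata


def collect_system_info(packages):
    """Return a markdown-style list of platform and package versions."""
    lines = [
        f"- Platform: {platform.platform()}",
        f"- Python version: {platform.python_version()}",
    ]
    for name in packages:
        try:
            version = metadata.version(name)
        except metadata.PackageNotFoundError:
            # Report missing packages instead of crashing.
            version = "not installed"
        lines.append(f"- {name} version: {version}")
    return "\n".join(lines)


print(collect_system_info(["transformers", "huggingface_hub", "safetensors"]))
```

Running this on the reporter's machine would reproduce the list above, modulo the exact label formatting.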
-
### System Info
```shell
Platform:
- Platform: Linux-5.15.0-1056-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
Python packages:
- `optimum-neuron` version: 0.0.23
- `neuron-sdk` …
-
### Describe the bug
Connecting to OpenLLM does not appear to be the problem; actually using it from LangChain is.
### To reproduce
```python
# code
from langchain_community.llms import OpenLLM
print…
```
-
Mozilla has announced llamafile, a new file format similar to a Modelfile but compiled into a single executable. Are there any plans to support it?
https://github.com/Mozilla-Ocho/llamafile
-
I am facing an error when attempting to use multiple GPUs with a FastAPI backend. The error arises when integrating the multi-GPU code into the FastAPI backend API. Interestingly, the …
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
-
### Software environment
```Markdown
- paddlepaddle:
- paddlepaddle-gpu: 0.0.0.post120
- paddlenlp: 2.7.1.post0
```
### Duplicate issue
- [X] I have searched the existing issues
### Error description
```Markdown
Attempting to fine-tune PaddleN… using LoRA
-
### Reminder
- [X] I have read the README and searched the existing issues.
### Reproduction
```shell
#!/bin/bash
pip install "transformers>=4.39.1"
pip install "accelerate>=0.28.0"
pip install "bitsan…
```
-
### What happened + What you expected to happen
example.py
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
…