-
This is my prompt:
`"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n ### Instruction:…
-
### Checklist
- [X] 1. I have searched related issues but could not find the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related issue y…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorc…
-
### Describe the issue
I have a `pyproject.toml` containing this line
```toml
torch = { url = "https://download.pytorch.org/whl/cu118/torch-2.2.1%2Bcu118-cp311-cp311-linux_x86_64.whl" }
``…
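Not part of the original report, but a quick sanity check of whether the resolved environment actually matches the pinned wheel (assuming the install succeeded):
```python
# Hypothetical check: confirm the installed torch is the cu118 build
# that the pyproject URL above pins.
import torch

print(torch.__version__)   # expected: 2.2.1+cu118
print(torch.version.cuda)  # expected: 11.8
```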
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch…
-
Got an unexpected error while running an example in a VS Code Jupyter notebook (Python 3.10, headless remote Ubuntu 22.04 server). I tried recreating the environment, but that did not help.
Code:
```pytho…
-
### Bug Explanation
`import ivy` hangs.
I managed to nail down the issue to [this](https://github.com/unifyai/ivy/blob/72b5cf227364957b8512156b0f5ecc7df82278a7/ivy/__init__.py#L735) line.
Removin…
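One way to confirm exactly where the import is stuck (a diagnostic sketch, not from the original report) is to dump thread stacks while the import hangs:
```python
# Diagnostic sketch: print Python stack traces while `import ivy` hangs,
# to confirm which line the import is blocked on.
import faulthandler

# Dump all thread stacks after 10 seconds, repeating, so a hang during the
# import below shows exactly where execution is stuck.
faulthandler.dump_traceback_later(10, repeat=True)

import ivy  # hangs here per the report

faulthandler.cancel_dump_traceback_later()
```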
-
### Your current environment
The output of `python collect_env.py`
```text
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
…
-
### Describe the bug
When I try to serve Llama 3.1 8B (4-bit) with `openllm`, it reports "This model's maximum context length is 2048 tokens".
On https://huggingface.co/meta-llama/Meta-Llama-3.1-8B,…
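A quick way to cross-check the model's declared context window (a hedged sketch; requires `transformers` and access to the gated repo) is to read it from the model config rather than trusting the serving layer's default:
```python
# Read the declared context window from the model config on the Hub.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
print(cfg.max_position_embeddings)  # Llama 3.1 advertises 131072
```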
-
### 🐛 Describe the bug
I am having trouble with `torch.compile`: it works with `backend='eager'` but fails when I use `backend='aot_eager'`.
Stacktrace of the error https://gist.github.c…
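A minimal repro sketch along these lines (the function here is a hypothetical toy; the actual failing code is in the linked gist):
```python
# Run the same function under both backends to isolate whether
# `aot_eager` alone triggers the failure.
import torch


def f(x):
    return torch.nn.functional.relu(x) * 2


x = torch.randn(8)

eager_fn = torch.compile(f, backend="eager")
print(eager_fn(x))  # works per the report

aot_fn = torch.compile(f, backend="aot_eager")
print(aot_fn(x))  # fails per the report
```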