-
# Issue Metrics
| Metric | Average | Median | 90th percentile |
| --- | ---: | ---: | ---: |
| Time to first response | 1 day, 9:01:23 | 0:20:32 | 2 days, 19:33:15 |
| Time to close | 3 days, 9:04:57 |…
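Metrics like these can be computed directly from a list of response durations. A minimal sketch using Python's standard library (the sample durations here are hypothetical, not the issue data behind the table):

```python
import math
import statistics
from datetime import timedelta

# Hypothetical sample of time-to-first-response durations
durations = [
    timedelta(minutes=12),
    timedelta(minutes=20, seconds=32),
    timedelta(hours=3),
    timedelta(days=1, hours=2),
    timedelta(days=2, hours=20),
]

# Average: timedeltas support summation and division
average = sum(durations, timedelta()) / len(durations)

# Median: timedeltas are orderable, so statistics.median works directly
median = statistics.median(durations)

# 90th percentile via the nearest-rank method on the sorted sample
ranked = sorted(durations)
p90 = ranked[math.ceil(0.9 * len(ranked)) - 1]
```

The nearest-rank percentile is one of several common definitions; interpolating methods (e.g. `numpy.percentile`) can give slightly different values on small samples.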
-
- [ ] Create philosophical shorts for why LLM may actually "understand"
- [ ] Create a weekly target
- [ ] Reflect on how I would trickle from year to daily vision
- [ ] Create gigs on fastwork
- [ ] …
-
### Your current environment
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Rocky Linux 8.8 (Green Obsidian) (x86_64)
G…
-
System:
```
Ubuntu 22.04 LTS
Intel i5-12600k
32GB DDR4
AMD Radeon RX 6650 XT
```
Manually installed PyTorch 2.0.1 for ROCm, then installed the requirements from requirements.txt.
webui boots up …
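For reference, a ROCm build of PyTorch 2.0.1 is typically installed from the ROCm wheel index rather than the default CUDA index; the exact ROCm version suffix below is an assumption and depends on the installed ROCm stack:

```shell
# Hypothetical install command; the rocm5.4.2 suffix must match your ROCm setup
pip install torch==2.0.1 torchvision==0.15.2 \
    --index-url https://download.pytorch.org/whl/rocm5.4.2
```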
-
### Your current environment
```text
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86…
-
### Describe the issue as clearly as possible:
The [Modal example](https://outlines-dev.github.io/outlines/cookbook/deploy-using-modal/) fails if run by a user who has a Modal account but does not ha…
-
### Your current environment
```text
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC ve…
XkunW updated 5 months ago
-
llama.cpp 25/02/2024 `git pull` ~[b2254]
Windows 10 (latest, fully updated), i7-4770, 16GB RAM
Radeon VII, 16GB VRAM
I'm compiling and running under miniconda.
I can build and run the Vulkan version fine.
H…
-
## 🐛 Bug
I'm facing an issue serving the Mistral-7B-Instruct-v0.3 model via `mlc_llm serve`. I get the error below when performing the model serve with the `python -m mlc_llm compile ...` command:
INFO engine_…
-
### Contact Details
_No response_
### What happened?
I tried to prompt the mistralai/Mistral-7B-Instruct-v0.2 model using an Anyscale virtual key (without adding the model in my Anyscale account). It show…