-
### Contact Details
georgelpreput@mailbox.org
### What happened?
Tried to adapt the command for running LLaVA to work with Llama 3.2 (which supposedly also has vision), but couldn't get it to work. Fro…
-
The `std testing` module went from deprecated to removed in nushell/nushell#11331, as of `0.90.0`. Running tests now requires [nupm](https://github.com/nushell/nupm#test_tube-running-a-test-suite-toc).
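For anyone migrating, a minimal sketch of the replacement workflow, assuming nupm is cloned locally and loaded as a Nushell module (the exact test-discovery rules are in the nupm README linked above):
```
# Load nupm as a module (path depends on where you cloned it)
use nupm

# From the package root, run the package's test suite
nupm test
```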
-
**Describe the bug**
```
⠙ 0.081 s Starting...The application panicked (crashed).
Message: Failed to start llama-server with command Command { std: "/media/mte90/Doh-cker/tabby_x86_64-manyl…
-
Nightly version:
Release candidate (if any):
OS (select one)
- [ ] Windows 11 (online & offline)
- [ ] Ubuntu 24, 22 (online & offline)
- [ ] Mac Silicon OS 14/15 (online & offline)
- [ ] Mac Intel (o…
-
Notes on running LLaMA and other self-hosted LLM models across multiple GPUs
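As a concrete starting point, a minimal sketch of splitting a single model across two GPUs, assuming llama.cpp's llama-server (the model path and split ratios below are placeholders):
```
# Offload all layers to the GPUs and split tensors evenly across two devices
./llama-server \
  -m ./models/model.Q4_K_M.gguf \
  --n-gpu-layers 99 \
  --tensor-split 1,1
```
The same `--n-gpu-layers`/`--tensor-split` pattern applies to the other llama.cpp binaries; an uneven split such as `--tensor-split 3,1` weights more of the model onto the first device.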
-
I managed to build the latest frequensea on Arch Linux x64 without noticeable problems, but it segfaults on startup:
```
> frequensea ../lua/static.lua
OpenGL Renderer: Mesa DRI Intel(R) Ivybridg…
-
Currently, an applied filter does not persist after an edit command is given. Since both the list and edit commands reset the filters, the reset-all-filters button is not very useful.
Steps to replicate: use a comm…
-
Hi, when I use server-parallel I get an error: `updateSlots : failed to decode the batch, n_batch = 1, ret = 1`.
This is the complete log before the error:
llm_load_tensors: using CUDA for GPU accele…
-
By following your BitNet documentation, I was able to get the correct result. However, when I tried the llama-2-7b model, I encountered an issue.
Based on a previous issue #31, …
-
Turns out that #4733 didn't quite work as intended:
```
$ mkdir -p package/nested
$ touch package/__init__.py
$ touch package/nested/__init__.py
$ mypy -p package -p package.nested
package/neste…