TabbyML / tabby

Self-hosted AI coding assistant
https://tabby.tabbyml.com/

AUR package #1246

Open appetrosyan opened 5 months ago

appetrosyan commented 5 months ago

Please describe the feature you want

I want to add an AUR package to install TabbyML on ArchLinux.

Additional context

Installing TabbyML should be straightforward. The Dockerfile contains only fairly basic operations, so in principle there shouldn't be any problem with adding an AUR package that does the same thing.

The user experience improves: the interaction is simpler, and the build process doesn't require knowing what to install; it just installs the right thing.


Please reply with a 👍 if you want this feature.

wsxiaoys commented 5 months ago

One reason we haven't added Tabby to any Linux registry is that installing CUDA/ROCm is inherently a complex operation. As a result, we prefer that users opt for the Docker approach or install Tabby's binary distribution directly.

cc @boxbeam for comments.

boxbeam commented 5 months ago

This should be totally doable. We could have three separate AUR packages: tabby-cpu, tabby-rocm, and tabby-cuda. Potentially add tabby-metal when that support comes along. From there it should be a simple matter of declaring the proper package dependencies and making a PKGBUILD that either downloads the latest tabby binary or compiles it locally. I don't see any reason to have it install from a docker container.
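As a rough illustration of the binary-download route, a minimal PKGBUILD for a hypothetical `tabby-cuda` package might look like the sketch below. The package name, version, release-asset name, and dependency list are all placeholders for illustration; the real values would come from Tabby's release page.

```shell
# Hypothetical PKGBUILD sketch for a tabby-cuda package that installs a
# prebuilt release binary. Version and asset name are placeholders, not
# the project's actual release layout.
pkgname=tabby-cuda
pkgver=0.8.3
pkgrel=1
pkgdesc="Self-hosted AI coding assistant (CUDA build)"
arch=('x86_64')
url="https://tabby.tabbyml.com/"
license=('Apache')
depends=('cuda')            # tabby-cpu / tabby-rocm variants would differ here
conflicts=('tabby-cpu' 'tabby-rocm')
source=("tabby-${pkgver}::https://github.com/TabbyML/tabby/releases/download/v${pkgver}/tabby_x86_64-linux-cuda")
sha256sums=('SKIP')         # a real package would pin the checksum

package() {
  install -Dm755 "${srcdir}/tabby-${pkgver}" "${pkgdir}/usr/bin/tabby"
}
```

Splitting the variants into separate packages with `conflicts=()` entries keeps the CUDA and ROCm dependencies out of the CPU-only install.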

appetrosyan commented 5 months ago

> One reason we haven't added Tabby to any Linux registry is that installing CUDA/ROCm is inherently a complex operation. As a result, we prefer that users opt for the Docker approach or install Tabby's binary distribution directly.

I would argue that, as long as users don't come to this repo to complain, you'd be better off shipping a native package while still recommending Docker.

As for the former, I can take care of that.

appetrosyan commented 5 months ago

One problem I've run into is that, using the freshly compiled code, I can't get a completion to do anything. There's a fundamental difference between how the Docker container behaves and how the bare-metal executable does.

I'd have to figure out how to install the thing on bare metal first and only then produce a package; otherwise it's not a great look.

boxbeam commented 5 months ago

@appetrosyan Could you please tell me what command you used, and what version you have checked out?

appetrosyan commented 5 months ago

Ok, so if I run

```
cargo run --release -p tabby -- serve --model TabbyML/StarCoder-1B
```

on 53df532, neither the Emacs (MELPA) nor the VSCodium (extension store) plugin does anything; even if I trigger completion manually, all it does is wait and time out.

The console looks like this

```
    Finished release [optimized] target(s) in 0.91s
warning: the following packages contain code that will be rejected by a future version of Rust: nom v4.1.1
note: to see what the problems were, use the option `--future-incompat-report`, or run `cargo report future-incompatibilities --id 1`
     Running `target/release/tabby serve --model TabbyML/StarCoder-3B`
2024-01-20T18:15:14.504540Z  INFO tabby::serve: crates/tabby/src/serve.rs:114: Starting server, this might take a few minutes...
2024-01-20T18:15:17.876450Z  INFO tabby::routes: crates/tabby/src/routes/mod.rs:35: Listening at 0.0.0.0:8080
```

On the other hand, I have had success with

```
docker run -it -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby serve --model TabbyML/StarCoder-1B
```

Having looked at the Dockerfile, my guess is that the extensions are out of date.

That said, nothing in the extensions' UI indicates anything of the sort; the only symptom is that nothing happens on-screen.
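One quick way to rule the server in or out before blaming the editor plugins is to probe it directly. This is a sketch: the `/v1/health` route and default port 8080 are taken from the logs above and recent Tabby versions, so adjust as needed.

```shell
# Probe the locally running Tabby server directly; if this hangs or fails,
# the problem is in the server/build, not in the Emacs/VSCodium extensions.
curl -sf --max-time 5 http://localhost:8080/v1/health \
  && echo "server responded" \
  || echo "no response from server"
```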

boxbeam commented 5 months ago

Interesting: running the exact same command on 53df532 works fine for me. I'd like to get to the bottom of why it isn't connecting for you. Could you open a bug report and include as many logs and system details as possible?

navr32 commented 3 months ago

Hi! I'm trying to build and run Tabby on Manjaro (Arch-based), and I would greatly prefer an AUR package. For now I'm building from a git clone of the project, but I hit errors around CUDA detection. The CUDA build ended with:

```
cargo build --features=cuda
```


```
tabbyMl/tabby/target/debug/deps/tabby-9ecbfc57a82f790b" "-Wl,--gc-sections" "-pie" "-Wl,-z,relro,-z,now" "-nodefaultlibs"
  = note: /usr/sbin/ld: cannot find -lculibos: No such file or directory
          /usr/sbin/ld: cannot find -lcudart: No such file or directory
          /usr/sbin/ld: cannot find -lcublas: No such file or directory
          /usr/sbin/ld: cannot find -lcublasLt: No such file or directory
          collect2: error: ld returned 1 exit status

error: could not compile `tabby` (bin "tabby") due to 1 previous error
```
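Those `-lcudart`/`-lcublas` errors usually mean the linker can't see the CUDA libraries. On Arch, the `cuda` package installs under `/opt/cuda`, which is not on the default linker search path, so a possible fix (a sketch, assuming that layout) is:

```shell
# Point the linker and loader at Arch's CUDA install before rebuilding.
sudo pacman -S --needed cuda
export LIBRARY_PATH="/opt/cuda/lib64${LIBRARY_PATH:+:$LIBRARY_PATH}"
export LD_LIBRARY_PATH="/opt/cuda/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
cargo build --features cuda
```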

So I tested the cargo build without CUDA, which succeeded. I then ran `cargo run serve --model TabbyML/StarCoder-1B`; it starts and then crashes:

```
cargo run serve --model TabbyML/StarCoder-1B
    Finished dev [unoptimized + debuginfo] target(s) in 0.47s
     Running `target/debug/tabby serve --model TabbyML/StarCoder-1B`
2024-04-05T17:26:48.033151Z  INFO tabby::serve: crates/tabby/src/serve.rs:123: Starting server, this might take a few minutes...
zsh: illegal hardware instruction (core dumped)  cargo run serve --model TabbyML/StarCoder-1B
```

I think this is because my processor has no AVX instruction-set support. But even without AVX, having built llama.cpp myself with AVX disabled, it performs very well with my RTX 3090 when the model fits in VRAM, giving more than 30 tok/s.
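A quick way to confirm the AVX theory (a Linux-only sketch; the crash above is consistent with the default build emitting AVX instructions) is to check the CPU flags:

```shell
# Check whether this CPU advertises AVX; grep -w matches "avx" as a whole
# word, so it will not be fooled by "avx2" alone.
if grep -qw avx /proc/cpuinfo; then
  echo "CPU reports AVX"
else
  echo "no AVX reported; build with AVX disabled"
fi
```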

So perhaps it wouldn't take much work to get Tabby running well natively on Arch, Manjaro, and so on. Many thanks, have a nice day.

boxbeam commented 3 months ago

To confirm: you have the CUDA SDK installed and the linker still can't find it? I'm also unsure what would cause the compiler to emit instructions invalid for your hardware.