-
I have gotten RoPE scaling working for old GPTQ models, since it is now supported in transformers. In AutoGPTQ there is no way to pass the transformers config as a parameter; it would have to be added.
I can try t…
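For reference, a minimal sketch of what this looks like on the transformers side: RoPE scaling is just a dict on the model config, which can then be handed to `from_pretrained` for a quantized checkpoint. The scaling type and factor here are illustrative values, not ones taken from this issue.

```python
from transformers import LlamaConfig

# RoPE scaling lives on the model config in transformers; "linear" with
# factor 2.0 roughly doubles the usable context length (example values).
config = LlamaConfig(rope_scaling={"type": "linear", "factor": 2.0})
print(config.rope_scaling)
```

In principle the resulting `config` could be passed as `AutoModelForCausalLM.from_pretrained(model_id, config=config)` for a GPTQ checkpoint loaded through transformers — the point of the issue is that AutoGPTQ itself exposes no equivalent parameter.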
-
**Context**
I use the Tabby VSCode extension with a local Tabby server.
Currently, when I start VSCode and the Tabby server is not running, it reminds me of that through the yellow-indicated extension i…
-
### What is the issue?
Hi!
I'm using ollama as a backend for code completion. I'm running OpenWebUI for authentication, which proxies requests to my ollama instance locally. However, after some re…
-
As we outlined [here](https://github.com/tracehubpm/reports-check-action/issues/49), we should present all the hidden information in the `links` and `source` trees in the `text`, so that we can operate wit…
-
Hello! I signed up to download the Code-Llama model from Meta. I received the email with the Unique Custom URL.
**However, when I attempt to download the model, the script throws an error. Can you …
-
https://github.com/OpenDevin/OpenDevin/pull/3196 Issue created by @HenrikBach1
-
```console
(fun) ➜ localpilot git:(main) ✗ python app.py
Running server...
DEBUG:asyncio:Using selector: KqueueSelector
INFO: Started server process [8436]
INFO: Waiting for application start…
```
-
First off: great project, and thank you for sharing!
I reckon a lot of the issues I am reading about here could be ameliorated with a way to use different prompts. I love the idea of this project and …
-
Hi.
I installed it locally on my M1 and it works in the CLI.
But when I click on Llama Coder in the top-right corner (status bar) of VS Code, nothing happens.
Sorry for the question; maybe the answer is obvious.