-
Hi @Akintunde102 @AyoOdumark,
if you're trying to test other LLMs (codellama, wizardcoder, etc.) with GPT-Engineer, we just open-sourced a 1-click proxy to translate OpenAI calls to huggingface, …
-
Hi @liskovich,
if you're trying to test other LLMs (codellama, wizardcoder, etc.) with GPT-Engineer, we just open-sourced a 1-click proxy to translate OpenAI calls to huggingface, anthropic, toget…
-
Is it possible to integrate [StarCoder](https://github.com/bigcode-project/starcoder) as an [LLM Model](https://python.langchain.com/en/latest/modules/models.html) or an [Agent](https://python.langcha…
-
Hello, I'm going to fine-tune on WizardCoder-15B-V1.0. Do I need this code:
```python
if "starcoder" in model_args.model_name_or_path:
    tokenizer.add_special_tokens(
        {
            …
```
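The truncated snippet above follows the usual Hugging Face pattern: pass a mapping of special-token roles to `tokenizer.add_special_tokens`, then resize the model's embedding matrix to the new vocabulary size. A minimal stand-in sketch of that mechanic (this is a stub to show the flow, not the real `transformers` API, and the StarCoder-style token strings here are assumptions):

```python
# Illustrative stub of the add-special-tokens flow; the real implementation
# lives in transformers.PreTrainedTokenizer / PreTrainedModel.
class StubTokenizer:
    def __init__(self, vocab):
        self.vocab = dict(vocab)

    def add_special_tokens(self, mapping):
        """Register any unseen special tokens; return how many were added."""
        added = 0
        for token in mapping.values():
            if token not in self.vocab:
                self.vocab[token] = len(self.vocab)
                added += 1
        return added

tok = StubTokenizer({"def": 0, "return": 1})
# StarCoder-style special tokens (token names are an assumption here):
num_added = tok.add_special_tokens(
    {
        "eos_token": "<|endoftext|>",
        "pad_token": "<fim_pad>",
    }
)
# num_added == 2; after this, the real code resizes the model's embedding
# matrix to len(tok.vocab) so the new token ids have rows to look up.
```

The guard `if "starcoder" in model_args.model_name_or_path:` exists because different base checkpoints ship with different special tokens already defined, so the extra tokens are only added for StarCoder-family models.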
-
I want to report what I believe is a bug in tabby-agent.
tabby was started via `tabby serve --model StarCoder-1B --no-webserver`
For testing an LSP client integration, I enabled the regular `text…
-
Hello, I have a few questions about OctoCoder.
For this part in the paper:
> For instruction tuning our models, we select 5,000 random samples from COMMITPACKFT across the 6 programming languages…
-
What was the maximum sequence length used for fine-tuning StarCoder to produce StarChat Alpha? Was it done on a single GPU or on multiple GPUs? Please provide insights on the memory requirement…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
I'm attempting to run the StarCoder model on a Mac M2 with 32 GB of memory using the Transformers library in a CPU environment. Despite setting `load_in_8bit=True`, I'm encountering an error during execu…
-
If the user specifies an invalid model, or no model at all, Tabby could display the models that are already downloaded. If no models are downloaded, it could suggest a simple default like StarCoder-1B.
A similar f…