-
**What problem or use case are you trying to solve?**
Currently, OpenDevin works somewhat with the strongest closed LLMs such as GPT-4 or Claude Opus, but we have not confirmed good results with ope…
-
I am trying to run StarCoder locally through Ollama, and I want to get code auto-completion like in the README GIF.
But I keep getting the following error after every debounce: `[LLM] inference api…
V4G4X, updated 5 months ago
-
Hi, can you give some advice on how to run inference on a fine-tuned StarCoder model with this code? Since LoRA fine-tuning changed some of the model's layers, some of the code in starcoder.cpp should be chang…
-
I am exploring the possibility of using StarCoder to generate embeddings for code tokens and would like to know if this is feasible with the current implementation.
### Questions:
1. Is it possib…
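One common way to derive embeddings from a decoder-only model like StarCoder is to mean-pool the final hidden states over non-padding tokens. This is a minimal sketch of just the pooling step, with a random array standing in for the model's hidden-state output (in practice you would obtain it from the model's forward pass); the shapes and mask convention here are illustrative assumptions:

```python
import numpy as np

def mean_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token vectors per sequence, ignoring padding positions."""
    mask = attention_mask[..., None].astype(hidden_states.dtype)  # (batch, seq, 1)
    summed = (hidden_states * mask).sum(axis=1)                   # (batch, dim)
    counts = mask.sum(axis=1).clip(min=1e-9)                      # avoid divide-by-zero
    return summed / counts

# Stand-in for model output: batch of 2, sequence length 4, hidden size 8.
hidden = np.random.rand(2, 4, 8)
mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]])  # 1 = real token, 0 = padding
emb = mean_pool(hidden, mask)
print(emb.shape)  # (2, 8): one embedding vector per sequence
```

The same pooling works token-wise too: skip the averaging and take `hidden_states` rows directly if you want per-token rather than per-sequence embeddings.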
-
Hi!
I'm curious to know some more details about FIM and its effect on the pre-trained model.
Here's a paragraph from the SantaCoder paper:
> FIM for cheap
We observe a minor drop in performance of…
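For context on what FIM looks like at inference time: StarCoder-family models mark the infill regions with special sentinel tokens. A minimal sketch of building a PSM-ordered (prefix-suffix-middle) prompt, assuming the `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` token names from StarCoder's tokenizer (SantaCoder used hyphenated variants of these names):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """PSM ordering: the model is asked to generate the missing middle span."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The model would complete the body between the function header and the call.
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
print(prompt)
```

Generation then runs on this prompt until the model emits its end-of-middle token, and the decoded output is spliced between the prefix and suffix.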
-
When I activate local execution, I get the following error message:
`ValueError: The current "device_map" had weights offloaded to the disk. Please provide an "offload_folder" for them. Alterna…`
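The commonly suggested fix for this error is to pass an `offload_folder` so that weights mapped to disk by `device_map="auto"` have somewhere to live. Sketched here as the keyword arguments you would forward to `from_pretrained` (the model name and folder path are illustrative assumptions):

```python
# Arguments for AutoModelForCausalLM.from_pretrained(...) when accelerate's
# automatic device map offloads some shards to disk.
load_kwargs = {
    "device_map": "auto",         # let accelerate place weights across devices
    "offload_folder": "offload",  # directory for shards that spill to disk
}
# model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", **load_kwargs)
```

The actual load is left commented out because it downloads a multi-gigabyte checkpoint; the point is only which arguments resolve the `ValueError`.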
-
======================
=== EXAMPLE 6 ===
Implement a program to find the common elements in two arrays without using any extra data structures.
You can use Python's built-in set datatyp…
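One way to satisfy the "no extra data structures" constraint (beyond the output list) is to sort both arrays and walk them with two pointers; this is a sketch of that approach, not necessarily the intended reference solution:

```python
def common_elements(a, b):
    """Return sorted elements present in both lists, using only two
    index variables and the output list as extra state."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            if not out or out[-1] != a[i]:  # skip duplicate matches
                out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

print(common_elements([4, 2, 7, 1], [7, 3, 2, 9]))  # [2, 7]
```

Sorting costs O(n log n) but the merge-style scan then needs no set or dict, which is usually the point of the exercise.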
-
**Description:** We need to enhance Cursor IDE by implementing support for local AI models using Ollama, similar to the Continue extension for VS Code. This will enable developers to use AI-powered co…
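For reference, Ollama exposes a local HTTP API (by default on port 11434) that such an integration would talk to. A minimal sketch of the request payload for its `/api/generate` endpoint; the model name and prompt are illustrative assumptions:

```python
import json

def build_ollama_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Payload for Ollama's local /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

payload = build_ollama_request("starcoder", "def fib(n):")
print(json.dumps(payload))
# A real client would POST this JSON to http://localhost:11434/api/generate
```

With `stream` left false the endpoint returns one JSON object with the full completion, which is the simpler shape for a first integration pass.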
-
Hi. Thank you for your great work. Your approach is helpful to me. I am trying to fine-tune StarCoder to improve its performance on C code, so your StarCoder fine-tuning cost figures are helpful to me. Could you …
-
Currently, building the StarCoder GPT variant fails if SmoothQuant is applied.
Is support planned? Any advice on how to build it?
ttim, updated 12 months ago