johndpope opened 1 year ago
I'm also interested in this. I have CodeLlama running locally with LocalAI.
If only Cursor had a way to change the baseURL, as in this Node.js example:
```js
import OpenAI from "openai"; // v4

const content = `
Please write JavaScript code that creates
a scatter plot with D3.js.
Use \`const\` and \`let\` instead of \`var\`.
Use the arrow function syntax.

## JavaScript code
`;

const openai = new OpenAI({
  apiKey: "",
  baseURL: "http://localhost:8080/v1"
});

const stream = await openai.chat.completions.create({
  // model: "llama-2-7b-chat.ggmlv3.q4_0.bin",
  model: "codellama-7b.Q4_K_M.gguf",
  messages: [{ role: "user", content }],
  stream: true,
});

for await (const part of stream) {
  // The final chunk's delta may have no content, so fall back to "".
  process.stdout.write(part.choices[0].delta.content ?? "");
}
```
doesn't help, but it's interesting digging around GitHub search results: https://github.com/search?q=import+OpenAI+codellama+34B&type=code
Maybe this could help: https://github.com/xusenlinzy/api-for-open-llm
(using Google Translate): https://github.com/xusenlinzy/api-for-open-llm/tree/master/examples/code-llama
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

llm = ChatOpenAI(
    model_name="code-llama",
    openai_api_base="http://192.168.0.53:7891/v1",
    openai_api_key="xxx",
)
```
When will we be getting support for this? It should be really easy: we should be able to plug it into the Azure API. However, there is an error saying "invalid credentials. Please try again." I noticed the issue occurs when using http rather than https; I still haven't gotten https to work locally.
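If the blocker really is the URL scheme, the check being tripped is presumably something like the sketch below (hypothetical: Cursor's actual validation isn't public, and `is_allowed_base_url` is an invented name for illustration):

```python
from urllib.parse import urlparse

def is_allowed_base_url(url: str) -> bool:
    """Hypothetical client-side check: accept only https endpoints."""
    return urlparse(url).scheme == "https"

# A plain-http local endpoint would be rejected by such a check,
# even though the server itself is perfectly reachable:
print(is_allowed_base_url("http://localhost:8080/v1"))   # False
print(is_allowed_base_url("https://api.openai.com/v1"))  # True
```

A check like this is why putting the local server behind an https reverse proxy (or a self-signed cert) sometimes works where plain http does not.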
Looks like only open-interpreter (https://github.com/KillianLucas/open-interpreter/) can handle this for now.
Catchy headline: https://www.youtube.com/watch?v=SqnXUHwIa3c
UPDATE: 4 minutes in, the guy says he can't get it working locally...
This was discussed in a thread on our Discord server.
Regarding using other AI models:
Regarding Localhost:
Just allow the user to change the URL and put up a big warning. There is NO harm in that, and it's easy to implement.
Your reasons don't make sense. Always give the POWER TO THE USER, then polish it later.
Would love to see / support this. I know of a number of devs that would pay for self-hosted code completion inside their editor but can't use Cursor with OpenAI as the backend.
The new GitHub Copilot will allow self-hosting of models and allows chat on selections of code.
get fucked
We've been asking for the simplest feature, but you keep trying to monetize while claiming to be community-based.
I fully support this. Apparently, someone needs to create an open-source code editor similar to Cursor. These restrictions hinder progress.
New (potential) Cursor user here 👋 ,
After installing Cursor and importing some of my most used VSCode plugins - the very first thing I went to change was to set Cursor to use either my Ollama or TabbyAPI LLM server.
I was quite surprised to see there weren't native options for Ollama, and the only OpenAI-compatible option was to override the base URL, which feels a bit all-or-nothing and doesn't auto-populate the model list with your available models.
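Auto-populating that list should be a single GET against the server's model-listing route, which OpenAI-compatible servers such as Ollama expose. A minimal, stdlib-only sketch of building that URL (the port shown is Ollama's default; adjust for your own server):

```python
from urllib.parse import urljoin

def models_url(base_url: str) -> str:
    """Build the model-listing URL for an OpenAI-compatible endpoint."""
    # A trailing slash makes urljoin append to the path instead of replacing it.
    if not base_url.endswith("/"):
        base_url += "/"
    return urljoin(base_url, "models")

# Ollama serves its OpenAI-compatible API on port 11434 by default.
# A GET to this URL returns JSON like {"object": "list", "data": [{"id": ...}, ...]}.
print(models_url("http://localhost:11434/v1"))  # http://localhost:11434/v1/models
```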
There are really big advantages in being able to easily use local LLMs, especially if you're already running them for multiple other tasks:
For example, DeepSeek-Coder-V2 and Codestral are two models that are really fantastic; between those two I get better-quality multi-shot code generation than I get from GPT-4o more than 50% of the time.
In VSCode, continue.dev and Tabby have pretty decent integration with both Ollama and OpenAI-compatible API endpoints as first-class citizens, but their extension features are not as nicely integrated into the IDE as Cursor's.
By comparison, when I added my local Ollama OpenAI-compatible API endpoint to Cursor and manually added the models I mostly use, Cursor just errors with:
What I don't understand is that the community isn't even asking for the whole software to be open-sourced (regardless of whether that would make sense); we're just asking to use our own hosted AI so we don't have to pay, since some people can't or don't want to pay for it. You could even show a big popup banner stating that you don't guarantee a self-hosted AI will have the same performance or quality. Also, for some companies it's a BIG no-go to use servers from a company they can't oversee, so enabling this feature would even open the application up to new customers. All in all, it makes no sense to disallow it, not even from a monetary standpoint. You could even use specific licenses to forbid company use without a license while still letting companies use their own local AI.
Have you tried aider yet? I love Cursor and would be happy to pay for it if I had the cash, but aider is just as good and also open source. It allows you to configure your own LLMs, including local and free options. It also has git integration (auto-commit, undo), which I haven't seen how to do easily in Cursor yet.
Whoah TIL https://github.com/paul-gauthier/aider
If you want an IDE similar to Cursor there is Zed which supports multiple different providers including Ollama. See config https://zed.dev/docs/assistant/configuration.
https://brandolosaria.medium.com/setting-up-metaais-code-llama-34b-instruct-model-fc009aa937f6
https://github.com/go-skynet/LocalAI