Pythagora-io / gpt-pilot

The first real AI developer

ollama #404

Closed MikeyBeez closed 3 months ago

MikeyBeez commented 7 months ago

Is there a way to connect to a model this way? This code is so easy, but I don't know where to insert it.

from langchain.llms import Ollama

ollama = Ollama(base_url='http://localhost:11434', model="llama2")
response = ollama(prompt)  # prompt is the text you want the model to complete

tchr-dev commented 7 months ago

I second this

If you'd like, you can try to do it here:

def stream_gpt_completion(data, req_type, project):

This function is in gpt-pilot/pilot/utils/llm_connection.py.

You should also update the .env file and the constants in llm.py so they point at the correct model/version.
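
If you want to experiment before this is officially supported, here is a minimal sketch (not gpt-pilot's actual code) of the kind of helper you could call from stream_gpt_completion instead of the OpenAI request. The function name stream_ollama_completion, the prompt argument, and the llama2 default are illustrative assumptions; you would still need to adapt it to the prompt format gpt-pilot builds and update the .env / llm.py settings mentioned above.

import json
import requests

# Default endpoint of a locally running Ollama server (illustrative assumption).
OLLAMA_URL = "http://localhost:11434/api/generate"

def stream_ollama_completion(prompt, model="llama2"):
    """Stream a completion from a local Ollama server and return the full text."""
    payload = {"model": model, "prompt": prompt, "stream": True}
    chunks = []
    with requests.post(OLLAMA_URL, json=payload, stream=True, timeout=600) as resp:
        resp.raise_for_status()
        # Ollama streams newline-delimited JSON objects with a "response" field.
        for line in resp.iter_lines():
            if not line:
                continue
            data = json.loads(line)
            token = data.get("response", "")
            print(token, end="", flush=True)  # echo tokens as they arrive
            chunks.append(token)
            if data.get("done"):
                break
    return "".join(chunks)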

senko commented 7 months ago

While local LLMs aren't exposed in the VSCode extension at the moment, there's a way to use them with a server such as LM Studio or LiteLLM. Here's a short tutorial: Using GPT Pilot with local LLMs
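
For anyone wondering why a generic server like LM Studio or LiteLLM works here: both expose an OpenAI-compatible HTTP API, so an OpenAI-style client only needs a different base URL to talk to them. A rough sketch of that idea (not gpt-pilot code; port 1234 is LM Studio's default, and the model name is whatever your local server advertises):

from openai import OpenAI

# Point the client at the local server instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="not-needed-for-local",  # local servers typically ignore the key
)

reply = client.chat.completions.create(
    model="local-model",  # use the model name your local server reports
    messages=[{"role": "user", "content": "Say hello"}],
)
print(reply.choices[0].message.content)

gpt-pilot itself picks up the endpoint and model from its .env configuration, as described in the tutorial linked above.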

ozenhaluk commented 7 months ago

Not trying to be a buzzkill :), but in my experience the results with local LLMs (quite understandably) vary from model to model. For instance, Mistral seemed to have trouble proceeding to the next steps, probably because of its context window, and CodeLlama (as you'd expect) doesn't seem to follow the instructions well.

aidanbha79 commented 6 months ago

Hi, when using LM Studio as a backend I can't seem to get past the Project Manager prompts: it keeps asking questions but never starts the architecture or development phases, only the User Stories. Any idea how this can be achieved? Also, what is the best model to use?

techjeylabs commented 3 months ago

Closing this due to inactivity.