MikeyBeez closed this 3 months ago
I second this
If you'd like you can try to do it here:
def stream_gpt_completion(data, req_type, project)
This function is in `gpt-pilot/pilot/utils/llm_connection.py`.
You should also update the `.env` file and the constants in `llm.py` so they point at the correct model and endpoint.
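For illustration, a sketch of what the `.env` changes might look like when pointing at a local server. The variable names below are assumptions, not necessarily gpt-pilot's actual keys, so check your version's `.env.example`:

```shell
# Point gpt-pilot at a local OpenAI-compatible server instead of api.openai.com.
# Variable names are assumptions -- verify against your version's .env.example.
OPENAI_ENDPOINT=http://localhost:1234/v1/chat/completions
OPENAI_API_KEY=not-needed-for-local
MODEL_NAME=local-model
```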
While not exposed in the VSCode extension at the moment, there's a way to use local LLMs with a server such as LM Studio or LiteLLM, here's a short tutorial: Using GPT Pilot with local LLMs
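For anyone curious what "use a server such as LM Studio" means in practice: LM Studio exposes an OpenAI-compatible HTTP endpoint (by default on `localhost:1234`), so any code that can POST an OpenAI-style chat payload can talk to it. A minimal sketch using only the standard library; the function names and the `local-model` placeholder are my own, not part of gpt-pilot:

```python
import json
import urllib.request

def build_request(prompt, model="local-model",
                  base_url="http://localhost:1234/v1/chat/completions"):
    # Assemble an OpenAI-style chat completion request for a local server.
    # Nothing is sent until send() is called.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        base_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def send(req):
    # Extract the assistant's reply from an OpenAI-style response body.
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["choices"][0]["message"]["content"]
```

With LM Studio's server running you would call `send(build_request("Hello"))`; the same shape should work for LiteLLM's proxy, since it also speaks the OpenAI API.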
In my experience (not trying to be a buzzkill :) ), for people trying local LLMs: the results, quite normally, vary between models. For instance, Mistral seemed to have trouble proceeding to the next steps, probably because of its context window, and CodeLlama seems (as you might expect) to have trouble following the instructions well.
Hi, when using LM Studio as a backend I can't seem to get past the Project Manager prompts: it keeps asking questions but never starts the architecture or development phases, only the User Stories. Any idea how to get past this? Also, what is the best model to use?
Closing this after inactivity.
Is there a way to connect to a model this way? This code is so easy, but I don't know where to insert it.
```python
from langchain.llms import Ollama

ollama = Ollama(base_url='http://localhost:11434', model="llama2")
response = ollama(prompt)
```
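Roughly, that snippet would go inside `stream_gpt_completion` in `pilot/utils/llm_connection.py`, replacing the OpenAI request. If you'd rather not pull in langchain, Ollama also has its own REST API at `/api/generate`. A sketch under stated assumptions: the helper names and the message-flattening below are mine, not gpt-pilot's actual code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def flatten_messages(messages):
    # gpt-pilot works with OpenAI-style chat messages; Ollama's /api/generate
    # takes a single prompt string, so join them into one text block.
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages)

def ollama_completion(messages, model="llama2"):
    payload = {
        "model": model,
        "prompt": flatten_messages(messages),
        "stream": False,
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns a single JSON object whose
        # "response" field holds the generated text.
        return json.loads(resp.read().decode("utf-8"))["response"]
```

You would then call `ollama_completion(...)` where `stream_gpt_completion` currently sends its request, and adapt its return value to whatever shape the rest of the function expects.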