Closed by DarpanGanatra 1 year ago
Hey @DarpanGanatra! You can use the Hugging Face Pipelines Prompt Driver to run inference against local models. Are you looking for a Prompt Driver for LM Studio specifically?
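For reference, a minimal sketch of wiring that driver into an Agent could look like this (the constructor arguments and the model id are illustrative and may differ across griptape versions):

from griptape.structures import Agent
from griptape.drivers import HuggingFacePipelinePromptDriver

# Run inference locally through a Hugging Face pipeline; the model id is only
# an example, and the exact parameters depend on your griptape version.
agent = Agent(
    prompt_driver=HuggingFacePipelinePromptDriver(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0"),
)
agent.run("Hello!")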
I see, I didn't look into that before I raised this, so I'll close. Thank you @collindutter!
Sounds good! Feel free to re-open if you face any issues with that driver 🙂
I know it is closed, but if anyone is still interested, griptape works with LM Studio.
Example:
import logging
import os

from griptape.structures import Agent

# Point the OpenAI-compatible client at LM Studio's local server; LM Studio
# does not validate the API key, so any placeholder value works.
os.environ["OPENAI_API_KEY"] = "NULL"
os.environ["OPENAI_BASE_URL"] = "http://localhost:1234/v1"

# Set the logging level to ERROR; this gives a much cleaner output and only
# displays logs of the highest priority.
agent = Agent(
    logger_level=logging.ERROR,
)

# Pass the prompt through the agent and return its output as a string.
def chat(prompt, agent):
    agent_response = agent.run(prompt)
    return agent_response.output_task.output.value
Then you can just call the function, for example (I use Streamlit):

import streamlit as st

prompt = st.chat_input("Question")
if prompt:
    st.chat_message("user").write(prompt)
    with st.spinner("Thinking"):
        response = chat(prompt=prompt, agent=agent)
    st.write(response)
Without Streamlit:

prompt = input("Question: ")
if prompt:
    answer = chat(prompt=prompt, agent=agent)
    print(answer)
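If you prefer not to set environment variables, you should also be able to point griptape's OpenAI chat driver at LM Studio directly (a sketch; the parameter names assume a reasonably recent griptape version, and the model id is whatever LM Studio reports for the loaded model):

from griptape.drivers import OpenAiChatPromptDriver
from griptape.structures import Agent

# LM Studio exposes an OpenAI-compatible endpoint, so the OpenAI driver can
# be reused by overriding base_url; LM Studio ignores the API key value.
agent = Agent(
    prompt_driver=OpenAiChatPromptDriver(
        base_url="http://localhost:1234/v1",
        api_key="NULL",
        model="local-model",  # illustrative; use the id LM Studio shows
    ),
)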
Hope it helps
I would like the ability to use local models as "Prompt Drivers". A simple example is using LM Studio's Local Inference Server option, where I can have a model deployed to an endpoint and call it (currently via LangChain) by altering the base_path of an OpenAI call. Here's the example client request provided:
Describe alternatives you've considered
I've attempted to do this with the method described above:
But I get:
Please let me know if any other information is needed.