filip-michalsky / SalesGPT

Context-aware AI Sales Agent to automate sales outreach.
https://salesgpt.vercel.app
MIT License

Integration with Ollama #54

Open dantodor opened 1 year ago

dantodor commented 1 year ago

Hello! I'm trying to use SalesGPT with a locally served model via Ollama. I tested the LiteLLM part on its own, and it works:

from litellm import completion

response = completion(
    model="orca-mini",
    messages=[{"content": "respond in 20 words. who are you?", "role": "user"}],
    api_base="http://localhost:11434",
    custom_llm_provider="ollama",
)
print(response)
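
LiteLLM also accepts the provider as a prefix on the model name (see https://docs.litellm.ai/docs/providers), in which case custom_llm_provider isn't needed; the same call, reusing the import above, would be:

# equivalent form: the "ollama/" prefix replaces custom_llm_provider="ollama"
response = completion(
    model="ollama/orca-mini",
    messages=[{"content": "respond in 20 words. who are you?", "role": "user"}],
    api_base="http://localhost:11434",
)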

Now, I modified streaming_generator_example.py like this:

llm = ChatLiteLLM(
    model="orca-mini",
    api_base="http://localhost:11434",
    custom_llm_provider="ollama",
)
....
generator = sales_agent.step(
    return_streaming_generator=True, model_name="orca-mini",
)

By the way, it seems a bit redundant to have to provide the model name at each step instead of setting it once on the llm. Why is it needed at every step?

The result is:

salesgpt.logger 2023-09-14 07:10:55,377 - INFO - Running from_llm: --- 8.654594421386719e-05 seconds ---
salesgpt.logger 2023-09-14 07:10:55,378 - INFO - Running from_llm: --- 0.00012731552124023438 seconds ---
salesgpt.logger 2023-09-14 07:10:55,378 - INFO - Running from_llm: --- 0.0005018711090087891 seconds ---
salesgpt.logger 2023-09-14 07:10:55,378 - INFO - Running seed_agent: --- 5.7220458984375e-06 seconds ---
salesgpt.logger 2023-09-14 07:10:55,378 - INFO - Running _prep_messages: --- 0.0001068115234375 seconds ---
Traceback (most recent call last):
  File "/opt/ollama/SalesGPT/examples/streaming_generator_example.py", line 36, in <module>
    generator = sales_agent.step(
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/salesgpt/logger.py", line 34, in wrapper
    result = func(*args, **kwargs)  # Function execution
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/salesgpt/agents.py", line 110, in step
    return self._streaming_generator(model_name=model_name)
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/salesgpt/logger.py", line 34, in wrapper
    result = func(*args, **kwargs)  # Function execution
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/salesgpt/agents.py", line 179, in _streaming_generator
    return self.sales_conversation_utterance_chain.llm.completion_with_retry(
  File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/langchain/chat_models/litellm.py", line 263, in completion_with_retry
    return _completion_with_retry(**kwargs)

... (long trace not included)

File "/root/miniconda3/envs/sls/lib/python3.10/site-packages/litellm/utils.py", line 998, in get_llm_provider
    raise ValueError(f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/{model}',..)` Learn more: https://docs.litellm.ai/docs/providers")
ValueError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/orca-mini',..)` Learn more: https://docs.litellm.ai/docs/providers

From my limited understanding, the parameters set on the llm at initialization, i.e. api_base and custom_llm_provider, are somehow lost by the time step() runs. That rather defeats the purpose of integrating LiteLLM, if OpenAI is the only provider I can actually use.
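
If that is what is happening, a hypothetical fix would be for the agent to forward the llm's own settings instead of only model_name. A rough sketch of what I would expect (not the actual SalesGPT code):

# hypothetical sketch of _streaming_generator, not the real implementation
def _streaming_generator(self, model_name=None):
    messages = self._prep_messages()
    llm = self.sales_conversation_utterance_chain.llm
    # forward the provider settings configured on the llm instead of dropping them
    return llm.completion_with_retry(
        messages=messages,
        stream=True,
        model=model_name or llm.model,
        api_base=llm.api_base,
        custom_llm_provider=llm.custom_llm_provider,
    )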

What am I doing wrong? Has anyone successfully run a local Ollama model?

ishaan-jaff commented 1 year ago

@dantodor I'm the maintainer of LiteLLM. It looks like the plain LiteLLM call worked for you, but the ChatLiteLLM call did not?

ishaan-jaff commented 1 year ago

Can we verify whether a regular ChatLiteLLM call works for you? Can you try it like this?

llm = ChatLiteLLM(
    model="ollama/orca-mini",
    api_base="http://localhost:11434",
)
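
For a self-contained check outside SalesGPT, something like this should work (calling the chat model directly; imports per the 2023-era langchain layout):

from langchain.chat_models import ChatLiteLLM
from langchain.schema import HumanMessage

# same configuration as above, exercised directly
llm = ChatLiteLLM(
    model="ollama/orca-mini",
    api_base="http://localhost:11434",
)
print(llm([HumanMessage(content="respond in 20 words. who are you?")]))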

dantodor commented 1 year ago

@ishaan-jaff Sorry for the delay. I modified the test; it behaves almost the same, except that the error is now thrown inside litellm:

salesgpt.logger 2023-09-26 11:00:45,599 - INFO - Running from_llm: --- 0.00010800361633300781 seconds ---
salesgpt.logger 2023-09-26 11:00:45,599 - INFO - Running from_llm: --- 5.459785461425781e-05 seconds ---
salesgpt.logger 2023-09-26 11:00:45,599 - INFO - Running from_llm: --- 0.0003409385681152344 seconds ---
salesgpt.logger 2023-09-26 11:00:45,599 - INFO - Running seed_agent: --- 3.814697265625e-06 seconds ---
salesgpt.logger 2023-09-26 11:00:45,599 - INFO - Running _prep_messages: --- 4.982948303222656e-05 seconds ---
salesgpt.logger 2023-09-26 11:00:45,600 - INFO - Running _streaming_generator: --- 0.0006957054138183594 seconds ---
salesgpt.logger 2023-09-26 11:00:45,600 - INFO - Running step: --- 0.0007891654968261719 seconds ---
Traceback (most recent call last):
  File "/Users/dantodor/work/ab/SalesGPT/examples/ollama.py", line 42, in <module>
    for chunk in generator:
  File "/Users/dantodor/miniforge3/envs/sales/lib/python3.10/site-packages/litellm/llms/ollama.py", line 23, in get_ollama_response_stream
    with session.post(url, json=data, stream=True) as resp:

...

File "/Users/dantodor/miniforge3/envs/sales/lib/python3.10/site-packages/requests/models.py", line 439, in prepare_url
    raise MissingSchema(
requests.exceptions.MissingSchema: Invalid URL 'None/api/generate': No scheme supplied. Perhaps you meant https://None/api/generate?

Which loops back to my initial comment: the 'None/api/generate' URL shows that api_base arrives as None, i.e. the LLM's initial LiteLLM setup is ignored in the call to step().

Did you try it yourself, and did it work?

dantodor commented 1 year ago

Okay, this is a little weird. While looking through the tests, I noticed this line:

sales_agent.determine_conversation_stage()  # optional for demonstration, built into the prompt

So, in the streaming demo, I added exactly that line right after the sales_agent.seed_agent() call. Now the results are:

salesgpt.logger 2023-09-26 11:39:13,446 - INFO - Running from_llm: --- 0.00013375282287597656 seconds ---
salesgpt.logger 2023-09-26 11:39:13,446 - INFO - Running from_llm: --- 6.794929504394531e-05 seconds ---
salesgpt.logger 2023-09-26 11:39:13,446 - INFO - Running from_llm: --- 0.0004940032958984375 seconds ---
salesgpt.logger 2023-09-26 11:39:13,446 - INFO - Running seed_agent: --- 5.9604644775390625e-06 seconds ---
Conversation Stage ID:  Based on the given conversation history, the next immediate conversation stage for the agent in the sales conversation would be qualification.
Conversation Stage: 1
salesgpt.logger 2023-09-26 11:39:22,194 - INFO - Running determine_conversation_stage: --- 8.747308015823364 seconds ---
salesgpt.logger 2023-09-26 11:39:22,194 - INFO - Running _prep_messages: --- 0.00023865699768066406 seconds ---
salesgpt.logger 2023-09-26 11:39:22,196 - INFO - Running _streaming_generator: --- 0.0018270015716552734 seconds ---
salesgpt.logger 2023-09-26 11:39:22,196 - INFO - Running step: --- 0.0020742416381835938 seconds ---
{'choices': [{'delta': {'role': 'assistant', 'content': ' Hey'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ','}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' how'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' are'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' you'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' doing'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' this'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' morning'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': '?'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' I'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' am'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' calling'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' from'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' Sleep'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' Haven'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': '.'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' Can'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' you'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' please'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' explain'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' to'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' me'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' why'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' you'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' need'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' home'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': ' insurance'}}]}
{'choices': [{'delta': {'role': 'assistant', 'content': '?'}}]}

I'm not commenting on the model's off-target utterance, since I'm using orca-mini for testing and don't expect much from it. However, what I can infer from the above is that determine_conversation_stage is not exactly optional?
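
For reference, here is the full order that worked for me, as a minimal sketch (constructor arguments trimmed to the essentials from the example script):

from langchain.chat_models import ChatLiteLLM
from salesgpt.agents import SalesGPT

llm = ChatLiteLLM(model="ollama/orca-mini", api_base="http://localhost:11434")
sales_agent = SalesGPT.from_llm(llm, verbose=False)

sales_agent.seed_agent()
sales_agent.determine_conversation_stage()  # apparently required, despite being marked optional
generator = sales_agent.step(
    return_streaming_generator=True, model_name="ollama/orca-mini",
)
for chunk in generator:
    print(chunk)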

krrishdholakia commented 1 year ago

Hey @dantodor were you able to get this to work?

dantodor commented 1 year ago

@krrishdholakia as you can see above, it works, provided the calls are made in a specific order.

krrishdholakia commented 1 year ago

cc: @filip-michalsky why do they have to pass the model name multiple times?
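
The shape I'd expect (purely hypothetical, not the current SalesGPT API) is for step() to reuse whatever the llm was configured with:

llm = ChatLiteLLM(model="ollama/orca-mini", api_base="http://localhost:11434")
sales_agent = SalesGPT.from_llm(llm)
sales_agent.seed_agent()
# hypothetical: step() would reuse llm.model, so no model_name argument is needed
generator = sales_agent.step(return_streaming_generator=True)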

filip-michalsky commented 1 year ago

@krrishdholakia @dantodor @ishaan-jaff thanks for raising this. Looks like SalesGPT needs some upgrades after the recent lite-llm / langchain version bumps. Looking into it and will revert, thx!

filip-michalsky commented 1 year ago

@dantodor could you please check whether the new version works with your Ollama model?

dantodor commented 1 year ago

Will do and let you know ASAP @filip-michalsky

snnahs1 commented 8 months ago

Did you get it to work?