onuratakan / gpt-computer-assistant

gpt-4o for windows, macos and linux
MIT License
4.73k stars 438 forks

How to use the local llm with ollama? #136

Open chrishuang758 opened 2 weeks ago

chrishuang758 commented 2 weeks ago

GPT-4o runs fine, but when I switch to a local model I get error messages: `EXCEPTION: 'function' object has no attribute 'name'`, `EXCEPTION: generator raised StopIteration`, and also `EXCEPTION: 'messages'` (screenshots attached).

`ollama list` output: (screenshots attached)

I also modified the relevant files to match my local models. Is the format below correct?

File `llm_settings.py`:

```python
llm_settings = {
    "gpt-4o": {"vision":True, "transcription":True, "provider":"openai"},
    "gpt-4-turbo": {"vision":False, "transcription":True, "provider":"openai"},
    "gpt-3.5-turbo": {"vision":False, "transcription":True, "provider":"openai"},
    "llama3:8b": {"vision":False, "transcription":False, "provider":"ollama"},
    "llava:13b": {"vision":True, "transcription":False, "provider":"ollama"},
    "qwen2:7b": {"vision":False, "transcription":False, "provider":"ollama"},
    "phi3:14b": {"vision":False, "transcription":False, "provider":"ollama"},
    "codestral:22b": {"vision":False, "transcription":False, "provider":"ollama"},
    "codegemma:7b": {"vision":False, "transcription":False, "provider":"ollama"},
}

llm_show_name = {
    "gpt-4o (OpenAI)": "gpt-4o",
    "gpt-4-turbo (OpenAI)": "gpt-4-turbo",
    "gpt-3.5-turbo (OpenAI)": "gpt-3.5-turbo",
    "llava (Ollama)": "llava:13b",
    "llama3 (Ollama)": "llama3:8b",
    "qwen2 (Ollama)": "qwen2:7b",
    "phi3 (Ollama)": "phi3:14b",
    "codestral (Ollama)": "codestral:22b",
    "codegemma (Ollama)": "codegemma:7b",
}
```
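For context on how these two dicts work together: the UI presents the keys of `llm_show_name`, and the resulting model id is looked up in `llm_settings`. A minimal sketch of that resolution, using a trimmed copy of the dicts above (the helper name `resolve_model` is hypothetical, not from the repo):

```python
# Trimmed copies of the two settings dicts from llm_settings.py.
llm_settings = {
    "gpt-4o": {"vision": True, "transcription": True, "provider": "openai"},
    "llama3:8b": {"vision": False, "transcription": False, "provider": "ollama"},
}

llm_show_name = {
    "gpt-4o (OpenAI)": "gpt-4o",
    "llama3 (Ollama)": "llama3:8b",
}

def resolve_model(display_name: str) -> dict:
    """Map a UI display name to its model id plus capability/provider flags."""
    model_id = llm_show_name[display_name]   # e.g. "llama3:8b"
    settings = llm_settings[model_id]        # vision/transcription/provider info
    return {"model_id": model_id, **settings}
```

If a display name in `llm_show_name` points at a model id that is missing from `llm_settings` (or from `ollama list`), the lookup fails, so it is worth checking that every id appears in both places.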

File `llm.py`:

```python
    model_mapping = {
        # OpenAI
        "gpt-4o": (ChatOpenAI, args_mapping[ChatOpenAI]),
        "gpt-4-turbo": (ChatOpenAI, args_mapping[ChatOpenAI]),
        "gpt-3.5-turbo": (ChatOpenAI, args_mapping[ChatOpenAI]),

        # Ollama (local models)
        "llava:13b": (ChatOllama, args_mapping[ChatOllama]),
        "llama3:8b": (ChatOllama, args_mapping[ChatOllama]),
        "qwen2:7b": (ChatOllama, args_mapping[ChatOllama]),
        "phi3:14b": (ChatOllama, args_mapping[ChatOllama]),
        "codestral:22b": (ChatOllama, args_mapping[ChatOllama]),
        "codegemma:7b": (ChatOllama, args_mapping[ChatOllama]),

        # Google Generative AI - Gemini
        #"gemini-pro": (ChatGoogleGenerativeAI, args_mapping[ChatGoogleGenerativeAI]),

        # Groq
        #"mixtral-8x7b-groq": (ChatGroq, args_mapping[ChatGroq])
    }
```
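For what it's worth, `model_mapping` only selects a chat-model class and its argument mapping; instantiation happens elsewhere. A self-contained sketch of that dispatch, with stub classes standing in for the real `ChatOpenAI`/`ChatOllama` (which live in `langchain_openai` / `langchain_community`), and an assumed shape for `args_mapping`:

```python
# Stubs stand in for the real LangChain classes so this sketch runs offline.
class ChatOpenAI:
    def __init__(self, **kwargs): self.kwargs = kwargs

class ChatOllama:
    def __init__(self, **kwargs): self.kwargs = kwargs

# Assumed shape: per-class map of setting name -> constructor argument name.
args_mapping = {
    ChatOpenAI: {"model": "model", "api_key": "api_key"},
    ChatOllama: {"model": "model"},
}

model_mapping = {
    "gpt-4o": (ChatOpenAI, args_mapping[ChatOpenAI]),
    "llama3:8b": (ChatOllama, args_mapping[ChatOllama]),
}

def build_llm(model_id: str, **settings):
    """Instantiate the mapped class, passing only the args it accepts."""
    cls, mapping = model_mapping[model_id]
    kwargs = {arg: settings[key] for key, arg in mapping.items() if key in settings}
    return cls(**kwargs)
```

The point of the sketch: if a model id is registered in `llm_settings` but missing from `model_mapping` (or vice versa), the dispatch raises a `KeyError` before the model is ever called, which is one thing to rule out when debugging local models.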

How can I solve this problem? Has anyone else encountered this situation?

Mideky-hub commented 2 weeks ago

I do not remember having tested locally installed LLMs; @onuratakan may know whether he has encountered this issue. That said, from your implementation it does seem right to me, though there may be some details missing for a complete implementation.

Are you sure your LLM is supported by LangChain? If so, is that support in langchain-core? If not, your issue likely comes from your model going through langchain_core.

I thought Gemma was handled by Google with langchain-google-vertexai (from langchain_google_vertexai import GemmaVertexAIModelGarden, GemmaChatVertexAIModelGarden) and its "GemmaVertexAIModelGarden" class.

You may find more information about Gemma at https://ai.google.dev/gemma/docs/integrations/langchain

onuratakan commented 2 weeks ago

Hi, it seems there is a small problem with the tool infra. Can you update to the latest version and disable the Tiger tools?

(screenshot)

chrishuang758 commented 2 weeks ago

@onuratakan thanks! Ollama is now working and the locally installed LLMs (llama3, phi3, llava, and so on) load. But the agent's response ends in an EXCEPTION:

MODEL gpt-4o (OpenAI) llama3:8b
MODEL gpt-4-turbo (OpenAI) llama3:8b
MODEL gpt-3.5-turbo (OpenAI) llama3:8b
MODEL Llava (Ollama) llama3:8b
MODEL Llama3 (Ollama) llama3:8b
MODEL Qwen2 (Ollama) llama3:8b
MODEL Phi-3 (Ollama) llama3:8b
MODEL gemini-pro (Google) llama3:8b
MODEL Mixtral 8x7b (Groq) llama3:8b
State updated: thinking
Updating from thread Thinking...
State updated: thinking
Updating from thread Thinking...
LLM INPUT who are you?

> Entering new AgentExecutor chain...
```json
{
    "action": "Final Answer",
    "action_input": "I'm Assistant, a large language model trained by OpenAI. I can assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics."
}
```

> Finished chain.
Error in process_text 'messages'
Traceback (most recent call last):
  File "E:\stial\anaconda3\envs\gpt\Lib\site-packages\gpt_computer_assistant\agent\process.py", line 232, in process_text
    llm_output = assistant(
                 ^^^^^^^^^^
  File "E:\stial\anaconda3\envs\gpt\Lib\site-packages\gpt_computer_assistant\agent\assistant.py", line 214, in assistant
    the_last_messages = msg["messages"]
                        ~~~^^^^^^^^^^^^
KeyError: 'messages'
Updating from thread EXCEPTION: 'messages'
State updated: idle
State updated: thinking
Updating from thread Thinking...
State updated: thinking
Updating from thread Thinking...
LLM INPUT why the sky is blue?

> Entering new AgentExecutor chain...
```json
{
    "action": "google",
    "action_input": "Why is the sky blue?"
}
```

I copied to clipboard.
['https://spaceplace.nasa.gov/resources/video-thumbnails/why-is-the-sky-blue.en.jpg?sa=X&ved=2ahUKEwjQ7oPstdyGAxX6ZWwGHTp1A-oQ_B16BAgHEAI', 'https://spaceplace.nasa.gov/blue-sky/', 'https://www.livescience.com/planet-earth/why-is-the-sky-blue', 'https://www.rmg.co.uk/stories/topics/why-sky-blue', 'https://scijinks.gov/blue-sky/', 'https://www.skyatnightmagazine.com/space-science/why-is-the-sky-blue', 'https://www.metoffice.gov.uk/weather/learn-about/weather/optical-effects/why-is-the-sky-blue', 'https://www.reddit.com/r/askscience/comments/14566ig/why_is_the_sky_blue_do_i_understand_it_correctly/', 'https://www.reddit.com/r/explainlikeimfive/comments/xrt34g/eli5_why_is_the_sky_a_different_shade_of_blue_at/', 'https://www.reddit.com/r/explainlikeimfive/comments/l691jf/eli5why_is_the_sky_blue/', 'https://www.reddit.com/r/explainlikeimfive/comments/jgtxi/eli5_why_is_the_sky_blue_no_seriously_like_im/', 'https://www.reddit.com/r/explainlikeimfive/comments/1ahfsxw/eli5_why_is_the_sky_blue_when_space_is_black/', 'https://kids.nationalgeographic.com/books/article/sky', 'https://www.youtube.com/watch?v=ehUIlhKhzDA', 'https://math.ucr.edu/home/baez/physics/General/BlueSky/blue_sky.html', 'https://www.mcgill.ca/oss/article/environment-general-science-you-asked/why-sky-blue-or-better-yet-why-ocean-blue', 'https://www.weather.gov/fgz/SkyBlue', 'https://science.howstuffworks.com/nature/climate-weather/atmospheric/sky.htm', 'https://www.britannica.com/story/why-is-the-sky-blue', 'https://www.scientificamerican.com/article/why-is-the-sky-blue/']

```json
{
    "action": "Final Answer",
    "action_input": "The sky appears blue because of a phenomenon called Rayleigh scattering, in which shorter wavelengths of light are scattered more than longer wavelengths by the tiny molecules of gases in the atmosphere. This scattering effect is more pronounced for blue light, resulting in the blue color we see when looking at the sky."
}
```

> Finished chain.
Error in process_text 'messages'
Traceback (most recent call last):
  File "E:\stial\anaconda3\envs\gpt\Lib\site-packages\gpt_computer_assistant\agent\process.py", line 232, in process_text
    llm_output = assistant(
                 ^^^^^^^^^^
  File "E:\stial\anaconda3\envs\gpt\Lib\site-packages\gpt_computer_assistant\agent\assistant.py", line 214, in assistant
    the_last_messages = msg["messages"]
                        ~~~^^^^^^^^^^^^
KeyError: 'messages'
Updating from thread EXCEPTION: 'messages'
State updated: idle
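Reading the traceback: the chain itself completes (the "Final Answer" JSON above is produced), but `assistant.py` then does `msg["messages"]`, and the result dict from a local model apparently has no `"messages"` key. A defensive extraction helper, purely as a sketch of a workaround and not the repo's actual fix (the `"output"` fallback is an assumption based on the usual AgentExecutor result shape):

```python
def extract_last_messages(msg):
    """Return the agent's messages, tolerating provider-specific result shapes.

    OpenAI-backed runs appear to return {"messages": [...]}; local runs may
    return something like {"output": "..."} or even a bare string instead.
    """
    if isinstance(msg, dict):
        if "messages" in msg:
            return msg["messages"]
        if "output" in msg:          # assumed AgentExecutor-style result
            return [msg["output"]]
        return [msg]                 # unknown dict shape: wrap as-is
    return [msg]                     # bare string or other object

# Instead of: the_last_messages = msg["messages"]
# use:        the_last_messages = extract_last_messages(msg)
```

This would at least replace the hard `KeyError: 'messages'` with a usable answer; the proper fix is for the assistant to normalize the result shape per provider.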

(screenshot)