OpenGenerativeAI / llm-colosseum

Benchmark LLMs by fighting in Street Fighter 3! The new way to evaluate the quality of an LLM
https://huggingface.co/spaces/junior-labs/llm-colosseum
MIT License

can not run local model #50

Closed · mengxiyou closed this issue 7 months ago

mengxiyou commented 7 months ago

I installed ollama and followed the README.md, but I cannot run a local model such as the default option ollama:mistral. Error log (both player threads raise the same error, so the two tracebacks are interleaved in the console; deduplicated here for readability, the second one goes through game.py line 369 and self.game.player_2.robot.plan()):

Exception in thread Thread-5:
Traceback (most recent call last):
  File "C:\Users\86138\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "D:\AI\llm-colosseum-git\eval\game.py", line 383, in run
    self.game.player_1.robot.plan()
  File "D:\AI\llm-colosseum-git\agent\robot.py", line 134, in plan
    next_steps_from_llm = self.get_moves_from_llm()
  File "D:\AI\llm-colosseum-git\agent\robot.py", line 293, in get_moves_from_llm
    llm_stream = self.call_llm()
  File "D:\AI\llm-colosseum-git\agent\robot.py", line 369, in call_llm
    resp = client.stream_chat(messages)
AttributeError: 'NoneType' object has no attribute 'stream_chat'

It seems that robot.py:363, client = get_client(self.model), didn't return a valid client. I checked llm.py and, in the get_client function, there seems to be no branch that handles "ollama:mistral" (provider ollama). How is the local model supposed to be created? Or did I make a mistake?

if provider == "openai":
    from llama_index.llms.openai import OpenAI

    return OpenAI(model=model_name)
elif provider == "anthropic":
    from llama_index.llms.anthropic import Anthropic

    return Anthropic(model=model_name)
elif provider == "mixtral" or provider == "groq":
    from llama_index.llms.groq import Groq

    return Groq(model=model_name)
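
For reference, get_client presumably derives provider and model_name by splitting the model string on the colon, roughly as in the sketch below (the exact parsing in llm.py may differ). Since no branch matches "ollama", the function falls through and returns None, which is what later triggers the AttributeError on client.stream_chat.

# Hypothetical sketch of the provider/model parsing; the actual llm.py may differ.
def get_client(model: str):
    provider, model_name = model.split(":", 1)  # "ollama:mistral" -> ("ollama", "mistral")

    if provider == "openai":
        from llama_index.llms.openai import OpenAI
        return OpenAI(model=model_name)
    # ... anthropic / groq branches as quoted above ...
    # no "ollama" branch, so the function implicitly returns None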
kaveerh commented 7 months ago

+1

vasiliyeskin commented 7 months ago

I installed ollama and followed the README.md, but I cannot run a local model such as the default option ollama:mistral. [...] AttributeError: 'NoneType' object has no attribute 'stream_chat'

Add to the end of the if/elif chain in get_client in llm.py:

elif provider == "ollama":
    # requires: pip install llama-index-llms-ollama
    from llama_index.llms.ollama import Ollama

    return Ollama(model=model_name, request_timeout=90.0)
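
A quick way to sanity-check the Ollama backend on its own (assuming ollama serve is running and the model was pulled with ollama pull mistral) is something like:

# Minimal check that the LlamaIndex Ollama client can reach the local server.
# Assumes `ollama serve` is running and `ollama pull mistral` was done beforehand.
from llama_index.llms.ollama import Ollama

llm = Ollama(model="mistral", request_timeout=90.0)
print(llm.complete("Reply with one word: ready?"))

If that prints a response, ollama:mistral should work end to end with the elif branch above.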
SamPink commented 7 months ago

Yes, this fix is correct. It was my mistake; Ollama needs to be added back as an option using LlamaIndex.

vasiliyeskin commented 7 months ago

Yes, this fix is correct. It was my mistake; Ollama needs to be added back as an option using LlamaIndex.

Please fix the bug and close the issue.

SamPink commented 7 months ago

Yes, this fix is correct. It was my mistake; Ollama needs to be added back as an option using LlamaIndex.

Please fix the bug and close the issue.

I have submitted a PR for this and am waiting for it to be accepted.

oulianov commented 7 months ago

Thank you @SamPink for fixing the issue!

taozhiyuai commented 7 months ago

Thank you @SamPink for fixing the issue!

Done! Please merge it.

oulianov commented 7 months ago

@taozhiyuai this was merged last week: https://github.com/OpenGenerativeAI/llm-colosseum/pull/51