jgravelle / AutoGroq

AutoGroq is a groundbreaking tool that revolutionizes the way users interact with Autogen™ and other AI assistants. By dynamically generating tailored teams of AI agents based on your project requirements, AutoGroq eliminates the need for manual configuration and allows you to tackle any question, problem, or project with ease and efficiency.
https://autogroq.streamlit.app/

LMStudio unexpected keyword argument 'api_key' #35

Closed jsarsoun closed 1 month ago

jsarsoun commented 1 month ago

```
st.experimental_rerun will be removed after 2024-04-01.
Debug: Handling user request for session state: {
    'discussion': '', 'rephrased_request': '', 'api_key': '', 'agents': [],
    'whiteboard': '', 'reset_button': False, 'uploaded_data': None,
    'model_selection': 'xtuner/llava-llama-3-8b-v1_1-gguf',
    'current_project': <current_project.Current_Project object at 0x000002B9215389D0>,
    'max_tokens': 2048, 'LMSTUDIO_API_KEY': 'lm-studio',
    'skill_functions': {
        'execute_powershell_command': <function execute_powershell_command at 0x000002B921CD8220>,
        'fetch_web_content': <function fetch_web_content at 0x000002B9210F5300>,
        'generate_sd_images': <function generate_sd_images at 0x000002B921CCDEE0>,
        'get_weather': <function get_weather at 0x000002B921CD85E0>,
        'save_file_to_disk': <function save_file_to_disk at 0x000002B9238EAB60>
    },
    'selected_skills': [], 'autogen_zip_buffer': None, 'show_request_input': True,
    'discussion_history': '', 'rephrased_request_area': '', 'crewai_zip_buffer': None,
    'temperature': 0.3, 'previous_user_request': 'what is an llm',
    'model': 'xtuner/llava-llama-3-8b-v1_1-gguf', 'skill_name': None,
    'last_agent': '', 'last_comment': '', 'skill_request': '',
    'user_request': 'what does 1 + 1 equal?', 'user_input': '',
    'reference_html': {}, 'reference_url': ''
}
Debug: Sending request to rephrase_prompt
Debug: Model: xtuner/llava-llama-3-8b-v1_1-gguf
Executing rephrase_prompt()
Error occurred in handle_user_request: LmstudioProvider.__init__() got an unexpected keyword argument 'api_key'
```
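The last line is the crux: the constructor is being called with an `api_key` keyword it doesn't accept (GitHub's markdown swallows the double underscores, so the error actually names `__init__`). A minimal reproduction of that TypeError, using a stripped-down stand-in class rather than AutoGroq's actual code:

```python
class LmstudioProvider:
    def __init__(self, api_url):  # no api_key parameter accepted
        self.api_url = api_url

try:
    LmstudioProvider(api_url="http://localhost:1234/v1/chat/completions",
                     api_key="lm-studio")
except TypeError as e:
    print(e)  # __init__() got an unexpected keyword argument 'api_key'
```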

```python
# User-specific configurations
LLM_PROVIDER = "lmstudio"
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
OPENAI_API_KEY = "your_openai_api_key"
OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"
```

```python
elif LLM_PROVIDER == "lmstudio":
    API_URL = LMSTUDIO_API_URL
    MODEL_TOKEN_LIMITS = {
        'xtuner/llava-llama-3-8b-v1_1-gguf': 2048,
    }
```

```python
MODEL_CHOICES = {
    'default': None,
    'gemma-7b-it': 8192,
    'gpt-4o': 4096,
    'xtuner/llava-llama-3-8b-v1_1-gguf': 2048,
    'llama3': 8192,
    'llama3-70b-8192': 8192,
    'llama3-8b-8192': 8192,
    'mixtral-8x7b-32768': 32768
}
```

jsarsoun commented 1 month ago

Fixed by adding

```python
class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):
        self.api_url = api_url
        self.api_key = api_key
```

Shake-Shifter commented 1 month ago

So I'm having the same issue here. I added the code you suggested, but got a formatting error due to the lack of an indent. Now I'm getting an error that BaseLLMProvider is not defined. I apologize if the last two lines of code are garbage; I'm an aspiring amateur at best trying to make this all work. I've got Autogen working with LM Studio, now I just need AutoGroq to complete the think tank.

```python
# User-specific configurations
LLM_PROVIDER = "lmstudio"
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
OPENAI_API_KEY = "0987654321"
OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"

class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):
        self.api_url = api_url
        self.api_key = api_key
```

jsarsoun commented 1 month ago

Try this.

```python
class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):
        self.api_url = "http://localhost:1234/v1/chat/completions"
```
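A note for anyone copying these snippets: GitHub's markdown renders double underscores as formatting, so `init` in the posts above is really the dunder method `__init__`. A sketch of the same fix with indentation restored, assuming `BaseLLMProvider` is importable from a provider module like the path below (the import path is a guess; locate the class in your AutoGroq checkout and adjust):

```python
# Hypothetical import path -- find where BaseLLMProvider lives in your copy of the repo.
from llm_providers.base_provider import BaseLLMProvider

class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):
        # LM Studio's local server doesn't require a key, so None is a safe default.
        self.api_url = "http://localhost:1234/v1/chat/completions"
        self.api_key = api_key
```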

Shake-Shifter commented 1 month ago

Still getting an error:

```
NameError: name 'BaseLLMProvider' is not defined
Traceback:
File "C:\Users\shake\.conda\envs\Ag\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
File "C:\Users\shake\Ag\AutoGroq\AutoGroq\main.py", line 3, in <module>
    from config import LLM_PROVIDER, MODEL_TOKEN_LIMITS
File "C:\Users\shake\Ag\Autogroq\AutoGroq\config.py", line 17, in <module>
    from config_local import *
File "C:\Users\shake\Ag\Autogroq\AutoGroq\config_local.py", line 10, in <module>
    class LmstudioProvider(BaseLLMProvider):
                           ^^^^^^^^^^^^^^^
```
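The traceback shows the failure happening while config.py wildcard-imports config_local.py: Python evaluates the base-class name the moment the `class` statement executes, so a missing import fails at import time. A minimal standalone illustration:

```python
# Subclassing evaluates the base-class name immediately, so an undefined
# BaseLLMProvider fails as soon as the class statement runs.
try:
    class LmstudioProvider(BaseLLMProvider):  # BaseLLMProvider never imported
        pass
except NameError as e:
    print(e)  # name 'BaseLLMProvider' is not defined
```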

jgravelle commented 1 month ago

Is this your model in LM Studio?: instructlab/granite-7b-lab-GGUF

If not, you'll have to tweak your config.py...
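In practice the tweak means editing the lmstudio branch of `MODEL_TOKEN_LIMITS` and the `MODEL_CHOICES` dictionary so the keys match the model name LM Studio reports. A sketch under those assumptions, with the Qwen model from later in this thread as the example and `LMSTUDIO_MODEL` as a purely illustrative local variable:

```python
# The dictionary keys must match the model name exactly as LM Studio reports it.
LMSTUDIO_MODEL = 'Qwen/CodeQwen1.5-7B-Chat-GGUF'  # hypothetical helper name

MODEL_TOKEN_LIMITS = {LMSTUDIO_MODEL: 64000}              # lmstudio branch in config.py
MODEL_CHOICES = {'default': None, LMSTUDIO_MODEL: 64000}  # model dropdown choices
```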

Shake-Shifter commented 1 month ago

It's not, and I did actually go through my config.py and change the entries to match the model I'm using before getting this error. This is my config.py:

```python
import os

# Get user home directory
home_dir = os.path.expanduser("~")
default_db_path = f'{home_dir}/.autogenstudio/database.sqlite'

# Default configurations
DEFAULT_LLM_PROVIDER = "groq"
DEFAULT_GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
DEFAULT_LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
DEFAULT_OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
DEFAULT_OPENAI_API_KEY = None
DEFAULT_OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"

# Try to import user-specific configurations from config_local.py
try:
    from config_local import *
except ImportError:
    pass

# Set the configurations using the user-specific values if available, otherwise use the defaults
LLM_PROVIDER = locals().get('LLM_PROVIDER', DEFAULT_LLM_PROVIDER)
GROQ_API_URL = locals().get('GROQ_API_URL', DEFAULT_GROQ_API_URL)
LMSTUDIO_API_URL = locals().get('LMSTUDIO_API_URL', DEFAULT_LMSTUDIO_API_URL)
OLLAMA_API_URL = locals().get('OLLAMA_API_URL', DEFAULT_OLLAMA_API_URL)
OPENAI_API_KEY = locals().get('OPENAI_API_KEY', DEFAULT_OPENAI_API_KEY)
OPENAI_API_URL = locals().get('OPENAI_API_URL', DEFAULT_OPENAI_API_URL)

API_KEY_NAMES = {
    "groq": "GROQ_API_KEY",
    "lmstudio": None,
    "ollama": None,
    "openai": "OPENAI_API_KEY",
    # Add other LLM providers and their respective API key names here
}

# Retry settings
MAX_RETRIES = 3
RETRY_DELAY = 2  # in seconds
RETRY_TOKEN_LIMIT = 5000

# Model configurations
if LLM_PROVIDER == "groq":
    API_URL = GROQ_API_URL
    MODEL_TOKEN_LIMITS = {
        'mixtral-8x7b-32768': 32768,
        'llama3-70b-8192': 8192,
        'llama3-8b-8192': 8192,
        'gemma-7b-it': 8192,
    }
elif LLM_PROVIDER == "lmstudio":
    API_URL = LMSTUDIO_API_URL
    MODEL_TOKEN_LIMITS = {
        'Qwen/CodeQwen1.5-7B-Chat-GGUF': 64000,
    }
elif LLM_PROVIDER == "openai":
    API_URL = OPENAI_API_URL
    MODEL_TOKEN_LIMITS = {
        'gpt-4o': 4096,
    }
elif LLM_PROVIDER == "ollama":
    API_URL = OLLAMA_API_URL
    MODEL_TOKEN_LIMITS = {
        'llama3': 8192,
    }
else:
    MODEL_TOKEN_LIMITS = {}

# Database path
# AUTOGEN_DB_PATH = "/path/to/custom/database.sqlite"
AUTOGEN_DB_PATH = os.environ.get('AUTOGEN_DB_PATH', default_db_path)

MODEL_CHOICES = {
    'default': None,
    'gemma-7b-it': 8192,
    'gpt-4o': 4096,
    'Qwen/CodeQwen1.5-7B-Chat-GGUF': 64000,
    'llama3': 8192,
    'llama3-70b-8192': 8192,
    'llama3-8b-8192': 8192,
    'mixtral-8x7b-32768': 32768
}
```
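The override mechanism in this file is worth spelling out: `from config_local import *` binds any user-defined names into config.py's namespace, and `locals().get(name, default)` falls back to the built-in default when no override exists. The same pattern, reduced to a standalone sketch:

```python
# Reduced sketch of config.py's override pattern.
DEFAULT_LLM_PROVIDER = "groq"

try:
    from config_local import *  # may bind LLM_PROVIDER, API URLs, etc.
except ImportError:
    pass  # no config_local.py present; defaults apply

# If the wildcard import bound LLM_PROVIDER, use it; otherwise the default.
LLM_PROVIDER = locals().get('LLM_PROVIDER', DEFAULT_LLM_PROVIDER)
print(LLM_PROVIDER)  # "lmstudio" when overridden, "groq" otherwise
```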

jgravelle commented 1 month ago

Can't replicate. Godspeed, weary traveler...

Shake-Shifter commented 1 month ago

I put the error, the config.py, and the config_local.py into GPT and it said:

The error message indicates that BaseLLMProvider is not defined. This typically happens when the module or class BaseLLMProvider is not imported or not available in the current namespace.

From your config.py file, it seems like BaseLLMProvider should be imported from somewhere. However, in the provided code, I don't see any import statement for BaseLLMProvider.

To fix this issue, you need to ensure that BaseLLMProvider is imported correctly before it's referenced. If BaseLLMProvider is supposed to be part of the config_local.py file, then you should make sure that it's defined there or imported from wherever it's defined.

When you're trying to run it off LM Studio to test for the issue, what do your config.py and config_local.py look like? There's got to be something you're doing differently that's making it load.

jsarsoun commented 1 month ago

Looking at what you originally posted, do you have this text in your config_local.py file? If so, it shouldn't be there. Also, if you have the most recent committed code, the original issue of the api_key error has been fixed.

```python
class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):
        self.api_url = api_url
        self.api_key = api_key
```
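For comparison, a config_local.py that follows the intended pattern contains only user-specific constants and no class definitions; a minimal sketch based on the defaults quoted earlier:

```python
# config_local.py -- user-specific overrides only; no provider classes here.
LLM_PROVIDER = "lmstudio"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
```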

Shake-Shifter commented 1 month ago

So I had the version that you posted before, which was slightly different:

```python
class LmstudioProvider(BaseLLMProvider):
    def __init__(self, api_url, api_key=None):
        self.api_url = "http://localhost:1234/v1/chat/completions"
```

That gave the "BaseLLMProvider is not defined" error. However, after posting here I see the underscores on either side of "init" aren't showing up, even though they did in your post that I copied it from.

If I use the version you just posted with "*" in place of "_", I get the following error:

```
SyntaxError: File "C:\Users\shake\Ag\Autogroq\AutoGroq\config_local.py", line 11
    def *init*(self, api_url, api_key=None):
        ^
SyntaxError: invalid syntax
Traceback:
File "C:\Users\shake\.conda\envs\Ag\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
File "C:\Users\shake\Ag\AutoGroq\AutoGroq\main.py", line 3, in <module>
    from config import LLM_PROVIDER, MODEL_TOKEN_LIMITS
File "C:\Users\shake\Ag\Autogroq\AutoGroq\config.py", line 17, in <module>
    from config_local import *
```

Shake-Shifter commented 1 month ago

I also just downloaded the latest config.py (didn't bother downloading the latest config_local.py, as it didn't look like anything had changed), and it made no difference to the syntax error.

Shake-Shifter commented 1 month ago

I also tried removing the asterisks, both because the syntax error highlighted "init" and because they disappeared after I posted it in here for some reason, so I ended up with this:

```python
def init(self, api_url, api_key=None):
```

Instead of:

def ""init""(self, api_url, api_key=None):

I tried putting the asterisks in quotation marks just to keep them from disappearing in the post here, but for some reason they still become invisible and init becomes italicized.

But making that change just gave the "BaseLLMProvider is not defined" error again anyway, so it made no difference.
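The spelling matters beyond rendering quirks: Python only invokes the double-underscore `__init__` during construction, a method named plain `init` just sits unused, and `*init*` with literal asterisks is a syntax error. A small demonstration with throwaway class names:

```python
class WithPlainInit:
    def init(self, api_url):  # ordinary method; Python never calls it automatically
        self.api_url = api_url

obj = WithPlainInit()
print(hasattr(obj, "api_url"))  # False -- init() was never run

class WithDunderInit:
    def __init__(self, api_url):  # the constructor hook Python actually calls
        self.api_url = api_url

print(WithDunderInit("http://localhost:1234/v1/chat/completions").api_url)
```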

Shake-Shifter commented 1 month ago

Eureka! So based on your comment about having solved the original problem and the files having been updated, I decided to ditch the effort to patch the problem and just update my repo with your new files. Ran into some complaints that it couldn't update because it would mess up my config.py, but did some hard-reset thingamabob and then I was able to pull the updated files. Initially it still threw the "BaseLLMProvider is not defined" error, because stupid me hadn't yet erased the stuff we threw into config_local.py. Deleted that, put my model back into the updated config.py, reran AutoGroq, and we're cooking with gasoline over here now, buddy!

Thanks again, and I'll see you in the future

jgravelle commented 1 month ago

Glad it worked out... https://discord.gg/DXjFPX84gs