Closed: silviachen46 closed this issue 3 months ago
@silviachen46 I got the same error... let me know if you find a solution.
It currently doesn't support a Groq API key (the GROQQ_API_KEY above); you have to use an OpenAI API key only.
In my case I only have an OpenAI API key and got the same error.
Same here. I tried with the OpenAI API and the Together API; both hit the same invalid API key issue as above.
@lauraparra28 @maverick001 Hey, I managed to get it working locally with Groq + nomic embeddings via Ollama. I modified my code as user spacelearner described in issue #345; I guess there are some compatibility issues.
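For anyone else trying this, the edit is roughly the one below (a sketch of the change to graphrag/llm/openai/openai_embeddings_llm.py along the lines of what spacelearner describes in #345; it assumes the ollama Python package and a locally pulled nomic-embed-text model, and the exact file contents may differ between GraphRAG versions):

# inside graphrag/llm/openai/openai_embeddings_llm.py -- replace the body of
# OpenAIEmbeddingsLLM._execute_llm so embeddings come from a local Ollama model
# instead of the OpenAI embeddings endpoint (sketch; adjust to your version)
import ollama  # add at the top of the file


async def _execute_llm(self, input, **kwargs):
    embedding_list = []
    for text in input:
        # assumes `ollama pull nomic-embed-text` has already been run locally
        result = ollama.embeddings(model="nomic-embed-text", prompt=text)
        embedding_list.append(result["embedding"])
    return embedding_list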
Describe the bug
I was using Groq for general completion and OpenAI for embeddings. For some reason it keeps giving me an invalid API key error in the log, but my API token is actually valid (I tested it with a couple of tasks and they all worked). It's most likely that this is not caused by Groq, since:
Steps to reproduce
Expected Behavior
No response
GraphRAG Config Used
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GROQQ_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: llama3-8b-8192
  model_supports_json: true # recommended if this is available for your model.
  max_tokens: 4000
  request_timeout: 180.0
  api_base: https://api.groq.com/openai/v1
  api_version: 2024-02-15-preview
  organization:
  deployment_name:
  tokens_per_minute: 20000 # set a leaky bucket throttle
  requests_per_minute: 30 # set a leaky bucket throttle
  max_retries: 3
  max_retry_wait: 10.0
  sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  concurrent_requests: 1 # the number of parallel inflight requests that may be made

parallelization:
  stagger: 0.3
  num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  # parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding
    model: text-embedding-3-small
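As far as I understand, both ${...} placeholders are expanded from the environment (or the .env file next to settings.yaml) when the pipeline runs, so with this split setup both variables need to be set. A hypothetical .env matching the config above (placeholder values, not real keys):

# .env in the project root (same folder as settings.yaml); values are placeholders
GROQQ_API_KEY=gsk_xxxxxxxxxxxxxxxx       # Groq key used by the chat llm block
GRAPHRAG_API_KEY=sk-xxxxxxxxxxxxxxxx     # OpenAI key used by the embeddings llm block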
Logs and screenshots
{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n File \"D:\python3.11\Lib\site-packages\graphrag\llm\base\base_llm.py\", line 53, in _invoke\n output = await self._execute_llm(input, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"D:\python3.11\Lib\site-packages\graphrag\llm\openai\openai_embeddings_llm.py\", line 36, in _execute_llm\n embedding = await self.client.embeddings.create(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"D:\python3.11\Lib\site-packages\openai\resources\embeddings.py\", line 215, in create\n return await self._post(\n ^^^^^^^^^^^^^^^^^\n File \"D:\python3.11\Lib\site-packages\openai\_base_client.py\", line 1805, in post\n return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"D:\python3.11\Lib\site-packages\openai\_base_client.py\", line 1503, in request\n return await self._request(\n ^^^^^^^^^^^^^^^^^^^^\n File \"D:\python3.11\Lib\site-packages\openai\_base_client.py\", line 1599, in _request\n raise self._make_status_error_from_response(err.response) from None\nopenai.AuthenticationError: Error code: 401 - {'error': {'message': 'Invalid API Key', 'type': 'invalid_request_error', 'code': 'invalid_api_key'}}\n", "source": "Error code: 401 - {'error': {'message': 'Invalid API Key', 'type': 'invalid_request_error', 'code': 'invalid_api_key'}}", "details": {"input": ["\"EMQX CLUSTER\":\"EMQX Cluster is a group of EMQX nodes that work together to provide a scalable and fault-tolerant messaging service.\"", "\"TOPIC B\":", "\"GEO\":", "\"EMQX ENTERPRISE LICENSE\":\"EMQX Enterprise License is a type of license provided by EMQX Enterprise.\"", "\"ACCOUNT MANAGER\":\"Account Manager is a person responsible for managing licenses and providing support.\"", "\"LICENSE\":\"License refers to a permission or authorization to use a product or service.\"", "\"EMQX ENTERPRISE LICENSE UPDATE\":\"EMQX Enterprise License Update is the process of updating a license for EMQX Enterprise.\"", "\"LOG\":\"Log refers to a record of events or activities, used for troubleshooting and monitoring purposes.\"", "\"SSL/TLS\":\"SSL/TLS is a cryptographic protocol used for secure communication over the internet.\"", "\"CERTIFICATE\":Here is a comprehensive summary of the data:\n\nThe CERTIFICATE is a digital file used for authentication and encryption purposes. It is employed in various contexts, including the TLS handshake process, to verify the identity of a website or server, and in MQTT connections to authenticate and encrypt data. In essence, the CERTIFICATE serves as a digital proof of identity, allowing entities to establish trust and secure communication with one another.", "\"CIPHER SUITE\":\"Cipher Suite is a set of algorithms used for encrypting and decrypting data.\"", "\"HANDSHAKE FAILURE\":Here is a comprehensive summary of the data:\n\n\"Handshake Failure\" is an error that occurs during the SSL/TLS connection handshake process.", "\"UNKNOWN CA\":Here is a comprehensive summary of the data:\n\n\"UNKNOWN CA\" is an error that occurs when the certificate verification fails. 
This type of error occurs when the certificate verification process is unable to verify the authenticity of a certificate, resulting in a failure to establish a secure connection.\n\nNote: I resolved the contradictions by combining the two descriptions into a single, coherent summary.", "\"SERVER\":Here is a comprehensive summary of the data:\n\nThe \"SERVER\" is a device or application that provides MQTT services to clients, offering a platform for message-based communication. Additionally, the server is also a party involved in the TLS (Transport Layer Security) handshake process, indicating its role in securing communication between clients and other entities.", "\"INTERMEDIATE CA\":\"Intermediate CA is a trusted certificate authority that issues certificates to other organizations.\"", "\"ROOT CA\":\"Root CA is a trusted certificate authority that issues certificates to intermediate CAs.\""]}}
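For what it's worth, the stack trace points at openai_embeddings_llm.py, i.e. the embeddings LLM (the ${GRAPHRAG_API_KEY} one), not the Groq chat model. A standalone check of that same call path looks roughly like this (a minimal sketch assuming the openai v1 Python client; the model mirrors the settings.yaml above):

import os
from openai import OpenAI

# Exercises the same call the failing code makes: an embeddings request against
# the OpenAI endpoint using the key from GRAPHRAG_API_KEY.
client = OpenAI(api_key=os.environ["GRAPHRAG_API_KEY"])
resp = client.embeddings.create(model="text-embedding-3-small", input=["hello world"])
print(len(resp.data[0].embedding))  # text-embedding-3-small returns 1536-dim vectors by default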
Additional Information