Cinnamon / kotaemon

An open-source RAG-based tool for chatting with your documents.
https://cinnamon.github.io/kotaemon/
Apache License 2.0
12.49k stars · 934 forks

[BUG] Error code: 401 - {'error': {'message': 'Incorrect API key provided: <API_KEY>. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}} #332

Open zzll22 opened 2 days ago

zzll22 commented 2 days ago

Description

GraphRAG indexing does not succeed. Both OpenAI and a local Ollama setup report the invalid_api_key error. How can I solve this?

Reproduction steps

1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

Screenshots

No response

Logs

{"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last):\n  File \"/opt/anaconda3/envs/kotaemon/lib/python3.10/site-packages/graphrag/llm/base/base_llm.py\", line 53, in _invoke\n    output = await self._execute_llm(input, **kwargs)\n  File \"/opt/anaconda3/envs/kotaemon/lib/python3.10/site-packages/graphrag/llm/openai/openai_chat_llm.py\", line 53, in _execute_llm\n    completion = await self.client.chat.completions.create(\n  File \"/opt/anaconda3/envs/kotaemon/lib/python3.10/site-packages/openai/resources/chat/completions.py\", line 1412, in create\n    return await self._post(\n  File \"/opt/anaconda3/envs/kotaemon/lib/python3.10/site-packages/openai/_base_client.py\", line 1816, in post\n    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)\n  File \"/opt/anaconda3/envs/kotaemon/lib/python3.10/site-packages/openai/_base_client.py\", line 1510, in request\n    return await self._request(\n  File \"/opt/anaconda3/envs/kotaemon/lib/python3.10/site-packages/openai/_base_client.py\", line 1611, in _request\n    raise self._make_status_error_from_response(err.response) from None\nopenai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: <API_KEY>. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}\n", "source": "Error code: 401 - {'error': {'message': 'Incorrect API key provided: <API_KEY>. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}"

Browsers

No response

OS

macOS

Additional information

No response

taprosoft commented 2 days ago

If you are using Linux / macOS, please use `export $(cat .env | xargs)` to load the environment variables from the `.env` file prior to running `app.py`.

Or, please make sure that you set up these environment variables:

# settings for GraphRAG
GRAPHRAG_API_KEY=openai_key
GRAPHRAG_LLM_MODEL=gpt-4o-mini
GRAPHRAG_EMBEDDING_MODEL=text-embedding-3-small
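
If your `.env` contains comments, blank lines, or values with spaces, the `xargs` one-liner can fail. A common shell alternative (just a sketch using the POSIX `allexport` option; it assumes the file uses shell-compatible `KEY=VALUE` lines with quoted values) is:

# export every variable assigned while sourcing the file
set -a
. ./.env   # or: source .env
set +a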
zzll22 commented 1 day ago

@taprosoft Thank you very much, this is a great help to me. May I ask how to set up local Ollama for use with GraphRAG?

Lee-Ju-Yeong commented 19 hours ago

> If you are using Linux / macOS, please use `export $(cat .env | xargs)` to load the environment variables from the `.env` file prior to running `app.py`.
>
> Or, please make sure that you set up these environment variables:
>
> # settings for GraphRAG
> GRAPHRAG_API_KEY=openai_key
> GRAPHRAG_LLM_MODEL=gpt-4o-mini
> GRAPHRAG_EMBEDDING_MODEL=text-embedding-3-small

Running `export $(cat .env | xargs)` fails with:

export: not valid in this context: https://acrobatservices.adobe.com/dc-integration-creation-app-cdn/main.html?api
export: not valid in this context: PDF.js

I do not want to remove entries like `PDF.js` or `API_URL="https://acrobatservices.adobe.com/dc-integration-creation-app-cdn/main.html?api"` from the `.env` file.

taprosoft commented 16 hours ago

@Lee-Ju-Yeong in that case, please remove all empty lines and comments starting with `#` from the `.env` and try again.
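
If you prefer not to edit the file, one possible workaround (a sketch; it assumes comments start at the beginning of a line and values contain no spaces) is to filter those lines on the fly:

# skip comment lines and blank lines before exporting
export $(grep -v '^#' .env | grep -v '^$' | xargs)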

taprosoft commented 16 hours ago

@zzll22 for Ollama support with GraphRAG we are currently working on an official guide to make it easy and consistent. Meanwhile, you could refer to https://medium.com/@ysaurabh059/graphrag-local-setup-via-vllm-and-ollama-a-detailed-integration-guide-5d85f18f7fec for manual customization.
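
Until that guide lands, here is a rough sketch of what the environment settings might look like when pointing GraphRAG at Ollama's OpenAI-compatible endpoint; the `GRAPHRAG_API_BASE` variable name and the model choices are assumptions, not confirmed kotaemon configuration:

# untested sketch for a local Ollama setup
GRAPHRAG_API_KEY=ollama                       # Ollama accepts any non-empty key
GRAPHRAG_API_BASE=http://localhost:11434/v1   # Ollama's OpenAI-compatible API (assumed variable name)
GRAPHRAG_LLM_MODEL=llama3
GRAPHRAG_EMBEDDING_MODEL=nomic-embed-text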