Harvester62 opened 1 week ago
I am not a dev, so please be patient with me. About the console messages: after several different attempts to run a query, I noticed that the messages always stay the same, and I suspect the error might be due to the lack of a reranking solution, which I didn't set up since I don't have a Cohere account. Just guessing.
Would it be feasible to set a specific model (local/online) for re-ranking? What properties should such a model have to re-rank results properly? These questions may stem from my lack of knowledge about how re-ranking works, sorry.
@Harvester62 for GraphRAG to run, make sure that you have set these environment variables in Windows Terminal correctly
# settings for GraphRAG
GRAPHRAG_API_KEY=<YOUR_OPENAI_KEY>
GRAPHRAG_LLM_MODEL=gpt-4o-mini
GRAPHRAG_EMBEDDING_MODEL=text-embedding-3-small
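Note that `set` in cmd.exe only affects the current session; use `setx` (and open a new terminal) if you want the variables to persist. One generic way to double-check that the variables actually reach the Python process is a plain-Python sketch like the following (not kotaemon code; the variable names come from the settings above):

```python
import os

# Variable names from the GraphRAG settings above; verify they are
# visible to the process that launches Kotaemon.
required = ["GRAPHRAG_API_KEY", "GRAPHRAG_LLM_MODEL", "GRAPHRAG_EMBEDDING_MODEL"]
missing = [name for name in required if not os.environ.get(name)]

if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("All GraphRAG variables are set.")
```

Running this from the same terminal session that launches `run_windows.bat` shows whether the variables were exported there.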
Would it be feasible to set a specific model (local/online) for re-ranking? What properties should such a model have to re-rank results properly?
By default it should work without reranking. For more specific suggestions, please add the console log.
This is my current .env file content. I also ran SET OPENAI_API_KEY and SET GRAPHRAG_API_KEY commands inside a Windows console, just in case, but that apparently didn't fix the problem either. Here follow the console messages after restarting the application, then manually loading a single file in the UI and querying it.
Microsoft Windows [Version 10.0.19045.4957] (c) Microsoft Corporation. All rights reserved.
C:\Users\Riccardo>set OPENAI_API_KEY=sk-[MYKEY]
C:\Users\Riccardo>set GRAPHRAG_API_KEY=sk-[MYKEY]
C:\Users\Riccardo>cd \kotaemon-app
C:\kotaemon-app>.\scripts\run_windows.bat
Setting up Git
Git is installed at C:\kotaemon-app\install_dir\Git
git version 2.46.0.windows.1
Git is added to PATH for this session
Setting up Miniconda
Conda is installed at C:\kotaemon-app\install_dir\conda
Conda version: conda 24.7.1
Creating conda environment
Conda environment exists at C:\kotaemon-app\install_dir\env
Activate conda environment at C:\kotaemon-app\install_dir\env
Installing Kotaemon
Dependencies are already installed
Setting up a local model
Local model not found: llama3.1:8b
Downloading and extracting PDF.js
Directory already exists: C:\kotaemon-app\libs\ktem\ktem\assets\prebuilt\pdfjs-4.0.379-dist
Launching Kotaemon in your browser, please wait...
Starting Kotaemon UI... (prebuilt PDF.js is at C:\kotaemon-app\libs\ktem\ktem\assets\prebuilt\pdfjs-4.0.379-dist)
[nltk_data] Downloading package punkt_tab to C:\kotaemon-app\install_dir\env\lib\site-packages\llama_index\core\_static/nltk_cache...
[nltk_data] Package punkt_tab is already up-to-date!
GraphRAG dependencies not installed. GraphRAG retriever pipeline will not work properly.
User "admin" already exists
Setting up quick upload event
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
User-id: None, can see public conversations: False
User-id: 1, can see public conversations: True
Overriding with default loaders
use_quick_index_mode False
reader_mode default
Using reader TxtReader()
User-id: 1, can see public conversations: True
Session reasoning type simple
Session LLM openai
Reasoning class <class 'ktem.reasoning.simple.FullQAPipeline'>
Traceback (most recent call last):
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
output = await app.get_blocks().process_api(
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\blocks.py", line 1923, in process_api
result = await self.call_function(
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\blocks.py", line 1520, in call_function
prediction = await utils.async_iteration(iterator)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\utils.py", line 663, in async_iteration
return await iterator.__anext__()
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\utils.py", line 656, in __anext__
return await anyio.to_thread.run_sync(
File "C:\Users\Riccardo\AppData\Roaming\Python\Python310\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Riccardo\AppData\Roaming\Python\Python310\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Riccardo\AppData\Roaming\Python\Python310\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\utils.py", line 639, in run_sync_iterator_async
return next(iterator)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\utils.py", line 801, in gen_wrapper
response = next(iterator)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\ktem\pages\chat\__init__.py", line 857, in chat_fn
pipeline, reasoning_state = self.create_pipeline(
File "C:\kotaemon-app\install_dir\env\lib\site-packages\ktem\pages\chat\__init__.py", line 824, in create_pipeline
retrievers = index.get_retriever_pipelines(
File "C:\kotaemon-app\install_dir\env\lib\site-packages\ktem\index\file\index.py", line 423, in get_retriever_pipelines
obj = cls.get_pipeline(stripped_settings, self.config, selected_ids)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\ktem\index\file\pipelines.py", line 287, in get_pipeline
"reranking", reranking_models_manager.get_default_name()
File "C:\kotaemon-app\install_dir\env\lib\site-packages\ktem\rerankings\manager.py", line 108, in get_default_name
raise ValueError("No models in pool")
ValueError: No models in pool
Traceback (most recent call last):
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
output = await app.get_blocks().process_api(
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\blocks.py", line 1923, in process_api
result = await self.call_function(
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\blocks.py", line 1508, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "C:\Users\Riccardo\AppData\Roaming\Python\Python310\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Riccardo\AppData\Roaming\Python\Python310\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Riccardo\AppData\Roaming\Python\Python310\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\gradio\utils.py", line 818, in wrapper
response = f(*args, **kwargs)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\ktem\pages\chat\__init__.py", line 958, in check_and_suggest_name_conv
suggested_name = suggest_pipeline(chat_history).text
File "C:\kotaemon-app\install_dir\env\lib\site-packages\theflow\base.py", line 1097, in __call__
raise e from None
File "C:\kotaemon-app\install_dir\env\lib\site-packages\theflow\base.py", line 1088, in __call__
output = self.fl.exec(func, args, kwargs)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\theflow\backends\base.py", line 151, in exec
return run(*args, **kwargs)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\theflow\middleware.py", line 144, in __call__
raise e from None
File "C:\kotaemon-app\install_dir\env\lib\site-packages\theflow\middleware.py", line 141, in __call__
_output = self.next_call(*args, **kwargs)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\theflow\middleware.py", line 117, in __call__
return self.next_call(*args, **kwargs)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\theflow\base.py", line 1017, in _runx
return self.run(*args, **kwargs)
File "C:\kotaemon-app\install_dir\env\lib\site-packages\ktem\reasoning\prompt_optimization\suggest_conversation_name.py", line 32, in run
messages.append(AIMessage(content=ai))
File "C:\kotaemon-app\install_dir\env\lib\site-packages\kotaemon\base\schema.py", line 62, in __init__
super().__init__(*args, **kwargs)
TypeError: AIMessage.__init__() missing 1 required positional argument: 'content'
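The final TypeError can be reproduced in isolation. The class below is a hypothetical stand-in, not kotaemon's real AIMessage; it only illustrates that constructing a message object without its required content argument (e.g. from an empty chat history) raises exactly this kind of error:

```python
class AIMessage:
    """Stand-in for a message type that requires `content` (an assumption,
    not kotaemon's actual class)."""

    def __init__(self, content):
        self.content = content


try:
    AIMessage()  # mirrors the empty-chat-history case in the traceback
except TypeError as exc:
    print(exc)  # the message names the missing 'content' argument
```

This suggests the name-suggestion pipeline was handed an empty or malformed chat history rather than the message class itself being broken.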
That "No models in pool" error and the last line complaining about a missing content argument are suspicious. Which models? What content is missing?
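On the "No models in pool" part: the traceback suggests the reranking manager keeps a registry of configured reranking models and fails when that registry is empty. A minimal sketch of that pattern (names and structure are assumptions, not kotaemon's actual code):

```python
class RerankingManager:
    """Hypothetical model-pool manager mirroring the error in the log."""

    def __init__(self):
        self._models = {}  # name -> model instance; empty until configured

    def add(self, name, model):
        self._models[name] = model

    def get_default_name(self):
        if not self._models:
            # Matches the ValueError raised in the traceback above
            raise ValueError("No models in pool")
        return next(iter(self._models))


manager = RerankingManager()
try:
    manager.get_default_name()
except ValueError as exc:
    print(exc)  # -> No models in pool
```

If kotaemon works similarly, configuring any reranking model in the app's settings would populate the pool and let the default lookup succeed.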
Can you check the solution mentioned in https://github.com/Cinnamon/kotaemon/issues/394?
Description
I have just updated my local Kotaemon installation with update_windows.bat successfully (see attached console messages), and immediately afterwards I started run_windows.bat, all without errors. Inside the UI I then added a .txt document to GraphRAG, which was successfully indexed and embedded (no errors). But as soon as I select that doc in GraphRAG and search within it, I get an error.
I am attaching the entire console output from the very beginning: the initial update, then the document load and the search error. I hope this is useful for the devs. kotaemon-update-065-069_graphrag_error.txt
Reproduction steps
Screenshots
Logs
Browsers
Firefox
OS
Windows
Additional information
The problem is that even if I do not select any collection and just type "Hello", the error arises, and there is nothing I can do to prevent it. The models are all working fine (tested with the test function in the Kotaemon UI).