OpenSPG / KAG

KAG is a logical form-guided reasoning and retrieval framework based on OpenSPG engine and LLMs. It is used to build logical reasoning and factual Q&A solutions for professional domain knowledge bases. It can effectively overcome the shortcomings of the traditional RAG vector similarity calculation model.
https://spg.openkg.cn/en-US
Apache License 2.0

Query dialogue error #9

Closed: JV-X closed this issue 2 days ago

JV-X commented 3 weeks ago

After I started the KAG service in Docker, I uploaded a knowledge base file in txt format and started a conversation in a new query dialog. But no matter what question I asked, the answer returned was an error. How can I resolve this?

Full error output:

Execution failed
pemja.core.PythonException: <class 'ValueError'>: not enough values to unpack (expected 2, got 0)
    at /openspg_venv/lib/python3.8/site-packages/kag/solver/main_solver.invoke(main_solver.py:43)
    at /openspg_venv/lib/python3.8/site-packages/kag/solver/logic/solver_pipeline.run(solver_pipeline.py:54)
    at /openspg_venv/lib/python3.8/site-packages/kag/solver/implementation/default_reasoner.reason(default_reasoner.py:63)
    at /openspg_venv/lib/python3.8/site-packages/kag/solver/implementation/default_lf_planner.lf_planing(default_lf_planner.py:46)
    at pemja.core.PythonInterpreter.invokeMethod(Native Method)
    at pemja.core.PythonInterpreter.invokeMethod(PythonInterpreter.java:118)
    at com.antgroup.openspgapp.core.reasoner.service.impl.TaskRunner$NlQueryTask.call(TaskRunner.java:143)
    at com.antgroup.openspgapp.core.reasoner.service.impl.TaskRunner$NlQueryTask.call(TaskRunner.java:122)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
caszkgui commented 3 weeks ago

It seems that your LLM service is not reachable from inside the container. Can you post your LLM configuration? You can also use the following method to test connectivity to the LLM from inside the container:

curl https://<your-llm-service>/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <API Key>" \
  -d '{
        "model": "deepseek-chat",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Hello!"}
        ],
        "stream": false
      }'
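
If the check needs to run from inside the OpenSPG server container itself, one option (a sketch only, assuming the container is named release-openspg-server as mentioned later in this thread, and that curl is available in the image) is to wrap the same request in docker exec:

# run the same connectivity test from inside the OpenSPG server container
docker exec release-openspg-server \
  curl https://<your-llm-service>/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer <API Key>" \
    -d '{
          "model": "deepseek-chat",
          "messages": [{"role": "user", "content": "Hello!"}],
          "stream": false
        }'
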
JV-X commented 3 weeks ago

OK, following your suggestion I found that my LLM service had not been started. I started the LLM service according to section 1.2 of this document (https://openspg.yuque.com/ndx6g9/0.5/cfiaugez2n72g08k), and the local LLM service is now connected.

I run KAG from Docker according to the document. Where can I find the kag_config.cfg configuration file? The document does not seem to mention its location, and I did not find it in the WSL2 environment where I started the LLM service, nor in the release-openspg-server container.

I see several kag_config.cfg files in the source code, but none in Docker. Does that mean I have to install from source? But the documentation says there is no front end when installing from source code.

caszkgui commented 3 weeks ago

KAG provides two usage modes: product mode and developer mode.

  1. For product mode, please refer to: https://openspg.yuque.com/ndx6g9/wc9oyq/rgd8ecefccwd1ga5
  2. For developer mode, please refer to: https://openspg.yuque.com/ndx6g9/wc9oyq/owp4sxbdip2u7uvv

In developer mode, kag_config.cfg is a configuration file consisting of the LLM configuration and the vectorizer configuration, and it is submitted to the OpenSPG server. In product mode, the user can provide the LLM and vectorizer configurations through the web UI.
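
To make the developer-mode side of this concrete, a minimal sketch of what kag_config.cfg pairs together is shown below. This is illustrative only: the exact section and key names should be copied from the example kag_config.cfg files in the KAG source tree, and the client_type value depends on how your LLM is served.

# a minimal sketch, not the authoritative format -- copy the exact keys
# from the kag_config.cfg examples in the KAG repository
[llm]
client_type = maas
base_url = https://<your-llm-service>
api_key = <API Key>
model = deepseek-chat

[vectorizer]
# the embedding (vectorizer) service is configured alongside the LLM;
# see the repository examples for the exact keys it expects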

royzhao commented 3 weeks ago

Refer to this commit; I added some more information about calling the LLM: https://github.com/OpenSPG/KAG/pull/11/files

JV-X commented 3 weeks ago

Thanks for your reply.

JV-X commented 3 weeks ago

Thanks. It seems that I should use product mode. Can you provide an example of using a local Qwen model? I use a local Qwen model built according to this document (https://openspg.yuque.com/ndx6g9/0.5/cfiaugez2n72g08k); it seems there is no api_key or base_url. Can I just delete these two fields?

caszkgui commented 3 weeks ago

If you use a local Qwen model served by Ollama, your LLM configuration must contain the variables described in https://openspg.yuque.com/ndx6g9/wc9oyq/klyg6wdt3giqklzf#dHxtx, including client_type, base_url, and model.

Tips: 1. In product mode, the LLM configuration needs to be in JSON format. 2. Your local Qwen model service must be accessible from inside the OpenSPG container; you can refer to the user guide and test your LLM model's accessibility.
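
As an illustration only (the client_type value and the base_url form are assumptions here; take the exact field names and values from the user guide linked above), a product-mode LLM configuration for a local Qwen model served by Ollama might look roughly like:

{
  "client_type": "ollama",
  "base_url": "http://<host-running-ollama>:11434",
  "model": "qwen2"
}

Whatever values you use, base_url must resolve from inside the OpenSPG container, so a host LAN IP is usually safer than localhost or 127.0.0.1.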

JV-X commented 3 weeks ago

@caszkgui

The way I load the model is not Ollama but vLLM, and my local Qwen model service is accessible. In addition, I am still a little confused about kag_config.cfg. As you said, the user can provide the LLM and vectorizer configurations through the web UI. Does that mean I don't need to find the kag_config.cfg configuration file, and can instead write the JSON configuration directly in the interface below?

[screenshot of the configuration interface in the web UI]

> Does it mean that I don't need to find the kag_config.cfg configuration file, but can directly write the JSON configuration in the interface below?

Yes, you can write the JSON configuration directly in that interface.
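
For the vLLM case mentioned above, vLLM exposes an OpenAI-compatible endpoint (by default on port 8000 under /v1), so a sketch of what could be entered in that interface might look like the following. The client_type value, the model name, and whether api_key can simply be a dummy string are assumptions; confirm them against the user guide, and keep base_url reachable from inside the OpenSPG container.

{
  "client_type": "vllm",
  "base_url": "http://<host-running-vllm>:8000/v1",
  "api_key": "EMPTY",
  "model": "<your-qwen-model-name>"
}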