severian42 / GraphRAG-Local-UI

GraphRAG using Local LLMs - Features robust API and multiple apps for Indexing/Prompt Tuning/Query/Chat/Visualizing/Etc. This is meant to be the ultimate GraphRAG/KG local LLM app.

Facing issues in running gradio app.py file #58

Open · ahsanaliawan opened 1 month ago

ahsanaliawan commented 1 month ago
[Screenshot: error output, 2024-07-25 12:36 PM]

I am facing issues running the Gradio app.py file. The first error is "no module named plotly", but after I install plotly with pip in my conda environment, I run into multiple further errors, shown in the screenshot.

severian42 commented 1 month ago

Hey! Thanks for catching this! The plotly dependency accidentally dropped out of the requirements, sorry about that. The requirements file has been updated.

For the other error, the .env file just needs those dummy-key variables placed back in there. Here is the corrected version that should allow the app to run:

LLM_PROVIDER=ollama  # Can be 'ollama' or 'openai' (only a variable in the .env file, not in the settings.yaml)
API_URL=http://localhost:8012
LLM_API_BASE=http://localhost:11434/v1
LLM_API_KEY=12345

EMBEDDINGS_PROVIDER=ollama  # Can be 'ollama' or 'openai' (only a variable in the .env file, not in the settings.yaml)
EMBEDDINGS_API_BASE=http://localhost:11434/api
LLM_MODEL=mistral-nemo:12b-instruct-2407-fp16
EMBEDDINGS_MODEL=nomic-embed-text:latest
EMBEDDINGS_API_KEY=12345

GRAPHRAG_API_KEY=12345
ROOT_DIR=indexing
INPUT_DIR=${ROOT_DIR}/output/${timestamp}/artifacts
API_PORT=8012
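
If you want to sanity-check that these variables are being picked up, here's a minimal sketch (assuming the standard python-dotenv package; resolve_input_dir is just an illustrative helper, not a function from this repo):

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory

root_dir = os.getenv("ROOT_DIR", "indexing")

def resolve_input_dir(timestamp: str) -> str:
    # ${timestamp} in INPUT_DIR is filled in at runtime with the name
    # of a concrete indexing run's output folder
    return os.path.join(root_dir, "output", timestamp, "artifacts")

print(os.getenv("LLM_API_BASE"))           # -> http://localhost:11434/v1
print(resolve_input_dir("20240725-1200"))  # -> indexing/output/20240725-1200/artifacts
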
ahsanaliawan commented 1 month ago

Thanks for addressing the problem, but I am still unable to run the application. I'm also confused about which step to run after installing requirements.txt. After installing the requirements, the errors I'm facing are as follows:

  1. Error Running index_app.py:

When I execute the command python index_app.py, I encounter the following errors:

[Screenshots: error tracebacks, 2024-07-25 3:51 PM and 3:52 PM]
  2. Error in Web Interface:

Although the URL http://127.0.0.1:7861 is accessible, I face an issue when selecting "LLM" and "Embeddings" and then clicking "Save Configurations." The following error is displayed:

[Screenshot: configuration error, 2024-07-25 3:53 PM]
  3. Issue Running gradio app.py: When I attempt to run the command gradio app.py, I encounter the same errors as before: [Screenshot: error traceback, 2024-07-25 3:58 PM]

I would be grateful for any guidance on how to address these issues.

Thank you for your assistance.

severian42 commented 1 month ago

Sorry you're still having issues. After you install the requirements, each app takes a different launch approach. index_app.py is meant to be launched alongside the api.py server (there are an API_README and an INDEX_APP_README in the repo with more in-depth guides). The legacy app.py is standalone but needs all the env variables to launch.
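
With the requirements installed, starting the two-app setup looks something like this, in two separate terminals:

python api.py        # terminal 1: the API server (listens on API_PORT)
python index_app.py  # terminal 2: the indexing UI, which talks to the API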

For the first one: it looks like the app isn't reading your root dir properly. I have made the root dir variable more dynamic, so you will need to adjust it to match the folder you actually initialized for indexing. If you are using the one from this repo, change ROOT_DIR to ragtest. This is meant to allow more flexibility in which folder you index.
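
For example, if your initialized folder is the ragtest one, the relevant .env lines become:

ROOT_DIR=ragtest
INPUT_DIR=${ROOT_DIR}/output/${timestamp}/artifacts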

The second issue with the models looks like a small bug that I just fixed in index_app.py (the update is in the repo). It should work just fine now!

The third one looks like the EMBEDDINGS_API env variables are missing from your .env. The key is really just a dummy value that we pass through, but it is still needed:

EMBEDDINGS_PROVIDER=ollama  # Can be 'ollama' or 'openai' (only a variable in the .env file, not in the settings.yaml)
EMBEDDINGS_API_BASE=http://localhost:11434/api
LLM_MODEL=mistral-nemo:12b-instruct-2407-fp16
EMBEDDINGS_MODEL=nomic-embed-text:latest
EMBEDDINGS_API_KEY=12345
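
To verify the embeddings endpoint itself is reachable, you can hit Ollama's native API directly. A quick sketch with requests (assumes you have already pulled nomic-embed-text):

import requests

# EMBEDDINGS_API_BASE above points at Ollama's native /api routes
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text:latest", "prompt": "hello world"},
    timeout=30,
)
resp.raise_for_status()
print(len(resp.json()["embedding"]))  # nomic-embed-text returns 768-dim vectors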
severian42 commented 1 month ago

I just updated the Ollama embedding setup to avoid the common 'columns with wrong values' issue. There is an updated section in the main README and also two new files that launch a proxy. My three tests on it worked, so I hope it works for you as well! If not, you may want to consider using LM Studio until Ollama adjusts their API.
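
For anyone curious what the proxy approach boils down to: it accepts OpenAI-style /v1/embeddings requests and re-wraps Ollama's native responses, so downstream code never sees mismatched column shapes. This is just an illustrative sketch of the idea (Flask, with an assumed port), not the actual files from the repo:

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's native endpoint

@app.post("/v1/embeddings")
def embeddings():
    body = request.get_json()
    # OpenAI-style clients send "input" as a string or a list of strings;
    # Ollama's endpoint expects a single "prompt" per call
    inputs = body["input"] if isinstance(body["input"], list) else [body["input"]]
    data = []
    for i, text in enumerate(inputs):
        r = requests.post(OLLAMA_URL, json={"model": body["model"], "prompt": text})
        r.raise_for_status()
        data.append({"object": "embedding", "index": i, "embedding": r.json()["embedding"]})
    # Re-wrap in the OpenAI response shape the indexing client expects
    return jsonify({"object": "list", "data": data, "model": body["model"]})

if __name__ == "__main__":
    app.run(port=11435)  # assumed proxy port; point EMBEDDINGS_API_BASE at it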