What
Created a PR to set up KG_RAG on macOS successfully. Related to issue #35.
Changes
Added a .devcontainer directory at the repository root. It contains a Dockerfile and a postCreateCommand.sh shell script that set up the environment. Note: this requires Docker to be installed on the local machine first.
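For context, a minimal devcontainer.json that ties these two files together might look like the sketch below; the exact contents and options used in the PR are an assumption, not what it ships:

```json
{
  "name": "KG_RAG",
  "build": { "dockerfile": "Dockerfile" },
  "postCreateCommand": "bash .devcontainer/postCreateCommand.sh"
}
```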
Added a sample .gpt_config.env file for quick modification and setup by users.
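An illustrative shape for such an env file is shown below; the key names are hypothetical, and the actual sample file in the PR defines the real ones:

```env
# Hypothetical example only -- see the .gpt_config.env shipped in the PR
API_KEY=your-openai-api-key
```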
Commented out certain libraries in the requirements.txt file that produced errors during installation, as described in issue #35. Since I could not find any direct usage of these libraries, commenting them out allowed the dependencies to install successfully, and I then tested KG_RAG following the instructions in the README file.
Changes to kg_rag/run_setup.py
-- Create LLM_CACHE_DIR if it does not exist
-- Raise a ValueError if an exception is encountered while downloading the llama model
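The two run_setup.py changes above could be sketched roughly as follows; the names LLM_CACHE_DIR, ensure_cache_dir, and download_llama_model are assumptions for illustration, not the repo's actual API:

```python
import os

# Assumed cache location -- the real path comes from the repo's config.
LLM_CACHE_DIR = os.path.join(os.path.expanduser("~"), ".llm_cache")


def ensure_cache_dir(path=LLM_CACHE_DIR):
    """Create the cache directory if it does not exist."""
    os.makedirs(path, exist_ok=True)
    return path


def download_llama_model(download_fn):
    """Run the given download callable, surfacing any failure as a ValueError."""
    try:
        return download_fn()
    except Exception as exc:
        raise ValueError(f"Failed to download llama model: {exc}") from exc
```

Raising a ValueError here makes a failed download fail loudly during setup instead of leaving a half-initialized cache directory behind.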
Changes to kg_rag/utilities.py
-- Modified the parameters passed to get_GPT_response when using interactive mode
Updated the .gitignore file to ignore the LLM_CACHE_DIR directory and __pycache__ files
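The .gitignore additions would look something like the fragment below, assuming the cache directory appears under that literal name (the actual path in the PR may differ):

```gitignore
# Illustrative entries only
LLM_CACHE_DIR/
__pycache__/
```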