neo4j-labs / llm-graph-builder

Neo4j graph construction from unstructured data using LLMs
https://neo4j.com/labs/genai-ecosystem/llm-graph-builder/
Apache License 2.0
2.28k stars 363 forks

Neo4j-desktop local db with Docker compose #824

Open paolo-tamagnini-tr opened 6 days ago

paolo-tamagnini-tr commented 6 days ago

Hi there, I read here that

If want to use Neo4j Desktop instead, you need to configure your NEO4J_URI=bolt://host.docker.internal to allow the Docker container to access the network running on your computer.

Is this up to date?

The README states instead:

If you are using Neo4j Desktop, you will not be able to use the docker-compose but will have to follow the separate deployment of backend and frontend section.

So which is it?

I tried the first one, but I cannot get it to connect.

Neo4j Desktop:

Screenshot 2024-10-23 at 17 20 02

Neo4j graph builder:

Screenshot 2024-10-23 at 17 20 42

The root .env I edited before running the Docker command:

# Mandatory
OPENAI_API_KEY=""
DIFFBOT_API_KEY=""

# Optional Backend
EMBEDDING_MODEL="all-MiniLM-L6-v2"
IS_EMBEDDING="true"
KNN_MIN_SCORE="0.94"
# Enable Gemini (default is False) | Can be False or True
GEMINI_ENABLED=False
# LLM_MODEL_CONFIG_ollama_llama3="llama3,http://host.docker.internal:11434"

# Enable Google Cloud logs (default is False) | Can be False or True
GCP_LOG_METRICS_ENABLED=False
NUMBER_OF_CHUNKS_TO_COMBINE=6
UPDATE_GRAPH_CHUNKS_PROCESSED=20
NEO4J_URI="bolt://host.docker.internal"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
LANGCHAIN_API_KEY=""
LANGCHAIN_PROJECT=""
LANGCHAIN_TRACING_V2="true"
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
GCS_FILE_CACHE=False
ENTITY_EMBEDDING=True

# Optional Frontend
VITE_BACKEND_API_URL="http://localhost:8000"
VITE_BLOOM_URL="https://workspace-preview.neo4j.io/workspace/explore?connectURL={CONNECT_URL}&search=Show+me+a+graph&featureGenAISuggestions=true&featureGenAISuggestionsInternal=true"
VITE_REACT_APP_SOURCES="local"
VITE_LLM_MODELS="diffbot,openai-gpt-3.5,openai-gpt-4o" # ",ollama_llama3"
VITE_ENV="DEV"
VITE_TIME_PER_PAGE=50
VITE_CHUNK_SIZE=5242880
VITE_GOOGLE_CLIENT_ID=""
VITE_CHAT_MODES=""
VITE_BATCH_SIZE=2

LLM_MODEL_CONFIG_azure_ai_gpt_35="gpt-xxxxx,xxxx,xxxxxx,xxxxx"
LLM_MODEL_CONFIG_azure_ai_gpt_4o="gpt-xxxxx,xxxx,xxxxxx,xxxxx"
LLM_MODELS="azure_ai_gpt_35,azure_ai_gpt_4o"
kartikpersistent commented 5 days ago

Hi @paolo-tamagnini-tr, did you include the port Neo4j is running on in the NEO4J_URI (or in the frontend)? You have to specify bolt://host.docker.internal:PORTNUMBER
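For reference, a minimal sketch of the fix in the root .env, assuming Neo4j Desktop is serving Bolt on its default port 7687 (check the connection details of your database in Desktop, as it may assign a different port):

```shell
# Append the Bolt port so the Docker container can reach the
# Neo4j Desktop instance running on the host machine.
NEO4J_URI="bolt://host.docker.internal:7687"
NEO4J_USERNAME="neo4j"
NEO4J_PASSWORD="password"
```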

paolo-tamagnini-tr commented 1 day ago

Thanks, that worked. I'm not sure whether I missed it in the docs or whether it was not mentioned.

I have another issue now: I cannot select an Azure model from the dropdown:

Screenshot 2024-10-28 at 12 10 17

These are the env vars I am trying at the moment:

# Optional Frontend
VITE_BACKEND_API_URL="http://localhost:8000"
VITE_BLOOM_URL="https://workspace-preview.neo4j.io/workspace/explore?connectURL={CONNECT_URL}&search=Show+me+a+graph&featureGenAISuggestions=true&featureGenAISuggestionsInternal=true"
VITE_REACT_APP_SOURCES="local"
VITE_LLM_MODELS="azure_ai_gpt_35,azure_ai_gpt_4o" # ",ollama_llama3"
VITE_ENV="DEV"
VITE_TIME_PER_PAGE=50
VITE_CHUNK_SIZE=5242880
VITE_GOOGLE_CLIENT_ID=""
VITE_CHAT_MODES=""
VITE_BATCH_SIZE=2

LLM_MODEL_CONFIG_azure_ai_gpt_35="gpt-xxxxx,xxxx,xxxxxx,xxxxx"
LLM_MODEL_CONFIG_azure_ai_gpt_4o="gpt-xxxxx,xxxx,xxxxxx,xxxxx"
LLM_MODELS="azure_ai_gpt_35,azure_ai_gpt_4o"

Can you spot anything I am missing?

kartikpersistent commented 1 day ago

There is nothing wrong in the env. We have changed the way the models are configured; you just need to set VITE_ENV="DEV" to try out all models. Please take the latest pull of main and refer to the README for configuration.

paolo-tamagnini-tr commented 1 day ago

The problem was not VITE_ENV but VITE_LLM_MODELS_PROD. It works now, thanks.
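For anyone hitting the same issue: on a recent main, the frontend also reads the list of selectable models from VITE_LLM_MODELS_PROD. A sketch of the relevant frontend env (the Azure model names are the ones from this thread; adapt them to your own deployments):

```shell
# Frontend model configuration (new scheme):
# VITE_ENV="DEV" enables trying out all models,
# VITE_LLM_MODELS_PROD lists the models shown in the dropdown.
VITE_ENV="DEV"
VITE_LLM_MODELS_PROD="azure_ai_gpt_35,azure_ai_gpt_4o"
```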

kartikpersistent commented 1 day ago

Glad that it worked. We changed the README for better clarity.