mrseanryan / gpt-docs-chat
Chat with local LLM about your PDF and text documents, privacy ensured [llamaindex and llama3]
MIT License · 1 star · 0 forks
Issues
#9 Add Summary indexing to the redis version (mrseanryan, open, 6 months ago, 1 comment)
#8 Add redis as vector store (mrseanryan, closed, 6 months ago, 0 comments)
#7 See paper-qa for inspiration (mrseanryan, open, 6 months ago, 0 comments)
#6 Try using Redis as store (with incremental indexing) (mrseanryan, closed, 6 months ago, 1 comment)
#5 Support *incremental* indexing (mrseanryan, open, 6 months ago, 1 comment)
#4 For performance: try a simpler mode (via config.py) that is less powerful but faster (mrseanryan, open, 6 months ago, 0 comments)
#3 For performance: try hosting the LLM on EC2, keeping docs and indexes local (mrseanryan, open, 6 months ago, 1 comment)
#2 Check ollama usage (WSL) (mrseanryan, closed, 6 months ago, 1 comment)
#1 Try directly on Windows (ollama preview) instead of WSL (mrseanryan, closed, 6 months ago, 1 comment)