amscotti / local-LLM-with-RAG
Running local Large Language Models (LLMs) to perform Retrieval-Augmented Generation (RAG)
MIT License · 172 stars · 28 forks
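For context on what the project does: the issue titles below point at the stack in play (Ollama embeddings in #4, a Chroma vector store in #1). What follows is a minimal sketch of that local RAG flow, assuming the ollama and chromadb Python packages with locally pulled models; the model names (nomic-embed-text, mistral) and sample documents are illustrative, not taken from the repository's code.

    # Minimal local RAG sketch: Ollama for embeddings and generation, Chroma as the vector store.
    # Assumes `pip install ollama chromadb` and that `ollama pull nomic-embed-text`
    # and `ollama pull mistral` have been run. Model names are illustrative.
    import ollama
    import chromadb

    documents = [
        "Ollama runs large language models locally behind a simple HTTP API.",
        "Chroma is an embedding database used as the vector store for retrieval.",
    ]

    client = chromadb.Client()  # in-memory instance; enough for a demo
    collection = client.create_collection(name="docs")

    # Index: embed each document locally and store it in Chroma.
    for i, doc in enumerate(documents):
        emb = ollama.embeddings(model="nomic-embed-text", prompt=doc)["embedding"]
        collection.add(ids=[str(i)], embeddings=[emb], documents=[doc])

    # Retrieve: embed the question and fetch the closest document(s).
    question = "What does Ollama do?"
    q_emb = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
    hits = collection.query(query_embeddings=[q_emb], n_results=1)
    context = "\n".join(hits["documents"][0])

    # Generate: answer the question grounded in the retrieved context.
    answer = ollama.chat(
        model="mistral",
        messages=[{"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"}],
    )
    print(answer["message"]["content"])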
source link
Issues (newest first)
#12 · Get the context data · carinnanunest · opened 2 months ago · 1 comment
#11 · . · vishesh99 · closed 4 months ago · 0 comments
#10 · I asked a question in Chinese and the answer was English · weirdo2310 · closed 5 months ago · 5 comments
#9 · Add Streamlit UI · amscotti · closed 5 months ago · 0 comments
#8 · Hi, just curious about the responses from various models · LittleMonster104 · closed 5 months ago · 7 comments
#7 · PDF info not use by the LLM · Braindamage010063 · closed 7 months ago · 3 comments
#6 · Error when running app.py · An0nym0us30 · closed 8 months ago · 3 comments
#5 · ValueError: Error raised by inference API HTTP code: 500, {"error":"error loading model · holytony · closed 8 months ago · 2 comments
#4 · Using Ollama for embeddings · amscotti · closed 8 months ago · 0 comments
#3 · [discussion] Nice project · XenocodeRCE · closed 8 months ago · 5 comments
#2 · Using smaller models for faster speed · amscotti · closed 8 months ago · 0 comments
#1 · loading documents in to Chroma: KILLED · alanesq · closed 12 months ago · 4 comments