abgulati / LARS
An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses.
https://www.youtube.com/watch?v=Mam1i86n8sU&ab_channel=AbheekGulati
GNU Affero General Public License v3.0
449 stars · 30 forks
Issues
#23 · PyTorch memory usage balloons with each subsequent inference or query — synchronic1, opened 3 days ago · 2 comments
#22 · Typo in requirements.txt — synchronic1, closed 6 days ago · 9 comments
#21 · There was an error when loading the LLM in the method — manub14, closed 1 day ago · 45 comments
#19 · Failed to start llama.cpp local-server persists after saving settings — yvileapsis, closed 2 weeks ago · 4 comments
#17 · Feature Requests: Use llamafile / OpenAI compatible / API? — quantumalchemy, closed 1 month ago · 1 comment
#16 · Docker build fails — ruze00, closed 1 month ago · 6 comments
#15 · Fails on M2 Mac — ruze00, closed 1 month ago · 3 comments
#12 · [Feature Request]: Change LARS behaviour to reset document context on New Chat — ShaswatPanda, closed 2 months ago · 1 comment
#9 · Idea: embeddings should be generated using llama.cpp — daboe01, closed 2 months ago · 3 comments
#8 · requirements.txt is UTF-16LE encoding — jabberjabberjabber, closed 2 months ago · 1 comment
#7 · Error with CUDA Dockerfile, i.e. dockerized_nvidia_cuda_gpu/Dockerfile — AjayVarmaK, closed 2 months ago · 7 comments
#6 · Only loading PDFs — Hisma, closed 2 months ago · 5 comments
#5 · Fedora 39: requirements installation fails on pywin32 — emulated24, closed 2 months ago · 6 comments
#4 · Converts txt files to pdf and back to text again — Pomyk, closed 2 months ago · 1 comment
#2 · Hybrid search support — Jhyrachy, closed 2 months ago · 1 comment
#1 · Issue building llama.cpp with CMake — diegomontania, closed 2 months ago · 15 comments