amaiya/onprem — A tool for running on-premises large language models with non-public data
https://amaiya.github.io/onprem
Apache License 2.0 · 679 stars · 32 forks
Issues
| # | Title | Author | State | Closed/opened | Comments |
|---|---|---|---|---|---|
| #72 | Document Ingester.ingest() better | dwvisser | closed | 3 weeks ago | 1 |
| #71 | Removed deprecated Chromadb.persist() call | dwvisser | closed | 1 month ago | 2 |
| #70 | Segment needs to accept arguments in extractor pipeline | mzientek | closed | 2 months ago | 1 |
| #69 | Add clean function to Extractor.apply | amaiya | closed | 2 months ago | 0 |
| #68 | Remove call to persist | amaiya | closed | 2 months ago | 0 |
| #67 | Remove BOS token from default prompt | amaiya | closed | 2 months ago | 0 |
| #66 | Few-Shot classification pipeline | amaiya | closed | 2 months ago | 0 |
| #65 | Change default model to Mistral | amaiya | closed | 2 months ago | 0 |
| #64 | Information extraction pipeline | amaiya | closed | 2 months ago | 0 |
| #63 | Experimental support for Azure OpenAI | amaiya | closed | 2 months ago | 0 |
| #62 | Allow installation of onprem without llama-cpp-python for easier use with LLMs served through REST APIs | amaiya | closed | 2 months ago | 0 |
| #61 | Use OnPrem.LLM with OpenAI-compatible REST APIs | amaiya | closed | 2 months ago | 0 |
| #60 | Issue warning instead of halting if an error is encountered when loading files during ingest | amaiya | closed | 2 months ago | 0 |
| #59 | Add `Ingester.get_ingested_files` to show files ingested in vector database | amaiya | closed | 2 months ago | 0 |
| #58 | Add ignore_fn argument to LLM.ingest | amaiya | closed | 2 months ago | 0 |
| #57 | Curious | agentsimon | closed | 5 months ago | 2 |
| #56 | SSL issue when fetching embedding model files for local usage of onprem | ringo70 | closed | 6 months ago | 1 |
| #55 | Support OpenAI models | amaiya | closed | 6 months ago | 0 |
| #54 | Accept extra `**kwargs` in prompt and pass to model | amaiya | closed | 6 months ago | 0 |
| #53 | Add `stop` parameter to `LLM.prompt` method | amaiya | closed | 6 months ago | 0 |
| #52 | Use Zephyr-7B as default model in Web app | amaiya | closed | 6 months ago | 0 |
| #51 | Add prompt_template argument to LLM | amaiya | closed | 7 months ago | 1 |
| #50 | offload_kqv not properly set | amaiya | closed | 7 months ago | 1 |
| #49 | Add check for partially downloaded files | amaiya | closed | 2 months ago | 0 |
| #48 | CPU limits | lystrata | closed | 8 months ago | 1 |
| #47 | Add prompt template from YAML to ask in Web app | amaiya | closed | 8 months ago | 0 |
| #46 | Add progress bar for ingest | amaiya | closed | 8 months ago | 0 |
| #45 | Have ingest skip `~$` files created by Windows | amaiya | closed | 8 months ago | 0 |
| #44 | Add python-pptx as dependency | amaiya | closed | 8 months ago | 0 |
| #43 | "No module named 'docx'" error | amaiya | closed | 8 months ago | 0 |
| #42 | WSL 2 and Docker instructions | dwvisser | closed | 3 months ago | 19 |
| #41 | Core dumped / segmentation fault | lysa324 | closed | 8 months ago | 2 |
| #40 | Support for Mistral 7B model | rabilrbl | closed | 9 months ago | 2 |
| #39 | Support for LlamaIndex | nawagner | closed | 8 months ago | 2 |
| #38 | Number of tokens | lysa324 | closed | 9 months ago | 3 |
| #37 | ValidationError: 1 validation error for LlamaCpp | Jacob-Langley | closed | 10 months ago | 3 |
| #36 | Support for custom metadata in vectorstore | amaiya | open | 10 months ago | 0 |
| #35 | Add pipeline module | amaiya | closed | 7 months ago | 0 |
| #34 | Add guardrails module | amaiya | closed | 8 months ago | 0 |
| #33 | Remove pin for `llama-cpp-python` so latest is used | amaiya | closed | 10 months ago | 0 |
| #32 | Include `prompt_template` in YAML for Web app | amaiya | closed | 10 months ago | 0 |
| #31 | Change `LLM.ask` to return dictionary with keys `answer`, `source_documents`, and `question` | amaiya | closed | 10 months ago | 0 |
| #30 | Load LLM in constructor | amaiya | closed | 10 months ago | 0 |
| #29 | Round scores in Web app | amaiya | closed | 10 months ago | 0 |
| #28 | Include hyperlinks to sources | amaiya | closed | 10 months ago | 0 |
| #27 | GGUF support | amaiya | closed | 10 months ago | 0 |
| #26 | Add support for `score_threshold` in `LLM.ask` and `LLM.chat` | amaiya | closed | 10 months ago | 0 |
| #25 | Talk to Your Documents with Retrieval-Augmented Generation not working | Velcin | closed | 10 months ago | 3 |
| #24 | Use `CallbackManager` | amaiya | closed | 10 months ago | 0 |
| #23 | Return `source_documents` used to generate answer in `LLM.chat` | amaiya | closed | 10 months ago | 0 |
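Several of these issues concern filtering files during ingestion: #58 adds an `ignore_fn` argument to `LLM.ingest`, and #45 has ingest skip the `~$` lock files that Windows Office creates alongside open documents. A minimal pure-Python sketch of how such a hook could work — the helper names below are hypothetical illustrations, not OnPrem's actual API:

```python
from pathlib import Path


def skip_office_lock_files(path: str) -> bool:
    """Return True if the file should be ignored during ingestion.

    Hypothetical example filter in the spirit of issue #45: skip
    Windows Office lock files, whose names start with '~$'.
    """
    return Path(path).name.startswith("~$")


def collect_ingestable(paths, ignore_fn=skip_office_lock_files):
    """Filter candidate file paths through an `ignore_fn`-style hook
    (cf. issue #58); only paths the hook does not reject survive."""
    return [p for p in paths if not ignore_fn(p)]


files = ["report.docx", "~$report.docx", "notes.pdf"]
print(collect_ingestable(files))  # ['report.docx', 'notes.pdf']
```

Accepting the predicate as a parameter means callers can swap in their own rules (e.g. also skipping hidden files or unsupported extensions) without changing the ingestion loop itself.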