liltom-eth / llama2-webui
Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local llama2 backend for Generative Agents/Apps.
MIT License · 1.97k stars · 202 forks
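Since the project description pitches `llama2-wrapper` as a local Llama 2 backend for generative agents and apps, a minimal usage sketch follows. The class name `LLAMA2_WRAPPER`, the `get_prompt` helper, the constructor and generation arguments, and the model path are all assumptions about the package's interface made for illustration; consult the project README for the actual API.

```python
# Minimal sketch (assumed interface): llama2-wrapper as a local Llama 2 backend.
# LLAMA2_WRAPPER, get_prompt, and the arguments below are assumptions, not
# confirmed by this page; the model path is a placeholder.
from llama2_wrapper import LLAMA2_WRAPPER, get_prompt

llama2 = LLAMA2_WRAPPER(
    model_path="./models/llama-2-7b-chat.Q4_0.gguf",  # placeholder local model file
    backend_type="llama.cpp",                         # assumed: CPU inference via llama.cpp
)

# Wrap the user message in the Llama 2 chat prompt format, then generate.
prompt = get_prompt("Hi, do you know PyTorch?")
answer = llama2(prompt, temperature=0.9, max_new_tokens=512)
print(answer)
```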
Issues (newest first)
| # | Title | Author | Status | Comments |
|---|-------|--------|--------|----------|
| #88 | Change .env after using pip to install | NytePlus | opened 5 months ago | 0 |
| #87 | Supports accepting network requests, listening on specific ports and running GPTQ models on multiple GPUs | Arondight | opened 8 months ago | 0 |
| #86 | Update README.md | RISHIKREDDYL | closed 9 months ago | 1 |
| #85 | Gradio Memory Leak Issue | ruizcrp | opened 10 months ago | 0 |
| #84 | why i7 8700 is faster than i7 9700 | AndreaChiChengdu | opened 11 months ago | 0 |
| #83 | Very slow generation | jaslatendresse | opened 11 months ago | 1 |
| #82 | GPU CUDA not found And HFValidationError | HorrorBest | opened 11 months ago | 0 |
| #80 | safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge | HougeLangley | closed 11 months ago | 5 |
| #78 | multi gpu, llama2-70b | fo40225 | opened 12 months ago | 0 |
| #77 | dom.js:238 Uncaught (in promise) DOMException | kisseternity | opened 1 year ago | 0 |
| #76 | [FEATURE] added prompts templates. | BobCN2017 | closed 1 year ago | 2 |
| #75 | How to add llama_index in llama-webui | Kashif-Inam | opened 1 year ago | 0 |
| #74 | The temperature parameter does not seem to work | ibutenko | opened 1 year ago | 2 |
| #73 | [FEATURE] add support for GGUF models | liltom-eth | closed 1 year ago | 0 |
| #72 | GGML deprecated - support GGUF models? | agilebean | closed 1 year ago | 3 |
| #71 | model is not None | quanpinjie | closed 1 year ago | 8 |
| #70 | AssertionError self.model is not None | ebdavison | closed 1 year ago | 6 |
| #69 | chat too slow! | Hyingerrr | closed 1 year ago | 1 |
| #68 | How to run on GPU? Runs on CPU only | oaefou | closed 1 year ago | 1 |
| #67 | ERROR. How to fix ? | oaefou | closed 1 year ago | 4 |
| #66 | Unable to load 70B llama2 on cpu (llama cpp) | Dougie777 | opened 1 year ago | 1 |
| #65 | [DOCUMENT] update readme | liltom-eth | closed 1 year ago | 0 |
| #64 | [DOCUMENT] update readme | liltom-eth | closed 1 year ago | 0 |
| #63 | [FEATURE] Add code llama, add code completion | liltom-eth | closed 1 year ago | 0 |
| #62 | [Feature Request] Support InternLM | vansinhu | opened 1 year ago | 0 |
| #61 | [FEATURE] Add model downloading script | liltom-eth | closed 1 year ago | 0 |
| #60 | [FEATURE] update app.py with args, test codellama | liltom-eth | closed 1 year ago | 0 |
| #59 | OSError: [Errno 30] Read-only file system | realAbitbol | opened 1 year ago | 1 |
| #58 | [BUG] bug resolved on fast api | liltom-eth | closed 1 year ago | 0 |
| #57 | [BUILD] Create release.yml for PYPI | liltom-eth | closed 1 year ago | 0 |
| #56 | [FEATURE] add OpenAI compatible API | liltom-eth | closed 1 year ago | 0 |
| #55 | [BUILD] Create branch.yml for CI | liltom-eth | closed 1 year ago | 0 |
| #54 | Ignores new query and responds with crossed out details (from previous question). | THREELabs | closed 1 year ago | 2 |
| #53 | cannot run Llama-2-70b-hf | takitsuba | closed 1 year ago | 3 |
| #52 | [FEATURE] add streaming for call | liltom-eth | closed 1 year ago | 0 |
| #51 | Create read1 | nephp06 | closed 1 year ago | 0 |
| #50 | Cant seem to run it on GPU | rishabh-gurbani | closed 1 year ago | 5 |
| #49 | [FEATURE] add args for benchmark | liltom-eth | closed 1 year ago | 0 |
| #48 | When I was running app. py, I encountered some errors | Nerva05251228 | closed 1 year ago | 2 |
| #47 | [FEATURE] support for ctransformers | touchtop | closed 1 year ago | 1 |
| #46 | [BUILD] update poetry | liltom-eth | closed 1 year ago | 0 |
| #45 | [FEATURE] Update env | liltom-eth | closed 1 year ago | 0 |
| #44 | [FEATURE] update default model path | liltom-eth | closed 1 year ago | 0 |
| #43 | [BUILD] update poetry | liltom-eth | closed 1 year ago | 0 |
| #42 | [FEATURE] Update requirements | liltom-eth | closed 1 year ago | 0 |
| #41 | [FEATURE] add download model for default | liltom-eth | closed 1 year ago | 0 |
| #40 | [DOCUMENT] update readme, news, performance | liltom-eth | closed 1 year ago | 0 |
| #39 | [DOCUMENT] update reademe, poetry for llama2-wrapper | liltom-eth | closed 1 year ago | 0 |
| #38 | [FEATURE] Llama2 wrapper unify arguments, change initial method | liltom-eth | closed 1 year ago | 0 |
| #37 | ERROR: Failed building wheel for llama-cpp-python | qinshuaibo | opened 1 year ago | 2 |