tensorchord / modelz-llm
OpenAI compatible API for LLMs and embeddings (LLaMA, Vicuna, ChatGLM and many others)
https://modelz.ai
Apache License 2.0 · 265 stars · 26 forks
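The repository describes itself as an OpenAI-compatible API for LLMs and embeddings. As a rough illustration only (not taken from the repo's docs), a client would build a standard OpenAI-style chat completion request against a locally running server; the base URL, port, and model name below are assumptions for the sketch.

```python
import json
import urllib.request

# Assumed address of a locally running modelz-llm instance.
BASE_URL = "http://localhost:8000"

# Standard OpenAI-style chat completion payload; the model name
# "vicuna-7b" is hypothetical and depends on what the server loads.
payload = {
    "model": "vicuna-7b",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Build (but do not send) the request; urllib.request.urlopen(req)
# would perform the actual call against a running server.
req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

print(req.full_url)
print(req.get_method())
```

Because the endpoint shape matches OpenAI's `/v1/chat/completions`, existing OpenAI client libraries can usually be pointed at such a server by overriding their base URL.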
Issues
#104 · Can't bypass OpenAI urls · mahavatara · opened 5 months ago · 3 comments
#102 · Local custom .gguf modals supported? · sengiv · closed 1 year ago · 2 comments
#101 · Tokenizer class LLaMATokenizer does not exist or is not currently imported. · ken-xu-e · opened 1 year ago · 1 comment
#100 · chore: add license file · kemingy · closed 1 year ago · 0 comments
#99 · Missing a LICENSE · loleg · closed 1 year ago · 1 comment
#98 · Cant use it on Windows · yogeshhk · closed 1 year ago · 3 comments
#97 · bug: Failed to generate outputs · gaocegege · opened 1 year ago · 2 comments
#96 · Function calling feature · willswordh · opened 1 year ago · 2 comments
#95 · chore: fix llama-2 image name · kemingy · closed 1 year ago · 0 comments
#94 · fix ci cancellation, ignore chatglm int4 download · kemingy · closed 1 year ago · 0 comments
#93 · feat: support llama2 · kemingy · closed 1 year ago · 0 comments
#92 · fix: bloomz cpu req · kemingy · closed 1 year ago · 0 comments
#91 · fix: bloomz dockerfile env path · kemingy · closed 1 year ago · 0 comments
#90 · add llama-2 · antonkulaga · closed 1 year ago · 1 comment
#89 · feat: use cpu for bloomz-560m · kemingy · closed 1 year ago · 0 comments
#88 · do we support vicuna 13b, chatglm2 ? · timiil · opened 1 year ago · 1 comment
#87 · feat: provide instructions on how community members can wrap models for this project · PaulConyngham · opened 1 year ago · 1 comment
#86 · bug: can only concatenate str (not "int") to str · gaocegege · closed 1 year ago · 1 comment
#85 · fix: chat completion response schema · kemingy · closed 1 year ago · 0 comments
#84 · feat: Add moderation API · gaocegege · opened 1 year ago · 0 comments
#83 · bug: Completion request returns wrong response · gaocegege · closed 1 year ago · 0 comments
#82 · chore: Fix vicuna 7b · gaocegege · opened 1 year ago · 0 comments
#81 · feat: detect the subprocess exitcde · kemingy · closed 1 year ago · 0 comments
#80 · feat: impl a mini mosec to work with model in subprocess · kemingy · closed 1 year ago · 0 comments
#79 · feat: support chatgpt web · dgqyushen · opened 1 year ago · 1 comment
#78 · chore: Update · gaocegege · closed 1 year ago · 0 comments
#77 · feat: Support falcon 7b · gaocegege · opened 1 year ago · 0 comments
#76 · chore: Fix svg · gaocegege · closed 1 year ago · 0 comments
#75 · chore: Update readme · gaocegege · closed 1 year ago · 0 comments
#74 · feat: disable access log · kemingy · closed 1 year ago · 0 comments
#73 · bug: Vicuna performance is not great · gaocegege · closed 1 year ago · 2 comments
#72 · chore: Lock protobuf version · gaocegege · closed 1 year ago · 0 comments
#71 · chore: Add proto · gaocegege · closed 1 year ago · 0 comments
#70 · chore: Use latest transformers · gaocegege · closed 1 year ago · 0 comments
#69 · bug: Unexpected OOM in ChatGLM 6B · gaocegege · closed 1 year ago · 2 comments
#68 · chore: Add bloomz · gaocegege · closed 1 year ago · 0 comments
#67 · chore: Dry run embedding · gaocegege · closed 1 year ago · 0 comments
#66 · chore: Add cmd · gaocegege · closed 1 year ago · 0 comments
#65 · chore: Release · gaocegege · closed 1 year ago · 0 comments
#64 · fix: chatglm needs to convert the layer · kemingy · closed 1 year ago · 0 comments
#63 · bug: AttributeError: 'ChatGLMForConditionalGeneration' object has no attribute 'encoder' · arugal · closed 1 year ago · 4 comments
#62 · bug: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' in chatglm int4 · gaocegege · closed 1 year ago · 2 comments
#61 · chore: Fix glm · gaocegege · closed 1 year ago · 0 comments
#60 · bug: RuntimeError: Only Tensors of floating point and complex dtype can require gradients · gaocegege · closed 1 year ago · 0 comments
#59 · chore: Add dockerfile · gaocegege · closed 1 year ago · 0 comments
#58 · feat: Remove docker · gaocegege · closed 1 year ago · 0 comments
#57 · chore: Add maximize build space · gaocegege · closed 1 year ago · 0 comments
#56 · chore: Enbale dry run · gaocegege · closed 1 year ago · 0 comments
#55 · bug: Install cuda again in the image · gaocegege · opened 1 year ago · 2 comments
#54 · feat: Use huggingface hub to dry run · gaocegege · closed 1 year ago · 0 comments