keldenl / gpt-llama.cpp
A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI (a minimal client sketch follows the repository stats below).
MIT License · 594 stars · 67 forks
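Because gpt-llama.cpp mirrors OpenAI's REST surface, an existing OpenAI-style client can usually be repointed at it by changing only the base URL and the token. The sketch below is a minimal illustration in Node.js (the language the project itself is written in), not code taken from the repository: the port, the model path, and the convention of passing the llama.cpp model path as the bearer token are assumptions inferred from issues #19, #53, and #64 in the list that follows.

```js
// Minimal sketch (not from the repo): call a locally running gpt-llama.cpp
// server through its OpenAI-style chat completions route using Node 18+'s
// built-in fetch. Assumptions, inferred from the issue titles below rather
// than from the project docs:
//   - the server listens on port 443 by default (issue #19 asks how to change it)
//   - the bearer token carries the path to a local llama.cpp model file
//     (issues #53 and #64 hint at this convention)
const BASE_URL = 'http://localhost:443';                      // assumed default port
const MODEL_PATH = '../llama.cpp/models/ggml-vicuna-7b.bin';  // hypothetical model path

async function main() {
  const response = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${MODEL_PATH}`,                  // assumed: model path in place of an API key
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',                                 // placeholder; issue #53 asks how this interacts with the token
      messages: [{ role: 'user', content: 'Hello from a local llama.cpp model!' }],
    }),
  });

  const data = await response.json();
  console.log(data.choices?.[0]?.message?.content ?? data);
}

main().catch(console.error);
```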
Issues
#65 · How to create a single binary · arita37 · opened 12 months ago · 0 comments
#64 · node:events:491 throw er; // Unhandled 'error' event Error: spawn YOUR_KEY=../llama.cpp/main ENOENT · Jaykef · closed 1 year ago · 0 comments
#63 · Module not found: Package path ./lite/tiktoken_bg.wasm?module is not exported from package · Jaykef · closed 1 year ago · 1 comment
#62 · llama.cpp unresponsive for 20 seconds · JasonS05 · opened 1 year ago · 3 comments
#61 · gguf supported? · hiqsociety · opened 1 year ago · 1 comment
#60 · Change listening ip to public ip? · Dougie777 · closed 1 year ago · 1 comment
#58 · "Internal Server Error" on a remote server · brinrbc · opened 1 year ago · 0 comments
#57 · Finding last messages? · msj121 · opened 1 year ago · 0 comments
#56 · Every Other Chat Response · msj121 · opened 1 year ago · 1 comment
#55 · Update WizardLM.js · msj121 · opened 1 year ago · 0 comments
#54 · Why is a default chat being forced? · msj121 · opened 1 year ago · 0 comments
#53 · Bearer Token vs Model parameter? · msj121 · opened 1 year ago · 0 comments
#52 · Cannot POST /V1/embeddings · Terramoto · opened 1 year ago · 1 comment
#51 · SERVER BUSY, REQUEST QUEUED · CyberRide · opened 1 year ago · 0 comments
#50 · Error: spawn ..\llama.cpp\main ENOENT at ChildProcess._handle.onexit · lzbeefnoodle · closed 1 year ago · 1 comment
#49 · no response message with Readable Stream: CLOSED · lzbeefnoodle · closed 1 year ago · 2 comments
#48 · Are there different specific instructions for running Red Pajama? · Bloob-beep · opened 1 year ago · 0 comments
#47 · "Add GPU layer offload option. defaults.js" · jnchman · closed 1 year ago · 1 comment
#46 · llama.cpp GPU support · alexl83 · opened 1 year ago · 1 comment
#45 · Slow speed Vicuna - 7B Help plz · C0deXG · opened 1 year ago · 3 comments
#44 · npm error on gpt-llama.cpp · C0deXG · opened 1 year ago · 4 comments
#43 · TypeError: Window.fetch: HEAD or GET Request cannot have a body. · gooseillo · closed 1 year ago · 1 comment
#42 · ERR_MODULE_NOT_FOUND · ZERO-A-ONE · closed 1 year ago · 3 comments
#41 · stuck · C0deXG · opened 1 year ago · 2 comments
#40 · could we have git tags? · jpetrucciani · opened 1 year ago · 1 comment
#39 · Fix various issues related to OpenAI API spec · eiriklv · closed 1 year ago · 0 comments
#38 · weird headers error in chatcompletion mode · OracleToes · opened 1 year ago · 1 comment
#36 · run with llama_index · shengkaixuan · opened 1 year ago · 2 comments
#35 · Running in instruct mode and model file in a different directory · regstuff · opened 1 year ago · 5 comments
#34 · Duplication of capabilities? · das-sein · closed 1 year ago · 1 comment
#33 · issue with chatbot-ui · gsgoldma · closed 1 year ago · 0 comments
#32 · Unable to run test-installation.sh in ubuntu · BenjiKCF · closed 1 year ago · 5 comments
#31 · Add dockerfile · yarray · opened 1 year ago · 2 comments
#30 · Add support for ChatGPT-Discord-Bot · keldenl · opened 1 year ago · 1 comment
#28 · trouble generating a response · gsgoldma · closed 1 year ago · 2 comments
#27 · Using GPT-Llama as the api for SGPT returns a JSON error · idontneedonetho · opened 1 year ago · 1 comment
#26 · embeddingsRoutes fix, usage counts added to dataToEmbeddingResponse · cryptocake · closed 1 year ago · 0 comments
#25 · stderr output to console, OpenAI quirks · swg · closed 1 year ago · 0 comments
#24 · following instructions, get this error · gsgoldma · closed 1 year ago · 0 comments
#23 · Add support for AgentGPT · alexl83 · opened 1 year ago · 1 comment
#22 · Add "--mlock" for M1 mac, on routes/chatRoutes.js · m0chael · closed 1 year ago · 1 comment
#21 · Spelling · adampaigge · closed 1 year ago · 0 comments
#20 · Issue: Why does (windows cmd) env variable setting work for some but not others? · keldenl · closed 1 year ago · 2 comments
#19 · How to change the port from 443 to 8000? I am trying to run the setup on my Linux server. · satcit-me · closed 1 year ago · 5 comments
#18 · Cannot GET / · intulint · closed 1 year ago · 3 comments
#17 · had up to 7 --reverse-prompt in the prompt. This is a fix for that · th-neu · closed 1 year ago · 2 comments
#16 · SSL option, .env file settings and custom llama path · cryptocake · closed 1 year ago · 1 comment
#15 · Windows Batch and Powershell Test Installation files added · th-neu · closed 1 year ago · 3 comments
#14 · [ERR_MODULE_NOT_FOUND]: Cannot find module node_modules/fs/promises · B1gM8c · closed 1 year ago · 1 comment
#13 · Add support for LlamaAcademy · Senior-S · opened 1 year ago · 0 comments