rendezqueue / rendezllama

CLI for llama.cpp with various commands to guide, edit, and regenerate tokens on the fly.
ISC License

feat: Local HTTP Server #19

Status: Open. grencez opened this issue 1 year ago

grencez commented 1 year ago

A local HTTP server would be a nice way to dockerize and prototype an interface. My rough design would be:

grencez commented 1 year ago

Since I plan to use WebRTC for the remote server, it would make sense to pull in the https://github.com/paullouisageneau/libdatachannel/ library now and use its websocket functionality for the local server. It also pulls in a JSON library, which could be convenient.

I'm not sure what to do about serving the webpage, though. I could roll a simple HTTP server, but it would have to hand off to the WebSocket (on the same port), I guess? Unclear.
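
One way this could work, since the WebSocket handshake is itself an ordinary HTTP GET carrying an `Upgrade: websocket` header, is for a single listener to inspect the first request on each connection and either serve the page or hand the socket to a WebSocket handler. Below is a rough sketch in Python for illustration only (the project itself is C++); `handle_websocket` is a hypothetical stand-in, not an existing piece of this repo.

```python
# Minimal sketch of single-port dispatch: the WebSocket handshake is just an
# HTTP GET with an "Upgrade: websocket" header, so one listener can decide
# per-connection whether to serve the page or hand off to a WebSocket handler.
import socket

INDEX_HTML = b"<html><body>chat ui placeholder</body></html>"

def handle_websocket(conn, request_bytes):
    # Hypothetical stand-in: a real handler would complete the handshake
    # (Sec-WebSocket-Accept) and then speak the WebSocket framing protocol.
    conn.close()

def serve(port=8080):
    listener = socket.create_server(("127.0.0.1", port))
    while True:
        conn, _addr = listener.accept()
        request = conn.recv(65536)
        if b"upgrade: websocket" in request.lower():
            handle_websocket(conn, request)  # same port, different protocol
        else:
            conn.sendall(
                b"HTTP/1.1 200 OK\r\n"
                b"Content-Type: text/html\r\n"
                b"Content-Length: " + str(len(INDEX_HTML)).encode() + b"\r\n"
                b"\r\n" + INDEX_HTML
            )
            conn.close()

if __name__ == "__main__":
    serve()
```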

grencez commented 1 year ago

In the long term, it's probably best for the local server to behave like the remote server and use WebRTC. So in the short term, I once again prefer the original idea of a dumb HTTP-only interface that responds to JavaScript polling.
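
As a rough illustration of that short-term approach, the sketch below uses Python's standard `http.server`: the page would poll `GET /output` for newly generated text and `POST` user input to `/input`, e.g. with `fetch` on a timer. The endpoint names and the in-memory buffers are assumptions for illustration, not part of the repo.

```python
# Sketch of an HTTP-only interface driven by JavaScript polling.
# Endpoint names and the in-memory buffers are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

generated_text = []   # would be fed by the chat process
pending_inputs = []   # would be consumed by the chat process

class PollingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/output":
            body = "".join(generated_text).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def do_POST(self):
        if self.path == "/input":
            length = int(self.headers.get("Content-Length", 0))
            pending_inputs.append(self.rfile.read(length).decode("utf-8"))
            self.send_response(204)
            self.end_headers()
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), PollingHandler).serve_forever()
```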

Code layout can be:

grencez commented 1 year ago

Alternatively... It would be even easier to use a simple Python or Node.js process that spawns the chat executable and serves some HTML & JavaScript content. Might as well start there...

The coprocess functionality was added in https://github.com/rendezqueue/rendezllama/issues/22.
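
A rough sketch of that wrapper idea, in Python: spawn the chat executable and bridge its stdin/stdout to a web server such as the polling sketch above. The binary path and the line-oriented stdin/stdout protocol are assumptions for illustration; the actual coprocess interface is the one defined in #22.

```python
# Sketch of a wrapper process that spawns the chat executable and exposes it
# to a web server. The binary path and the line-oriented stdin/stdout
# protocol are assumptions; see issue #22 for the actual coprocess interface.
import subprocess
import threading

CHAT_CMD = ["./bld/src/chat/chat"]  # hypothetical path and arguments

class ChatCoprocess:
    def __init__(self):
        self.proc = subprocess.Popen(
            CHAT_CMD,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,
            bufsize=1,  # line-buffered
        )
        self.output_lines = []
        threading.Thread(target=self._drain_stdout, daemon=True).start()

    def _drain_stdout(self):
        # Collect generated text as it streams out of the chat process.
        for line in self.proc.stdout:
            self.output_lines.append(line)

    def send(self, user_text):
        # Forward user input to the chat process.
        self.proc.stdin.write(user_text + "\n")
        self.proc.stdin.flush()

# Usage: a server like the polling sketch above would call send() from its
# POST handler and return "".join(coproc.output_lines) from GET /output.
```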

grencez commented 1 year ago

I only know basic web stuff, so anyone is free to pick this up. See updated description.