rustformers / llm

[Unmaintained, see README] An ecosystem of Rust libraries for working with large language models
https://docs.rs/llm/latest/llm/
Apache License 2.0
6.08k stars 362 forks

how should i build a web interface? #201

Closed ralyodio closed 1 year ago

ralyodio commented 1 year ago

Is there any way to provide a history of prompts (like ChatGPT-4) and tell llm-cli to return JSON?

philpax commented 1 year ago

Your best bet is to build your own server based around llm; llm-cli is basically just a demo application for llm.

You can see how this might be done in the now-closed PR #37.

ralyodio commented 1 year ago

It looks like that PR was closed and llm-http isn't actually a thing yet. I'm not good with Rust, but I'm good with Node. I can call out to llm-cli, but I was just curious if there's another way.

Do you know how I could track a history of prompts with llm-cli?

ducaale commented 1 year ago

Have you tried using https://github.com/Atome-FE/llama-node? I think it has bindings to this library:

> This is a Node.js library for running inference on LLaMA, RWKV, or LLaMA-derived models. It is built on top of llama-rs, llama.cpp, and rwkv.cpp, and uses napi-rs to pass messages between the Node.js and LLaMA threads.

(this repo used to be called llama-rs until recently)

danforbes commented 1 year ago

@ralyodio I'm going to close this Issue, but please feel free to reopen it or open a new one if you have more questions.