masci / banks

LLM prompt language based on Jinja. Banks provides tools and functions to build prompt text and chat messages from generic blueprints. It allows attaching metadata to prompts to ease their management, and versioning is a first-class citizen. Banks also provides ways to store prompts on disk along with their metadata.
MIT License

Help understanding some concepts #25

Open TuanaCelik opened 3 days ago

TuanaCelik commented 3 days ago

Hey @masci - I'm trying to understand some concepts here. I'm a bit confused about which features are purely about prompt/chat message creation vs. which ones eventually make a call to an LLM under the hood. Can you help me out here and maybe show me where in the codebase I should start looking?

Finally, with tool calling within the prompt, does it just result in a prompt with the result of the tool call embedded?

masci commented 7 hours ago
* chat: this only creates Chat Messages, yes? No actual call to an LLM

Correct, you get a list of message objects that can be used in LLM calls.
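
For example, something along these lines (a rough sketch from memory, the exact API is in the docs):

```python
from banks import Prompt

# A blueprint using the `chat` tag: rendering it produces chat messages,
# no LLM is ever called.
template = """
{% chat role="system" %}
You are a helpful assistant.
{% endchat %}

{% chat role="user" %}
Tell me about {{ topic }}.
{% endchat %}
"""

p = Prompt(template)
messages = p.chat_messages({"topic": "climbing"})
# `messages` is a list of message objects you can pass to whatever LLM
# client you like; Banks itself makes no call here.
for m in messages:
    print(m.role, m.content)
```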

* completion: I'm confused by this one.

This makes an actual LLM call while rendering the prompt. The final, "rendered" text (or chat message) form of the prompt will contain text that comes from an LLM.
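
Roughly something like this (again a sketch, the exact tag parameters may differ, see the docs):

```python
from banks import Prompt

# The `completion` block is evaluated while the prompt is rendered:
# its content is sent to the model (via LiteLLM) and replaced with the
# model's response in the final text.
template = """
Here is a joke to set the mood:
{% completion model="gpt-3.5-turbo" %}
{% chat role="user" %}Tell me a one-line joke about prompts.{% endchat %}
{% endcompletion %}

Now, the actual task: summarize {{ topic }}.
"""

p = Prompt(template)
# Rendering triggers the LLM call; the joke ends up embedded in the text.
# Requires the provider credentials (e.g. OPENAI_API_KEY) to be set.
print(p.text({"topic": "the latest release notes"}))
```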

* generate: there is an eventual call to an LLM

Correct, but this was deprecated in favor of completion, so you should use that one.

  And am I correct in saying that so far the LLMs supported are OpenAI models and LiteLLM?

Banks only supports LiteLLM, but that in turn means supporting every provider LiteLLM supports!
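
Since the call goes through LiteLLM, switching provider is just a matter of changing the model string. A small sketch (the model names below are only placeholders, use whatever LiteLLM accepts):

```python
from banks import Prompt

def render(model: str) -> str:
    # The model string is handed straight to LiteLLM, which routes the
    # call to the right backend.
    template = (
        '{% completion model="' + model + '" %}'
        '{% chat role="user" %}Say hi in five words.{% endchat %}'
        '{% endcompletion %}'
    )
    return Prompt(template).text()

print(render("gpt-4o-mini"))                         # OpenAI
print(render("ollama/llama3"))                       # a local Ollama model
print(render("anthropic/claude-3-haiku-20240307"))   # Anthropic
```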

Finally, with tool calling within the prompt, does it just result in a prompt with the result of the tool call embedded?

The tool call will generate some text, and this text can be used inside the prompt while rendering. It's the same concept as the completion tag.
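
To make that concrete with a toy sketch (this only shows the concept using a plain callable in the template context, it is not the real Banks tool-calling syntax, which goes through the completion machinery):

```python
from banks import Prompt

def get_weather(city: str) -> str:
    # Stand-in for a real tool; this is only to show where its output lands.
    return f"Sunny, 24C in {city}"

template = """
Current conditions: {{ get_weather(city) }}

Given the weather above, suggest an activity in {{ city }}.
"""

p = Prompt(template)
# The tool runs while the prompt is rendered and its output is embedded in
# the text; the result is still just a prompt, ready to be sent to an LLM.
print(p.text({"get_weather": get_weather, "city": "Lisbon"}))
```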