zmedelis / bosquet

Tooling to build LLM applications: prompt templating and composition, agents, LLM memory, and other instruments for builders of AI applications.
https://zmedelis.github.io/bosquet/
Eclipse Public License 1.0
280 stars · 19 forks

Flex conf #15

Closed behrica closed 1 year ago

behrica commented 1 year ago

In this PR I moved the configuration of the model up to the functions `gen/complete-template` and `gen/complete`, so the opts defined inside the prompt definition are merged with the externally passed opts, with the external opts taking precedence. In my view this has two advantages:
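A minimal sketch of the merge semantics described above (the function name `merge-model-opts` is hypothetical, standing in for what `gen/complete-template` would do internally):

```clojure
;; Hypothetical sketch: opts from inside the prompt definition are merged
;; with externally passed opts; the external opts win on conflict, because
;; clojure.core/merge gives precedence to the right-most map.
(defn merge-model-opts
  [template-opts external-opts]
  (merge template-opts external-opts))

(merge-model-opts
  {:impl :openai :model "gpt-3.5-turbo" :temperature 0.7}
  {:impl :azure :api-key "azure key"})
;; => {:impl :azure, :model "gpt-3.5-turbo", :temperature 0.7, :api-key "azure key"}
```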

I also fixed the Azure OpenAI issue (#9). Now the code works again with both OpenAI and Azure OpenAI, provided that the correct model options are given to `complete-template` (or are specified inside the template).

behrica commented 1 year ago

I was also thinking about the limitations of this approach. In practice, the new model-opts parameter gets merged into and applied to the `llm-generate` tag. This may become problematic once we have several tags which call into models, or once we want different `llm-generate` tags in a complex prompt definition to use different models (and model configurations). I can see that the template approach you have put in place allows setting up trees of template definitions, which have tags in various places. How can we find a way to configure the tags in the whole tree externally? How do we identify them in order to configure them?
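One way to identify the tags in such a tree could be to scan each template for generation tags and derive a "prompt-key.tag" address for each one. This is only a sketch under assumptions: the map shape, the `tag-paths` function, and the `{% llm-generate %}` tag syntax are illustrative, not the current API:

```clojure
;; Hypothetical sketch: given a map of prompt keys to Selmer-style template
;; strings, collect every "{% tag ... %}" occurrence and build the
;; "prompt-key.tag-name" address that external config could target.
(defn tag-paths
  [templates]
  (for [[k template] templates
        [_ tag] (re-seq #"\{%\s*(\S+)" template)]
    (str (name k) "." tag)))

(tag-paths
  {:synopsis "Write a synopsis. {% llm-generate %}"
   :review   "Review it: {{synopsis}} {% llm-generate %}"})
;; => ("synopsis.llm-generate" "review.llm-generate")
```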

behrica commented 1 year ago

I am new to Selmer, so maybe Selmer itself already provides all we need. But I think the idea of being able to specify model parameters in some form in the prompt definition, which then get merged with an externally passed-in model config map, is important for reaching these two goals:

behrica commented 1 year ago

I am wondering if and how we should be able to externally configure both calls to completion, see here:

[image: prompt template containing two completion calls]

differently (using a different model, a different key, and so on) without specifying this inside the prompt definition, so that the user of the template can decide whether to use the same or a different model for each completion call.

behrica commented 1 year ago

Probably a convention to configure nested `llm-generate` tags in the form of

(gen/complete
  template
  {:title "Mr. X" :genre "crime"}
  {"review.llm-generate"   {:impl :openai
                            :api-key "sk-xxxxx"}
   "synopsis.llm-generate" {:impl :azure
                            :api-key "azure key"}})

so using "prompt-map-key.tag-name" to inject options into the `llm-generate` call at a certain place in the template (merging them with any existing options) could work.
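A sketch of how such qualified keys could be resolved at generation time (all names here, including `opts-for-tag`, are hypothetical, not part of the current API):

```clojure
;; Hypothetical lookup: given the externally supplied config map and the
;; prompt-map key plus tag name currently being expanded, find the opts
;; to merge into that particular llm-generate call.
(defn opts-for-tag
  [config prompt-key tag-name]
  (get config (str (name prompt-key) "." tag-name) {}))

(def config
  {"review.llm-generate"   {:impl :openai :api-key "sk-xxxxx"}
   "synopsis.llm-generate" {:impl :azure :api-key "azure key"}})

(opts-for-tag config :review "llm-generate")
;; => {:impl :openai, :api-key "sk-xxxxx"}
```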

This is, in essence, what I would like to achieve: use different models (and their configurations) at different places of a prompt template, without needing to configure this inside the prompt template.

behrica commented 1 year ago

Maybe we also want to address the case of the same tag appearing multiple times in a template, by allowing an index:

(gen/complete
  template
  {:title "Mr. X" :genre "crime"}
  {"review.llm-generate.1" {:impl :openai
                            :api-key "sk-xxxxx"}
   "review.llm-generate.2" {:impl :azure
                            :api-key "azure key"}})
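Extending the earlier lookup idea to this indexed convention could look like the following sketch (the function `opts-for-indexed-tag` and the 1-based occurrence index are assumptions for illustration):

```clojure
;; Hypothetical extension: an occurrence index disambiguates repeated
;; tags within the same prompt-map entry ("review.llm-generate.1", ...).
(defn opts-for-indexed-tag
  [config prompt-key tag-name idx]
  (get config (str (name prompt-key) "." tag-name "." idx) {}))

(def config
  {"review.llm-generate.1" {:impl :openai :api-key "sk-xxxxx"}
   "review.llm-generate.2" {:impl :azure :api-key "azure key"}})

(opts-for-indexed-tag config :review "llm-generate" 2)
;; => {:impl :azure, :api-key "azure key"}
```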

behrica commented 1 year ago

I am now quite happy with the PR. I adapted the notebook use_guide.clj, but did not check whether other places (notebooks or tests) need changes as well.