Closed — Invertisment closed this issue 1 year ago
And this would mean that the integration of https://github.com/zmedelis/bosquet/issues/16 would be left to the user. You could then simply provide a namespace that implements that API. This would be much more usable.
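A user-supplied namespace along those lines could look roughly like the sketch below. The protocol name, method signature, and the echo backend are all illustrative assumptions, not bosquet's actual interface:

```clojure
;; Hypothetical sketch: a user namespace implementing a minimal
;; completion API. `Completer`/`complete` are assumed names for
;; illustration only.
(ns my-app.local-llm)

(defprotocol Completer
  (complete [this prompt opts]
    "Return a completion string for `prompt`."))

;; Trivial backend that just echoes the prompt; a real implementation
;; would call a locally running model instead.
(defrecord EchoModel []
  Completer
  (complete [_ prompt _opts]
    (str "echo: " prompt)))
```

The library would then only depend on the protocol, and each user decides which backend (OpenAI, local model, mock) to plug in.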
If you want speed of development, you could still use babashka or a separate source directory that actually loads scripts from files (a second main function that takes the keys, and so on). I.e., I think this speedy development setup should be kept separate from the core part of the library.
Closing this, but it will be revisited as part of the work on #42.
I'm thinking about using this project, but it contains quite a bit of bloat that I don't know whether I need. I already call ChatGPT's API using different libraries where I pass the secrets myself, and I can have more than one secret at a time.
You already have tests that mock the GPT model APIs here: https://github.com/zmedelis/bosquet/blob/main/test/bosquet/generator_test.clj#L36. That test uses `with-redefs` to swap the model for anything, but if I did the same, all the CLI parsing and the additional libraries would still be a wasted pull from the repositories. Startup might even fail if there is no key.

I'd like to know whether you'll want to support users' own models by letting them override that callback, without loading API keys from `config.edn`. For instance, if a user could override the `complete` function without loading OpenAI's keys, they could support a wide variety of models of their choosing (e.g. locally run custom models). Do you think you'll want to do that with this library?
This could mean that the `system` would need to be loaded by the user and passed around to the function calls, as it would contain the reference to the `complete` function.
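The "pass the system around" idea could be sketched like this: the caller builds a system map that carries the completion fn, and library calls take it as an explicit argument. All names here are illustrative assumptions:

```clojure
;; Sketch: an explicit system map carrying the completion fn, passed
;; to each call instead of being resolved from global config.
(ns my-app.system-demo)

(defn make-system
  "Build a system map holding the user-chosen completion fn."
  [complete-fn]
  {:llm/complete complete-fn})

(defn generate
  "Library-style entry point that looks up `complete` in the system."
  [system prompt]
  ((:llm/complete system) prompt))

;; A system backed by a local model; no API keys involved.
(def local-system
  (make-system (fn [prompt] (str "local: " prompt))))

(generate local-system "Hi")
;; => "local: Hi"
```

This is more plumbing for the caller, but it makes the dependency explicit and keeps key loading entirely out of the core library.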