Closed · InfraK closed this 1 month ago
If these runnable examples are meant to replace the ones that were in the documentation, I'd prefer to have each example be its own file. As it stands, I think this PR improves testability of the code examples, but degrades the discoverability and readability of the code examples. I'd like to find a way to solve for both.
Additionally, as it stands currently, if I want to just run an example, I end up running all of the examples, and I also need to have credentials for every possible adapter that Kurt has.
I can see why you wanted to keep things DRY and avoid the combinatorial expansion of generation examples multiplied by the number of adapters. However, I'd like to propose a different approach to solving that problem. Rather than having the top-level file be a consolidated place where all examples are run, I think it would be best to have each example separate, but use a consolidated function for Kurt setup. Here's my concrete proposal:
- A `createKurt` function (defined in a separate `./createKurt.ts` file) to return a `Kurt` instance.
- The `createKurt` function looks at environment variables to determine which adapter and which model to use, and it has some basic validation on those env vars (verifying that they are present and have an appropriate value). Here's a basic proposal of env vars:
  - `KURT_ADAPTER` (allowed values: `open-ai`, `vertex-ai`)
  - `KURT_MODEL` (value validated by the `isSupportedModel` static function of the selected adapter)

This approach has a few benefits to mention:

- The `createKurt` function will not just be for our own testing convenience, but will also be a great example in its own right.
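A minimal sketch of the env-var handling such a helper could do. This shows only the validation logic; the real `createKurt` would also import `Kurt` and the adapter packages and construct the instance, which is left as a comment here since the exact constructor signatures aren't pinned down in this thread:

```typescript
// Sketch of the proposed ./createKurt.ts helper. Names and structure are
// illustrative assumptions; only the env-var validation is shown.

const ALLOWED_ADAPTERS = ["open-ai", "vertex-ai"] as const
type AdapterName = (typeof ALLOWED_ADAPTERS)[number]

// Read a required env var, failing loudly if it is missing.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) throw new Error(`Missing required env var: ${name}`)
  return value
}

// Validate KURT_ADAPTER and KURT_MODEL and return the selection.
export function selectAdapter(): { adapter: AdapterName; model: string } {
  const adapter = requireEnv("KURT_ADAPTER")
  if (!ALLOWED_ADAPTERS.includes(adapter as AdapterName)) {
    throw new Error(
      `KURT_ADAPTER must be one of: ${ALLOWED_ADAPTERS.join(", ")}`
    )
  }
  const model = requireEnv("KURT_MODEL")
  // The real createKurt would call the selected adapter's static
  // isSupportedModel(model) here, then return a new Kurt wrapping
  // the chosen adapter instance.
  return { adapter: adapter as AdapterName, model }
}
```

A caller would then just do `const { adapter, model } = selectAdapter()` and build the matching adapter, keeping every example file free of per-adapter branching.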
What do you think?
I mostly agree with everything. I'll take another pass at the comment tomorrow, but I think we might want to use hard-coded values with multiple entry points, one for each adapter. That at least gives more visibility into how the examples work, and we leave env variables strictly for credentials management, since the env files are not committed.
I may just give you a brief call tomorrow so that we can align on an approach.
So, after putting some more thought into it: given that these are meant to be examples, keeping them DRY doesn't make much sense; the priority should be simplicity.
I'm thinking at this moment to create the following examples:
The environment variables will be used across all the examples for credentials, and each example should be able to run on its own. How to run them will also be documented in the README.
How does this sound?
@InfraK - That sounds good.
I think the one thing I'd suggest is to ensure that in the model-agnostic example, we should show all three generation methods (similar to the 3 examples in the current Kurt README).
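To make concrete what "all three generation methods" could look like in a model-agnostic example, here is a hypothetical sketch. The method names mirror the three examples in the Kurt README; the interface and simplified signatures below are illustrative stand-ins, not the real Kurt types, so treat this purely as a shape for the example file:

```typescript
// Hypothetical stand-in for the Kurt instance's three generation methods.
// Signatures are simplified assumptions for illustration only.
interface GenerationMethods {
  generateNaturalLanguage(prompt: string): Promise<string>
  generateStructuredData(prompt: string): Promise<Record<string, unknown>>
  generateWithOptionalTools(prompt: string): Promise<string>
}

// A model-agnostic example would exercise all three methods against
// whatever Kurt instance the setup helper returned.
async function runAllExamples(kurt: GenerationMethods): Promise<string[]> {
  const text = await kurt.generateNaturalLanguage("Say hello!")
  const data = await kurt.generateStructuredData("Say hello!")
  const toolResult = await kurt.generateWithOptionalTools("What is 1 + 1?")
  return [text, JSON.stringify(data), toolResult]
}
```

The point of the sketch is that the example body never mentions a specific adapter; only the setup helper does.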
In a followup PR I will make a script that runs as a prepackage hook to copy snippets from each example file into the markdown READMEs prior to NPM publish.
@jemc In order to keep the pull requests small, I'll split the examples here and leave the model-agnostic one for a follow up, does that sound good?
Sounds good!
:tada: This PR is included in version @formula-monks/kurt-v1.3.0 :tada:
The release is available on:
Your semantic-release bot :package::rocket:
:tada: This PR is included in version @formula-monks/kurt-open-ai-v1.5.0 :tada:
:tada: This PR is included in version @formula-monks/kurt-vertex-ai-v1.4.0 :tada:
Create a new example folder with a basic example project, pulling in the required dependencies, and update the README files to point to this new folder.