Closed: valentinegb closed this issue 1 year ago
I don't think using an API key from the environment is a problem, but requiring the environment variable to be set to compile your project is annoying.
I think passing a model string is perfectly fine, and the Models enum is unnecessary; the API will complain if an invalid string is passed, and that can be handled ergonomically.
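As a sketch of what that ergonomic handling could look like (every name here is hypothetical, not the library's actual API), an invalid model string can simply surface as a library error at runtime instead of being ruled out by a compile-time enum:

```rust
// Hypothetical sketch: accept a plain model string and map the API's
// "invalid model" complaint into a library error, rather than validating
// against a generated enum at compile time.

#[derive(Debug, PartialEq)]
enum Error {
    InvalidModel(String),
}

// Stand-in for an API call; a real implementation would send the request
// and translate the API's error response into `Error::InvalidModel`.
fn create_completion(model: &str, _prompt: &str) -> Result<String, Error> {
    // Pretend the API rejects this model name.
    if model.is_empty() {
        return Err(Error::InvalidModel(model.to_string()));
    }
    Ok(format!("completion from {model}"))
}

fn main() {
    assert!(create_completion("text-davinci-003", "Say hi").is_ok());
    assert_eq!(
        create_completion("", "Say hi"),
        Err(Error::InvalidModel(String::new()))
    );
    println!("ok");
}
```

The caller still gets a clear error, just at runtime instead of build time, which avoids needing an API key during compilation.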
To be completely honest, it was a feature I worked on just because I didn't have experience with macros before and I thought it was cool, but it's causing more headaches than it's preventing, so you're right: it's got to go.
I got the idea to do something similar to generating `ModelID` with a procedural macro, except generating just about everything from the OpenAI API schema, which would not need an API key. So I'll be experimenting with that, probably in another branch. This would be somewhat of a rewrite, so this issue would be resolved in that branch.
I looked into your branch and I think it is a step in a good direction but perhaps it could be even better?
I consider it unnecessary complexity that some structs are generated at build time from a network service's response. It basically means that different people can think they're using the same version of the library while the code they actually have is different.
My proposal is to define a cargo xtask, which you could run between releases, that would do what the proc macro does now.
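For what it's worth, the usual xtask wiring is just a small helper crate in the workspace plus a cargo alias; a sketch (the package name `xtask` is the community convention, nothing here is from this repo):

```toml
# .cargo/config.toml in the workspace root: makes `cargo xtask` run the
# helper crate, which would fetch openapi.yaml and regenerate the code
# the proc macro currently produces at build time.
[alias]
xtask = "run --package xtask --"
```

The generated files would then be committed between releases, so downstream users never depend on the generator (or the network) at build time.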
Also, instead of the custom macro, could you use OpenAI's OpenAPI spec together with OpenAPI Generator?
I think I could try playing with these if you were interested.
I think you're mistaken on your first point: the `openai_proc_macros` package doesn't even have `reqwest` as a dependency, like the main `openai` package does. It uses the `openapi.yaml` file, which I downloaded manually from the OpenAI OpenAPI repository you linked. I intend to keep updating this file manually between versions, so that particular versions of the library can be expected to match particular versions of the API, which, like you said, should prevent related confusion.
Ah, yes, you are entirely correct! It seems I mixed up two branches of the library in my head. Now I see that you even added a comment explaining why you aren't using the open-api lib.
Still, what do you think about using open-api generator?
Sorry for the delay in response. I read more about it and, yeah, I think OpenAPI Generator, which OpenAI actually seems to advocate for in their OpenAPI repository, is worth a try. Looks like I'll be making another branch, haha.
If it ends up being just as hard or harder than what I'm working on in `generate-from-schema`, or if the end result isn't what I want it to be, then I'll go back to working on the `generate-from-schema` branch. Otherwise, thanks for the suggestion!
Looks like there are changes being worked on here, but I am having some issues getting this package to compile and it seems semi-relevant. As far as I can tell, I'm following the samples pretty much exactly.
I have this as my `Cargo.toml`:

```toml
[package]
name = "mindflex_server"
version = "0.1.0"
edition = "2021"
publish = false

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
tokio = { version = "1.26.0", features = ["full"] }
openai = "1.0.0-alpha.6"
dotenvy = "0.15.6"
```
and a `.env` file in my project root (`OPENAI_KEY=my_key`). My folder structure looks like this:
```text
.
├── src
│   └── main.rs
├── target
│   └── ...
├── .gitignore
├── Cargo.lock
├── Cargo.toml
└── .env
```
The `main.rs` file is the same as your completions sample. The only difference I can grok is that it isn't referencing the package locally, since I'm not in your workspace. The `.env` file can be accessed properly at runtime (if I remove `openai` from the project) but not via the `openai` macro, thus triggering the panic:
```text
help: message: environment variable `OPENAI_KEY` should be defined: NotPresent
```
I assume that this is a Rust beginner's mistake of some sort...if so, I apologize 😅. I'm on Windows (not Linux subsystem), if that matters. Any help would be appreciated!
Hey @ryanbliss, so the good news is that you aren't doing anything wrong, but the bad news is that there was a pretty major oversight on my part. It looks like the library is looking for a `.env` file at its own root directory when it should be looking at the root of your dependent project. I find it hard to believe I managed to go so long without realizing this, so if anyone can confirm that it's not working for them too, that would be great.
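For anyone following along, the distinction at play is compile-time expansion (an `env!`-style lookup, which happens while the crate is being built) versus a plain runtime read. A minimal sketch of the runtime approach, using only the standard library (`dotenvy` loading `.env` is simulated here with `set_var`):

```rust
use std::env;

fn main() {
    // Compile-time: something like `env!("OPENAI_KEY")` is expanded while
    // the library itself is built, so a `.env` file in the dependent
    // project's root never reaches it, hence the NotPresent panic above.

    // Runtime: read the variable when the program actually runs, after a
    // crate like dotenvy has loaded `.env` into the process environment.
    env::set_var("OPENAI_KEY", "sk-example"); // stand-in for dotenvy::dotenv()

    let key = env::var("OPENAI_KEY").expect("OPENAI_KEY should be set at runtime");
    assert_eq!(key, "sk-example");
    println!("key loaded at runtime");
}
```

Reading the key at runtime is what makes a dependent project's `.env` file (or any other secret store) work regardless of where the library was compiled.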
I think I have a solution in the `generate-from-schema` branch; I'll have to copy it over to the `main` branch. Shouldn't take longer than a day for me to do.
Thanks for looking into it so quickly @valentinegb! I briefly looked over the `generate-from-schema` branch, and it looks great. Setting the key at runtime is a lot more flexible. I'll keep an eye out for this change and then give it another go! 🥳
Should all be done in v1.0.0-alpha.7 :+1:
Awesome thank you! It's working great now 🥳
Problem
Sometimes, you might not want to or be able to use an environment variable to store the API key. For example, in a Shuttle project, Shuttle Secrets are used instead of environment variables. However, the `openai` library currently requires you to use an environment variable regardless.

Solution
I'm pretty sure anyone will only want to use one API key per project, so it isn't necessary to allow specifying which key to use every time an API function/method is executed. (E.g., having a `key` parameter, or making it a method on a structure that is initialized with a key; we don't want that.)

I think the best way to do this would be to have a public `set_key()` function that sets a private variable which can only be read within the library. Functions and methods within the library can then reference this variable when they need the API key.

There's still one problem, though: currently, the key is needed at compile time to generate the `ModelID` enumerator. I don't yet have a solution for this. If anyone has any ideas, please do tell me here.
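A minimal sketch of the proposed `set_key()` design (the storage choice here, a static `RwLock<Option<String>>`, is just one possibility, not a settled implementation):

```rust
// Hypothetical sketch of the proposed set_key() API: a process-wide key
// stored in a private static, settable at runtime.

use std::sync::RwLock;

static API_KEY: RwLock<Option<String>> = RwLock::new(None);

/// Public: called once by the user, e.g. with a Shuttle Secret or an
/// environment variable read at runtime.
pub fn set_key(key: impl Into<String>) {
    *API_KEY.write().unwrap() = Some(key.into());
}

/// Private to the library: request-building code reads the key here.
fn key() -> String {
    API_KEY
        .read()
        .unwrap()
        .clone()
        .expect("API key should be set with set_key() before making requests")
}

fn main() {
    set_key("sk-from-shuttle-secrets");
    assert_eq!(key(), "sk-from-shuttle-secrets");
    println!("key set at runtime");
}
```

Because the key is now a runtime value, nothing about it is needed at compile time; the remaining compile-time dependency is only the `ModelID` generation mentioned above.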