Closed — creatorrr closed this 1 year ago
@creatorrr is this something you are currently using with LangChain, or is this just an observation that we don't support it? Looking for some context to help prioritize this.
We use LangChain's async interface in production. I looked at PromptLayer, but unfortunately we can't use it as-is without async support.
That is fair. We can hopefully start working on this soon and keep you in the loop.
If you do have some time yourself, it shouldn't be too difficult to make a PR. Basically, we would have to define `_agenerate` on our LangChain OpenAI LLM to call the OpenAI LLM's `_agenerate`, and then pass the data back to our API using the `promptlayer_api_request` util function.
@creatorrr What version of langchain are you using? I can't get the example on the link you shared to work.
@creatorrr We now support this as of the most recent version of promptlayer (0.1.66): https://pypi.org/project/promptlayer/
See here