kerkkoh opened this issue 3 months ago
Hi, @kerkkoh. I'm Dosu, and I'm helping the LangChain team manage their backlog. I'm marking this issue as stale.
Issue Summary

The `generate()` function from `langchain_together` is not returning `generation_info` and `llm_output` (the `logprobs` and `usage` data) from the Together.ai LLM completions endpoint.

Next Steps
Thank you for your understanding and contribution!
Hi Dosu,
Thanks for reaching out! I’m happy to say that this issue is still a thing with the latest version of LangChain. Let me know if you need anything else or have any questions!
Cheers, @kerkkoh
@eyurtsev, the user @kerkkoh has confirmed that the issue with the `generate()` function from `langchain_together` is still relevant in the latest version of LangChain. Could you please assist them with this?
Checked other resources
Example Code
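A minimal sketch of the setup described below (the model name and prompt are placeholders, `TOGETHER_API_KEY` is assumed to be set in the environment, and the `logprobs` parameter is assumed to be accepted by this version of the `Together` class):

```python
from langchain_together import Together

# Placeholder model; any Together completions model should do.
llm = Together(
    model="meta-llama/Llama-3-8b-hf",
    logprobs=1,   # ask the endpoint to return token logprobs
    max_tokens=32,
)

result = llm.generate(["Say hello."])

# Both of these come back as None, even though the raw API
# response contains logprobs and usage data:
print(result.generations[0][0].generation_info)
print(result.llm_output)
```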
Error Message and Stack Trace (if applicable)
No response
Description
I'm trying to use the `langchain_together` library's `Together` class for calling the Together.ai LLM completions endpoint, expecting to get an `LLMResult` with `logprobs` inside `generation_info` and `usage` in `llm_output`. Instead, the following lacking `LLMResult` is given as output:
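Roughly like this (repr abridged, generated text elided):

```python
LLMResult(
    generations=[[Generation(text="...", generation_info=None)]],
    llm_output=None,
)
```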
where `generation_info` is None and `llm_output` is None.

This should be fixed by updating the `langchain_together.Together()` class so that it also has the necessary functions to return `LLMResult`s with `generation_info` and `llm_output` defined when the response includes fields that are to be put in them. The expected output is:
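Something along these lines, mirroring what `langchain_openai` returns for the OpenAI completions endpoint (key names are an assumption based on that class; values are illustrative):

```python
LLMResult(
    generations=[[
        Generation(
            text="...",
            generation_info={
                "finish_reason": "length",
                "logprobs": {"tokens": [...], "token_logprobs": [...]},
            },
        )
    ]],
    llm_output={
        "model_name": "meta-llama/Llama-3-8b-hf",
        "token_usage": {
            "prompt_tokens": 4,
            "completion_tokens": 32,
            "total_tokens": 36,
        },
    },
)
```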
This could technically be avoided by using the `langchain_openai` library's `langchain_openai.OpenAI`, but the `generate` method of this class is no longer compatible with the old OpenAI Completions-style API that Together.ai uses. Mainly, the underlying `BaseOpenAI._generate` method calls the underlying OpenAI completions client with a `list[str]` of prompts, which Together.ai doesn't support.

Just in case someone finds this issue looking for a fix, I have a workaround for the workaround. The problem with the `langchain_openai` workaround can be bodged by overriding the `openai.client.completions.create` method after initializing the LLM class, using the `together` Python library's equivalent method, and removing incompatible arguments to `create` that the API doesn't support. The following is a quick example for doing this:
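A sketch of that monkey-patch, assuming the `together` package is installed and `TOGETHER_API_KEY` is set; the list of stripped arguments and the model name are illustrative, and swapping out `llm.client` wholesale relies on `langchain_openai` internals only calling `client.create(...)`:

```python
import os
from types import SimpleNamespace

import together
from langchain_openai import OpenAI

# Point langchain_openai's completions LLM at Together's
# OpenAI-compatible endpoint.
llm = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
    model="meta-llama/Llama-3-8b-hf",  # placeholder model
    logprobs=1,
    max_tokens=32,
)

together_client = together.Together()  # reads TOGETHER_API_KEY

def _create(**kwargs):
    """Forward langchain_openai's create() call to the together client."""
    # Strip arguments Together's completions endpoint rejects
    # (illustrative list; adjust to whatever your calls actually send).
    for unsupported in ("best_of", "logit_bias"):
        kwargs.pop(unsupported, None)
    # BaseOpenAI._generate passes a list[str] of prompts, but Together
    # only accepts a single prompt string per request.
    prompt = kwargs.pop("prompt")
    if isinstance(prompt, list):
        prompt = prompt[0]
    response = together_client.completions.create(prompt=prompt, **kwargs)
    # Return a plain dict so langchain_openai consumes it as-is
    # instead of calling OpenAI-specific methods on it.
    return response.model_dump()

# Swap in the shim on the initialized LLM instance.
llm.client = SimpleNamespace(create=_create)

result = llm.generate(["Say hello."])
print(result.generations[0][0].generation_info)  # finish_reason, logprobs
print(result.llm_output)                         # token usage, model name
```

Note that this only handles the single-prompt case; batching multiple prompts would need one Together request per prompt.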
System Info

langchain==0.2.14
langchain-core==0.2.32
langchain-openai==0.1.21
langchain-text-splitters==0.2.2
langchain-together==0.1.5
Mac (MacBook Pro M1, 16 GB, 2021), macOS Sonoma 14.5 (23F79)
Python 3.9.19