langfuse / langfuse

🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
https://langfuse.com/docs

feat(langchain): extract value as string from prompt templates #1214

Open Stadly opened 8 months ago

Stadly commented 8 months ago

Describe the bug

PromptTemplate's output should just be a string. It is currently shown as an object (see screenshot).

HumanMessagePromptTemplate (and the AI and System equivalents) should be objects with role and content. They are currently visualized as raw LangChain objects (see screenshot).

The RunnableMap's output should get the same visualizations as described above, instead of the raw LangChain objects (see screenshot).
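A minimal sketch of the mapping this report asks for. The type and helper names (`StringPromptValueLike`, `MessageLike`, `toDisplayValue`) are hypothetical stand-ins for LangChain's prompt outputs, not Langfuse's actual serializer:

```typescript
// Hypothetical shapes standing in for LangChain's prompt-template outputs.
type StringPromptValueLike = { value: string };
type MessageLike = { _getType: () => string; content: string };

// Map LangChain message types to OpenAI-style roles.
const roleMap: Record<string, string> = {
  human: "user",
  ai: "assistant",
  system: "system",
};

// Flatten a prompt-template output into the suggested display form:
// a plain string for PromptTemplate, or { role, content } for message templates.
function toDisplayValue(
  output: StringPromptValueLike | MessageLike
): string | { role: string; content: string } {
  if ("value" in output) {
    return output.value; // PromptTemplate → plain string
  }
  const type = output._getType();
  return { role: roleMap[type] ?? type, content: output.content };
}
```

For example, `toDisplayValue({ value: "Tell me a joke about bears." })` would yield the bare string, while a human message would yield `{ role: "user", content: "..." }`.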

To reproduce

// Imports assume the langchain >= 0.1 package layout.
import { ChatOpenAI } from "@langchain/openai";
import {
  AIMessagePromptTemplate,
  HumanMessagePromptTemplate,
  PromptTemplate,
  SystemMessagePromptTemplate,
} from "@langchain/core/prompts";
import { RunnableMap } from "@langchain/core/runnables";
import { CallbackHandler } from "langfuse-langchain";

const llm = new ChatOpenAI({
  azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: AZURE_OPENAI_API_INSTANCE_NAME,
  azureOpenAIApiDeploymentName: AZURE_OPENAI_API_DEPLOYMENT_NAME,
  azureOpenAIApiVersion: AZURE_OPENAI_API_VERSION,
  modelName: "gpt-4",
  verbose: true,
});

const handler = new CallbackHandler({
  secretKey: LANGFUSE_SECRET_KEY,
  publicKey: LANGFUSE_PUBLIC_KEY,
  baseUrl: LANGFUSE_BASE_URL,
});

const map = RunnableMap.from({
  "String": PromptTemplate.fromTemplate("Tell me a joke about {topic}."),
  "Human": HumanMessagePromptTemplate.fromTemplate("Tell me a joke about {topic}."),
  "AI": AIMessagePromptTemplate.fromTemplate("Tell me a joke about {topic}."),
  "System": SystemMessagePromptTemplate.fromTemplate("Tell me a joke about {topic}."),
});
await map.invoke({topic: "bears"}, { callbacks: [handler] });

Additional information

I'm using the JavaScript SDK, version 3.1.0.

marcklingen commented 8 months ago

Thanks for the suggestion. For prompt templates specifically, this could be nicer. Open to contributions on this!