whylabs / langkit

🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
https://whylabs.ai
Apache License 2.0

Use custom prompt in response_hallucination #263

Open pradeepdev-1995 opened 8 months ago

pradeepdev-1995 commented 8 months ago

How can I use my own domain-specific prompt for the response hallucination detection function call in langkit?
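
For reference, this is roughly how I'm calling the metric today, following the pattern from the example notebooks (so the exact names here are my assumption and may differ between versions):

```python
from langkit import extract, response_hallucination
from langkit.openai import OpenAIDefault

# Initialize the metric with the LLM used for sampling and the consistency check.
# (Names follow the example notebooks; they may differ in other versions.)
response_hallucination.init(llm=OpenAIDefault(), num_samples=1)

result = extract(
    {
        "prompt": "Who was Philip Hayworth?",
        "response": "Philip Hayworth was an English barrister and politician.",
    }
)
print(result["response.hallucination"])  # consistency score for the response
```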

FelipeAdachi commented 8 months ago

Can you elaborate on your question a bit more? What exactly are you trying to do?

pradeepdev-1995 commented 8 months ago

@FelipeAdachi My understanding is that hallucination detection works by prompting the LLM with a default prompt defined inside the langkit library. My question is: can we use our own custom prompt for this rather than the default one?

FelipeAdachi commented 8 months ago

The hallucination detection will call the LLM in two distinct phases:

1. To generate additional samples based on the prompt passed
2. To perform the consistency check between the answer and the generated samples
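
Schematically, it's something like the sketch below. This is only illustrative, not the actual LangKit internals; `llm` stands in for whatever LLM wrapper is configured, and the numeric values are just an example:

```python
from typing import Callable, List

def hallucination_score(
    llm: Callable[[str], str],  # any callable that maps a prompt to the model's text
    prompt: str,
    response: str,
    num_samples: int = 3,
) -> float:
    # Phase 1: generate additional samples for the same prompt
    samples: List[str] = [llm(prompt) for _ in range(num_samples)]

    # Phase 2: consistency check of the original response against each sample,
    # using a fixed consistency-check prompt with a constrained output format
    category_to_score = {"Accurate": 0.0, "Minor Inaccurate": 0.5, "Major Inaccurate": 1.0}
    scores = []
    for sample in samples:
        verdict = llm(
            f"Context: {sample}\nPassage: {response}\n"
            "Is the passage supported by the context? "
            "Answer with Accurate, Minor Inaccurate, or Major Inaccurate."
        )
        # Unrecognized answers are treated as the worst case here
        scores.append(category_to_score.get(verdict.strip(), 1.0))

    # Higher score -> less consistent with the samples -> more likely hallucinated
    return sum(scores) / len(scores)
```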

I'm assuming you want to pass your own custom prompt for the second phase, correct? If that's the case, then no, we don't currently support it.

Right now, the code expects the output to be one of [Accurate, Minor Inaccurate, Major Inaccurate] and assigns a score to each of these three values. To support a custom prompt, we'd either need to require the same output format or add some sort of mapping from categories to values.
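
To make that concrete, supporting a custom prompt would probably mean accepting something like the pair below, where both the prompt and the category-to-score mapping are user-supplied (the names and values here are made up purely for illustration; nothing like this exists in LangKit today):

```python
# Hypothetical user-supplied consistency-check prompt and category mapping.
custom_prompt = (
    "Context: {sample}\nClaim: {response}\n"
    "Classify the claim as Supported, Partially Supported, or Contradicted."
)

custom_category_scores = {
    "Supported": 0.0,
    "Partially Supported": 0.5,
    "Contradicted": 1.0,
}
```

The aggregation over samples would stay the same; only the categories and their numeric values would change.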

Can you share more details on why you need a custom prompt for your use case?