🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
Add ONNX versions of the toxicity and topic metrics
This assumes a local folder with the ONNX model artifacts:
- `toxic-comment-model-onnx`
- `xtremedistil-l6-h256-zeroshot-v1.1-all-33-onnx`
The default model is set to the quantized version. Usage is exactly the same as before; simply import `topic_onnx` and `toxicity_onnx` instead of the original metrics.
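For illustration, a minimal sketch of the swap, assuming LangKit's usual whylogs-based workflow (importing a metric module registers its UDFs, which `udf_schema()` then picks up); the prompt/response text is a placeholder:

```python
import whylogs as why
from whylogs.experimental.core.udf_schema import udf_schema

# Swap the original imports (e.g., `topics`, `toxicity`) for the ONNX-backed modules.
from langkit import topic_onnx, toxicity_onnx

# Everything downstream stays the same: the registered metrics are applied
# to the logged prompt/response pairs via the whylogs UDF schema.
profile = why.log(
    {
        "prompt": "Is this product safe for children?",
        "response": "Yes, it meets all applicable safety standards.",
    },
    schema=udf_schema(),
).profile()
```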