🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
After this PR, models are lazily initialized when the first prediction occurs. As a result, the cached artifacts are retrieved on every prediction, which adds latency. It also means the model-download latency is paid during the first prediction rather than at module initialization, which could otherwise happen at build time.
We should test more extensively to measure how much latency this adds, and think of ways to trigger the initialization outside of an actual request.