whylabs / langkit

🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance metrics, & sentiment analysis. 📊 A comprehensive tool for LLM observability. 👀
https://whylabs.ai
Apache License 2.0

reduce latency in lazy initialization #278

Open FelipeAdachi opened 7 months ago

FelipeAdachi commented 7 months ago

After this PR, models are initialized lazily when the first prediction occurs. That means the cached artifacts are retrieved on every prediction, which adds latency. It also means the cost of downloading models is paid during the first prediction rather than at module initialization, which could instead be done at build time.

We should test more extensively to assess how much latency this adds, and think of ways to trigger the initialization outside of an actual request.
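One way to trigger initialization outside the request path is an explicit warm-up hook: the model still loads lazily by default, but callers can pay the load cost at startup or build time. The sketch below is a minimal, generic pattern with hypothetical names (`LazyModel`, `warm_up`), not LangKit's actual API:

```python
import threading
import time


class LazyModel:
    """Lazy-initialization wrapper with an explicit warm-up hook.

    Hypothetical illustration, not LangKit's API: the heavy artifact is
    loaded on first use, but warm_up() lets a caller trigger that load at
    startup/build time so the first prediction stays fast.
    """

    def __init__(self, loader):
        self._loader = loader          # callable that loads the heavy artifact
        self._model = None             # cached after first load
        self._lock = threading.Lock()  # guards concurrent first loads

    def warm_up(self):
        """Eagerly load the model, e.g. at service startup or image build."""
        with self._lock:
            if self._model is None:
                self._model = self._loader()

    def predict(self, text):
        # Fall back to lazy loading if warm_up() was never called.
        if self._model is None:
            self.warm_up()
        return self._model(text)


def slow_loader():
    time.sleep(0.1)  # stands in for downloading/deserializing an artifact
    return lambda text: len(text)


model = LazyModel(slow_loader)
model.warm_up()  # load cost paid here, outside the request path

start = time.perf_counter()
result = model.predict("hello")
first_prediction_latency = time.perf_counter() - start
```

Because the model is cached on the wrapper after the first load, predictions avoid re-fetching the artifact, and `first_prediction_latency` excludes the download cost entirely when `warm_up()` runs beforehand.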