deadbits / vigil-llm

⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
https://vigil.deadbits.ai/
Apache License 2.0

Relevance via LiteLLM? #26

Open krrishdholakia opened 1 year ago

krrishdholakia commented 1 year ago

Hi @deadbits,

thanks for using litellm! Curious how you're thinking of using it in this case?

deadbits commented 1 year ago

Hey 👋 thanks for reaching out. It isn’t fully implemented right now (I expect to finish it this week), but the plan is to use LiteLLM to make calls to a user-defined LLM. LiteLLM seemed like the best approach to let users pick which model they want to use without me needing to write wrappers for each provider.
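
Roughly the shape of it (a minimal sketch to illustrate the idea, not the actual code in the repo; the helper name and default model are placeholders):

```python
# Minimal sketch: the model string comes from user configuration, and
# LiteLLM routes the call to whichever provider backs that model.
import litellm


def query_llm(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a prompt to a user-configured model via LiteLLM."""
    response = litellm.completion(
        model=model,  # e.g. "gpt-4", "claude-2", ... -- user-defined
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```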

There’s the start of the LiteLLM code here (pretty minimal so far): https://github.com/deadbits/vigil-llm/blob/main/vigil/llm.py, which gets used here: https://github.com/deadbits/vigil-llm/blob/main/vigil/scanners/relevance.py

The issue I’m trying to resolve now is how best to get structured output out of LLMs in general, and how to layer that on top of LiteLLM (Guidance looked good, but I don’t think I can use it with LiteLLM? I could be mistaken).

krrishdholakia commented 1 year ago

We have an open PR awaiting approval - https://github.com/guidance-ai/guidance/pull/347

If you could add a comment there, I think that'd help.

krrishdholakia commented 1 year ago

Do you need LiteLLM if you use Guidance? I think they already support a bunch of different providers.

Any specific ones you're trying to use us for?

deadbits commented 1 year ago

Awesome, I'll keep an eye on that Guidance PR. I don't need LiteLLM if I use Guidance, but I was hoping I could combine Guidance + LiteLLM to get the structured output from Guidance and the broad model availability from LiteLLM. Unfortunately that doesn't seem to be possible yet!

I'm going to stick with LiteLLM, and probably just use Guidance for some test runs to see what their prompts look like, then apply them manually via LiteLLM, roughly like the sketch below 😄
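
Something like this (a rough sketch of the "apply the Guidance-style prompt manually" idea; the schema, prompt wording, and function name are made up for illustration, not what ships in Vigil):

```python
# Spell out the JSON shape a Guidance template would enforce directly in the
# prompt, send it through LiteLLM, and parse/validate whatever comes back.
import json

import litellm

RELEVANCE_PROMPT = """You are evaluating whether a submitted prompt is relevant
to the application's stated purpose.

Respond ONLY with JSON in this exact format:
{{"relevant": true or false, "reason": "<one sentence>"}}

Submitted prompt:
{prompt}
"""


def relevance_check(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Ask a user-configured model for a structured relevance verdict."""
    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": RELEVANCE_PROMPT.format(prompt=prompt)}],
    )
    raw = response.choices[0].message.content
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Fall back to returning the raw text if the model ignores the format
        return {"relevant": None, "reason": raw}
```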

krrishdholakia commented 1 year ago

@deadbits we're trying to improve LiteLLM. Can you chat for ~10 minutes this week?

Want to understand how you found the integration experience, and what we could improve on.

Also DM'ed you on LinkedIn, if that helps.