protectai / rebuff

LLM Prompt Injection Detector
https://playground.rebuff.ai
Apache License 2.0
1.06k stars · 73 forks

Explicitly set limits for prompt length #31

Open QUANGLEA opened 1 year ago

QUANGLEA commented 1 year ago

Hi, I'm getting the error `502 Server Error: Bad Gateway for url: https://alpha.rebuff.ai/api/detect` when I use `detect_injection()` with long user inputs. A couple of sentences works fine, but a longer DAN prompt doesn't. I'd appreciate any help with this, thanks!

woop commented 1 year ago

Hey @QUANGLEA, thanks for raising!

cis-muzahid commented 1 year ago

I'm also getting the same issue. Is there a way to calculate the token or string size and pass the prompt in chunks, using something like tiktoken? Thanks.

woop commented 1 year ago

@cis-muzahid what would the ideal API look like for you if we were to break up the prompt into chunks? That seems like something the caller should do, not the PI library. What do you think?
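For illustration, caller-side chunking could look something like the sketch below. This is just a sketch: `chunk_prompt` is a hypothetical helper, and `MAX_CHARS` is a placeholder since the endpoint's actual limit is exactly what this issue asks Rebuff to document.

```python
# Sketch of caller-side chunking before sending pieces to detect_injection().
# MAX_CHARS is a made-up placeholder, not Rebuff's actual limit.
MAX_CHARS = 2000

def chunk_prompt(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Split text into pieces no longer than max_chars, preferring
    word boundaries; a single over-long word is hard-split."""
    chunks, current = [], ""
    for word in text.split(" "):
        candidate = word if not current else current + " " + word
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # hard-split a word that alone exceeds the limit
            while len(word) > max_chars:
                chunks.append(word[:max_chars])
                word = word[max_chars:]
            current = word
    if current:
        chunks.append(current)
    return chunks
```

One caveat with this approach: naive chunking can split an injection payload across a chunk boundary so that no single chunk triggers detection, which is an argument for overlapping chunks (or for the library handling this itself).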

seanpmorgan commented 11 months ago

Hi @QUANGLEA and @cis-muzahid, thanks for reporting! I've edited the issue title so that it tracks Rebuff explicitly setting (and documenting) the maximum prompt length for this endpoint. Once that's done, we can discuss whether to support more optimal handling or leave that to the caller.

Do you have a reproducible example we can use to debug?