chawins / llm-sp

Papers and resources related to the security and privacy of LLMs 🤖
https://chawins.github.io/llm-sp
Apache License 2.0
384 stars 30 forks

Kindly request the inclusion #5

Closed xirui-li closed 6 months ago

xirui-li commented 6 months ago

I'm reaching out to share a recent paper I've co-authored titled "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers". Our research focuses on jailbreaking LLMs via prompt decomposition, and I believe it aligns well with your interest in LLM safety.

You can access the paper here. Our project page and Twitter post are also available for your reference.

Thank you so much for considering my request. I'm also open to any questions or discussions this might spark – I'd love to engage in a meaningful conversation with someone of your expertise.

Best regards, Xirui Li

chawins commented 6 months ago

Thank you for pointing me to this paper! I will surely have a look. I will add it to the list now, but please let me know if you would like to provide your own summary of the paper.