corca-ai / awesome-llm-security

A curation of awesome tools, documents and projects about LLM Security.

Hi! Want to add a paper, thanks #2

Closed qiuhuachuan closed 1 year ago

qiuhuachuan commented 1 year ago

Paper Title: Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models

L0Z1K commented 1 year ago

Oh! I saw this paper yesterday. Updated! Thanks for the contribution.