NJUNLP / ReNeLLM
The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily".
MIT License · 72 stars · 11 forks
Issues
#5 · Generating the final jailbreak prompt without OpenAI and Anthropic API keys · NCU-MC · opened 2 days ago · 1 comment
#4 · Wow, this is really well written (translated from Chinese) · lyb3b · closed 1 week ago · 1 comment
#3 · Support for Ollama · tanmaymittal · closed 4 months ago · 1 comment
#2 · Detailed usage instructions · Isaac-theori · closed 7 months ago · 2 comments
#1 · Details about the attention visualization · XitaoLi · closed 4 months ago · 10 comments