Princeton-SysML / Jailbreak_LLM (153 stars, 14 forks)
Issues
#14  Chat templates are not used — Rachneet, closed 2 months ago, 1 comment
#13  Potential issue in get_sentence_embedding() — terarachang, opened 5 months ago, 0 comments
#12  For Llama 2, the use_default config seems to set top-k to 50 via the Hugging Face default — did Llama 2 actually use top-k? — alongflow, opened 6 months ago, 1 comment
#11  Were system prompts used for the GCG and Generation Exploitation methods in the experimental results reported in the paper? — alongflow, opened 6 months ago, 1 comment
#10  What is the difference between top_k=1 and greedy decoding, and why experiment with them separately? — zggg1p, opened 8 months ago, 0 comments
#9   The output of mpt-30b-chat contains a large number of irrelevant characters — zggg1p, opened 9 months ago, 0 comments
#8   Missing chat template — justinphan3110cais, opened 9 months ago, 2 comments
#7   Issues while running the FlanT5-small model with the jailbreak — aneekroy, opened 11 months ago, 0 comments
#6   Fix typo in `evaluate.py` — chujiezheng, closed 11 months ago, 0 comments
#5   Another question about the default Transformers decoding config — chujiezheng, opened 12 months ago, 1 comment
#4   About input formats for different models — chujiezheng, opened 12 months ago, 3 comments
#3   (Maybe) bugs in the code — Junjie-Chu, opened 12 months ago, 1 comment
#2   Performance of the evaluator — fabrahman, closed 11 months ago, 1 comment
#1   Fix typo in README.md — eltociear, closed 1 year ago, 0 comments