mingkaid / rl-prompt
Accompanying repo for the RLPrompt paper
MIT License · 284 stars · 52 forks
Issues (newest first)
#45 · Repeating tokens in optimized prompt · AMJasser · opened 3 weeks ago · 0 comments
#44 · Add __init__.py to make rlprompt importable · AMJasser · opened 1 month ago · 0 comments
#43 · Can we use this as a layer between user and RAG model? · deepakdev1507 · opened 5 months ago · 1 comment
#42 · seems have a bug in evaluate function · A11en0 · opened 7 months ago · 1 comment
#41 · Questions on the Gradients of LLM · Schwartz-Zha · opened 7 months ago · 0 comments
#40 · Baselines results · Davido111200 · opened 8 months ago · 0 comments
#39 · Why does this method need so much steps? · A11en0 · closed 8 months ago · 2 comments
#38 · Reproducibility and randomness · YasamanJafari · closed 8 months ago · 1 comment
#37 · BrokenPipeError: [Errno 32] Broken pipe · Xinhui-Zhu · closed 8 months ago · 2 comments
#36 · question: Is the model used in the policy network always distilgpt? · pascalhuszar · closed 8 months ago · 3 comments
#35 · About the prepended special character \u0120. · guozix · closed 10 months ago · 3 comments
#34 · About the RL training · FayeXXX · closed 10 months ago · 3 comments
#33 · Train using vertexai · yguezpa · closed 10 months ago · 1 comment
#32 · Network is unreachable · rabi-fei · closed 10 months ago · 1 comment
#31 · question · 18712234451 · closed 11 months ago · 2 comments
#30 · CUDA Out of Memory training on shakespeare dataset · DwyaneLQY · closed 12 months ago · 2 comments
#29 · A question about ppl score · FayeXXX · closed 1 year ago · 7 comments
#28 · Clarification on the RL problem · hv68 · closed 12 months ago · 1 comment
#27 · classification with gpt & training time · MatthewCYM · closed 1 year ago · 2 comments
#26 · classifcation with gpt · MatthewCYM · closed 1 year ago · 2 comments
#25 · RL-prompt MLP loss · hv68 · closed 1 year ago · 1 comment
#24 · Scope of this project · YujingYang666777 · closed 1 year ago · 2 comments
#23 · An error about Hydra when running examples/few-shot-classification · JiaxiLi001 · closed 1 year ago · 7 comments
#22 · A question about prompt initialization · beeevita · closed 1 year ago · 3 comments
#21 · ImportError · beeevita · closed 1 year ago · 2 comments
#20 · Add outputs · mingkaid · closed 1 year ago · 0 comments
#19 · Add TST outputs · mingkaid · closed 1 year ago · 0 comments
#18 · output data of your experiment · li-jing-wen · closed 1 year ago · 4 comments
#17 · question about using greedy search during inference · lihenglin · closed 1 year ago · 1 comment
#16 · Fix errors with irregular batch sizes during inferences · mingkaid · closed 1 year ago · 0 comments
#15 · assert len(prompt_strs) == len(source_strs) fails during inference · mahdiabdollahpour · closed 1 year ago · 4 comments
#14 · CVE-2007-4559 Patch · TrellixVulnTeam · closed 1 year ago · 0 comments
#13 · Transferring Prompts across LMs · 52ie · closed 1 year ago · 3 comments
#12 · A question about how to judge the performance of the prompts after running "run_fsc.py" file? · jasonyin718 · closed 1 year ago · 3 comments
#11 · Cls update · MM-IR · closed 1 year ago · 0 comments
#10 · some Doubts about a symbol · oujieww · closed 1 year ago · 2 comments
#9 · Could RLPROMPT be applied to zero-shot settings? · shadowkiller33 · closed 1 year ago · 5 comments
#8 · RuntimeError · Ericmututu · closed 1 year ago · 2 comments
#7 · Revamp code structure · mingkaid · closed 1 year ago · 0 comments
#6 · Lack of evaluation in `modular-interface` branch · Dinxin · closed 1 year ago · 4 comments
#5 · Could not find the exact place where the training set is input? · Dinxin · closed 1 year ago · 4 comments
#4 · The data in 'data/prompt-gpt2-vocab' folder seems meaningless? · Dinxin · closed 1 year ago · 1 comment
#3 · generate_prompts.py may be only usable for `input_specific=True` option? · Dinxin · closed 1 year ago · 1 comment
#2 · Test cls · MM-IR · closed 1 year ago · 0 comments
#1 · Add clean research code · mingkaid · closed 1 year ago · 0 comments