ExpressAI / omneval
Prompting Evaluation for Pretrained Language Models
1 star · 1 fork
Issues
#19 · Suggestions & New Features · opened by pfliu-nlp 2 years ago · 2 comments
#18 · Cannot run the example · closed by pfliu-nlp 2 years ago · 1 comment
#17 · Leaderboard Design · opened by pfliu-nlp 2 years ago · 0 comments
#16 · Add chunking & POS-tagging tasks · closed by vincent1rookie 2 years ago · 0 comments
#15 · Discussions · opened by pfliu-nlp 2 years ago · 0 comments
#14 · Add more evaluation perspectives in terms of models · opened by pfliu-nlp 3 years ago · 2 comments
#13 · Finish the outline of the Overleaf · opened by pfliu-nlp 3 years ago · 0 comments
#12 · Benchmark dataset · opened by pfliu-nlp 3 years ago · 1 comment
#11 · Re-implemented results · opened by pfliu-nlp 3 years ago · 1 comment
#10 · Evaluation metric for generation tasks · opened by pfliu-nlp 3 years ago · 1 comment
#9 · Prompting evaluation for QA (SQuAD)? · opened by pfliu-nlp 3 years ago · 2 comments
#8 · This should be larger, which affects final performance · opened by pfliu-nlp 3 years ago · 1 comment
#7 · Add explanation of these input arguments · opened by pfliu-nlp 3 years ago · 0 comments
#6 · Add detailed comments · opened by pfliu-nlp 3 years ago · 0 comments
#5 · Add more comments (or even examples) for each task's prompting function · opened by pfliu-nlp 3 years ago · 0 comments
#4 · Support CoNLL vs. brat format? · opened by pfliu-nlp 3 years ago · 0 comments
#3 · Not a span-based prompt? · opened by pfliu-nlp 3 years ago · 0 comments
#2 · Can we draw a diagram of our code base's framework? · opened by pfliu-nlp 3 years ago · 1 comment
#1 · Can we add other sequence labeling tasks, like part-of-speech tagging, chunking, and CWS, which would make our work stronger? · opened by pfliu-nlp 3 years ago · 0 comments