TonyNemo / UBAR-MultiWOZ
AAAI 2021: "UBAR: Towards Fully End-to-End Task-Oriented Dialog System with GPT-2"
96 stars · 25 forks
Issues
#12 Unable to generate non-delexicalized response for DST module · dk008652 · opened 4 months ago · 0 comments
#11 validation of train_dst.py file · dk008652 · closed 1 year ago · 0 comments
#10 DST code · aritraraut · opened 2 years ago · 0 comments
#9 Where I can download the original DistilGPT2 parameter ? · FlyingCat-fa · closed 2 years ago · 1 comment
#8 Multiwoz evaluation · unbiarirang · opened 3 years ago · 1 comment
#7 The context used in evaluation · 311dada · opened 3 years ago · 3 comments
#6 Is the model provided by the author the best model? · newcolour1994 · opened 3 years ago · 0 comments
#5 Parameters for the best tuned model "experiments/all_0729_sd11_lr0.0001_bs2_ga16/epoch43_trloss0.56_gpt2" · KristenZHANG · opened 3 years ago · 0 comments
#4 How to reproduce the results on multiwoz2.0 reported in your paper using the provided checkpoint? · lizekang · opened 3 years ago · 6 comments
#3 Question about the end-to-end evaluation · jimmy-red · opened 3 years ago · 5 comments
#2 checkpoint · libing125 · opened 3 years ago · 1 comment
#1 Evaluating on DST · daisyworker · closed 3 years ago · 2 comments