tonytan48 / KD-DocRE
Implementation of Document-level Relation Extraction with Knowledge Distillation and Adaptive Focal Loss
110 stars · 20 forks
Issues
#25 Is anyone else currently running the code who would like to compare notes? · QiyiJiang, opened 6 months ago, 0 comments
#24 Why can't I find `axial_attention` for the line `from axial_attention import AxialAttention`? The code is incomplete without it. · Deargoodnight, opened 1 year ago, 1 comment
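A likely fix for #24 (an assumption; the repo does not list its dependencies here): `axial_attention` appears to be the third-party PyPI package of that name (lucidrains/axial-attention), installable with `pip install axial_attention`. A minimal sketch of the import in question, with illustrative parameters rather than the ones KD-DocRE actually uses:

```python
# Assumes the lucidrains `axial_attention` package: pip install axial_attention
import torch
from axial_attention import AxialAttention

# Illustrative shapes only, not KD-DocRE's real configuration.
attn = AxialAttention(
    dim=64,            # embedding dimension
    dim_index=-1,      # which axis of the input holds the embedding
    heads=8,           # number of attention heads
    num_dimensions=2,  # number of axial dimensions to attend over
)

x = torch.randn(1, 32, 32, 64)  # (batch, height, width, dim)
out = attn(x)                   # same shape as the input
```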
#23 Frequent/long-tail relations · JZBZ2020, opened 1 year ago, 3 comments
#22 Could you share the requirements.txt of your training environment? · dragondog129, closed 1 year ago, 2 comments
#21 GPU resources · Shike-Cheng, closed 1 year ago, 0 comments
#20 batch_roberta.sh · YjwHello, closed 1 year ago, 2 comments
#19 Terminated · XingYu131, opened 1 year ago, 2 comments
#18 Failed to train the model · LeeReeny, opened 1 year ago, 2 comments
#17 My reproduction results (with distant supervision) are lower than the results in the paper · Jize-W, closed 1 year ago, 1 comment
#16 Question about the focal loss · RedVelvetCake21, closed 1 year ago, 2 comments
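Background for #16 and #4 below: the paper's Adaptive Focal Loss builds on the standard focal loss of Lin et al., which down-weights well-classified examples. A minimal PyTorch sketch of the standard (non-adaptive) form only; the paper's adaptive variant differs and is not reproduced here:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Standard focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t).

    The (1 - p_t)^gamma factor shrinks the loss of examples the model
    already classifies confidently, focusing training on hard examples.
    """
    log_probs = F.log_softmax(logits, dim=-1)                         # log p for every class
    log_pt = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # log p of the true class
    pt = log_pt.exp()
    return ((1.0 - pt) ** gamma * -log_pt).mean()

# Example: 4 examples, 5 classes
loss = focal_loss(torch.randn(4, 5), torch.tensor([0, 2, 1, 4]))
```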
#15 Sample code to run inference on any text · AlexisNvn, opened 2 years ago, 1 comment
#14 After downloading the file, the content is empty · WilliamAntoniocrayon, closed 2 years ago, 2 comments
#13 Confused about not reproducing the results · msa30, closed 2 years ago, 13 comments
#12 The program always gets killed · WilliamAntoniocrayon, closed 2 years ago, 3 comments
#11 I ran into "ZeroDivisionError: float division by zero" · WilliamAntoniocrayon, closed 2 years ago, 2 comments
#10 Confusion when running bash inference_logits_roberta · cuberJ, closed 2 years ago, 1 comment
#9 I'm confused about KD-DocRE/scripts/ · WilliamAntoniocrayon, closed 2 years ago, 4 comments
#8 An error occurred while performing step 2 · WilliamAntoniocrayon, closed 2 years ago, 2 comments
#7 knowledge_distill script not found · WatsonWangZh, closed 2 years ago, 1 comment
#6 ModuleNotFoundError: No module named 'multihead_attention' · moh-yani, closed 2 years ago, 2 comments
#5 ModuleNotFoundError: No module named 'utils' · moh-yani, closed 2 years ago, 2 comments
#4 Adaptive Focal Loss · Hou-jing, closed 2 years ago, 1 comment
#3 I would like to get the code for this paper · WilliamAntoniocrayon, closed 2 years ago, 1 comment
#2 Why not compare with SSAN-KD-Rb-l models that use a knowledge distillation strategy? · WatsonWangZh, closed 2 years ago, 1 comment
#1 When will you upload the code? · 1120161807, closed 2 years ago, 1 comment