peleiden/daluke
A Danish-speaking language model with entity-aware self-attention
MIT License, 9 stars, 0 forks
Issues
#84 Update plots (asgerius, closed 3 years ago, 4 comments)
#83 NER return best (on dev) (sorenmulli, closed 3 years ago, 1 comment)
#82 Add switch to bypass initialization using daBERT (sorenmulli, closed 3 years ago, 1 comment)
#81 Entity vocabulary contains "Kategori" (sorenmulli, closed 3 years ago, 1 comment)
#80 Plot parameter dist. (sorenmulli, closed 3 years ago, 0 comments)
#79 Add masking params to hyperparams (sorenmulli, closed 3 years ago, 0 comments)
#78 Representation geometry over seq. length (sorenmulli, closed 3 years ago, 0 comments)
#77 Investigate correctness of data, masking, and accuracy in pretraining (asgerius, closed 3 years ago, 1 comment)
#76 Representation geometry wo. entities (sorenmulli, closed 3 years ago, 1 comment)
#75 Ablation on daBERT initialization (asgerius, closed 3 years ago, 0 comments)
#74 Ablation on finetuning concatenation (sorenmulli, closed 3 years ago, 0 comments)
#73 Add better file not found to load_from_archive (sorenmulli, closed 3 years ago, 0 comments)
#72 Plot PCA, t-SNE, and UMAP with positive classes (sorenmulli, closed 3 years ago, 0 comments)
#71 save-every should actually save every single epoch, but delete afterwards (sorenmulli, closed 3 years ago, 0 comments)
#70 Consider our own additions to daLUKE - what about relational tokens? (sorenmulli, closed 3 years ago, 0 comments)
#69 Cross-validate NER (sorenmulli, closed 3 years ago, 0 comments)
#68 Perform multiple NER trainings with multiple seeds to see variance in RNG (sorenmulli, closed 3 years ago, 0 comments)
#67 Make NER deterministic (sorenmulli, closed 3 years ago, 0 comments)
#66 Fine-tune a number of models saved during the training to see acc. during pretraining (sorenmulli, closed 3 years ago, 0 comments)
#65 Clean up NER data-API (sorenmulli, closed 3 years ago, 0 comments)
#64 Finalize PCA/t-SNE/UMAP (sorenmulli, closed 3 years ago, 1 comment)
#63 Decide what experiments and what analysis of pretraining model should be finished for report (sorenmulli, closed 3 years ago, 7 comments)
#62 NER: Show running F1 on train set (sorenmulli, closed 3 years ago, 0 comments)
#61 Make entity augmentation experiment (sorenmulli, closed 3 years ago, 0 comments)
#60 Make plots of performance improvement using multi-GPU for our daLUKE code section (sorenmulli, closed 3 years ago, 3 comments)
#59 Create goals and get overview for our deployment plans (sorenmulli, closed 3 years ago, 1 comment)
#58 Loss weighting in pretraining (asgerius, closed 3 years ago, 0 comments)
#57 Consider requirements (sorenmulli, closed 3 years ago, 1 comment)
#56 Allow some hyperparameters to be overwritten when resuming (asgerius, closed 3 years ago, 7 comments)
#55 Add more datasets to finetuning of daLUKE (sorenmulli, closed 3 years ago, 0 comments)
#54 Add dacy to our NER reproduction (sorenmulli, closed 3 years ago, 0 comments)
#53 Finetuning: Big boi hyperparameter search (sorenmulli, closed 3 years ago, 10 comments)
#52 Finetuning: Weighted loss? (sorenmulli, closed 3 years ago, 1 comment)
#51 Finetuning should save jobs in subfolders (sorenmulli, closed 3 years ago, 0 comments)
#50 Load daBERT in and analyse its performance (sorenmulli, closed 3 years ago, 1 comment)
#49 Check geometry of LUKE CWRs (sorenmulli, closed 3 years ago, 0 comments)
#48 Make big K run every L sub-kai (sorenmulli, closed 3 years ago, 0 comments)
#47 Continue "big boi" (sorenmulli, closed 3 years ago, 3 comments)
#46 Make resume load hyperparams (sorenmulli, closed 3 years ago, 0 comments)
#45 Fix off-by-one in resume training (sorenmulli, closed 3 years ago, 2 comments)
#44 Debug NER (sorenmulli, closed 3 years ago, 0 comments)
#43 Profiling produces incorrect measures (asgerius, closed 3 years ago, 0 comments)
#42 Top-k precision (asgerius, closed 3 years ago, 1 comment)
#41 Gradient measure that accounts for whether only a few gradients are changing (asgerius, closed 3 years ago, 0 comments)
#40 Choose mask percentage (sorenmulli, closed 3 years ago, 1 comment)
#39 Choose learning rate during the training (sorenmulli, closed 3 years ago, 3 comments)
#38 Explore the consequence of fixing/unfixing BERT training (sorenmulli, closed 3 years ago, 0 comments)
#37 Choose weighting between word and entity loss (sorenmulli, closed 3 years ago, 1 comment)
#36 Choose batch size (sorenmulli, closed 3 years ago, 2 comments)
#35 Lower case training data as BotXO is trained on lower case (sorenmulli, closed 3 years ago, 0 comments)