g-simmons / 289G_NLP_project_FQ2020
0 stars · 1 fork

Issues (sorted newest first)
| #   | Title | Author | State | Last updated | Comments |
|-----|-------|--------|-------|--------------|----------|
| #61 | fix testing | g-simmons | closed | 3 years ago | 0 |
| #60 | added test_epoch_end to get layer-wise loss. | fangzhouli | open | 3 years ago | 0 |
| #59 | fix BERT | g-simmons | closed | 3 years ago | 0 |
| #58 | Use multiple averaging strategies in val and test to calculate metrics | g-simmons | closed | 3 years ago | 1 |
| #57 | added early stoppoing | fangzhouli | open | 3 years ago | 0 |
| #56 | add nll_loss_weight as a variable parameter | g-simmons | closed | 3 years ago | 0 |
| #55 | Refactor bert | g-simmons | closed | 3 years ago | 0 |
| #54 | Parametrize training script | g-simmons | closed | 3 years ago | 0 |
| #53 | reduce word embedding dim back to 256 | g-simmons | closed | 3 years ago | 0 |
| #52 | Bert stopping | msarmadsaeed | closed | 3 years ago | 2 |
| #51 | added a new random baseline; randomness is based on the true label's … | sajia28 | closed | 3 years ago | 0 |
| #50 | debugged token splitter | fangzhouli | closed | 3 years ago | 0 |
| #49 | test on the test split after completing training | g-simmons | closed | 3 years ago | 0 |
| #48 | Tmp | fangzhouli | closed | 3 years ago | 0 |
| #47 | updated logging | g-simmons | closed | 3 years ago | 0 |
| #46 | updated logging | g-simmons | closed | 3 years ago | 0 |
| #45 | map steps to epochs, log to wandb | g-simmons | closed | 3 years ago | 1 |
| #44 | Fixes to BERT | g-simmons | closed | 3 years ago | 0 |
| #43 | [HIGH] log precision and recall to wandb | g-simmons | closed | 3 years ago | 0 |
| #42 | [MED] Reimplement prediction masking based on previous predictions | g-simmons | open | 3 years ago | 0 |
| #41 | [LOW] Error propagation analysis - implement Guided training | g-simmons | open | 3 years ago | 0 |
| #40 | [LOW] Distribution of (actual_positives / number of candidates) per sample | g-simmons | open | 3 years ago | 0 |
| #39 | Updated logging to wandb | g-simmons | closed | 3 years ago | 0 |
| #38 | [HIGH] all-negative F1 baseline undef, what is a reasonable baseline? | g-simmons | open | 3 years ago | 0 |
| #37 | [HIGH] Fix validation naive accuracy | g-simmons | closed | 3 years ago | 0 |
| #36 | [LOW] Improvements to model checkpointing | g-simmons | open | 3 years ago | 0 |
| #35 | [LOW] Stop BERT fine-tuning after a couple epochs | g-simmons | closed | 3 years ago | 1 |
| #34 | [HIGHEST] Check/fix BERT encodings so they align with entity_spans | g-simmons | closed | 3 years ago | 1 |
| #33 | [HIGH] Evaluate performance metrics on heldout set after training is complete and store to wandb | g-simmons | closed | 3 years ago | 1 |
| #32 | Evaluate same metrics as INN paper at validation step | g-simmons | closed | 3 years ago | 0 |
| #31 | Save model files to wandb | g-simmons | closed | 3 years ago | 0 |
| #30 | efficient training for DAG-LSTM | g-simmons | closed | 3 years ago | 0 |
| #29 | BERT incorporated | g-simmons | closed | 3 years ago | 0 |
| #28 | weights & biases logging | g-simmons | closed | 3 years ago | 1 |
| #27 | Fang gpu refined | g-simmons | closed | 3 years ago | 0 |
| #26 | log to wandb | g-simmons | closed | 3 years ago | 0 |
| #25 | Sammy gpu tensorboard | sajia28 | closed | 3 years ago | 0 |
| #24 | Parallelize data processing, update training script | g-simmons | closed | 3 years ago | 0 |
| #23 | Revert "Revert "updated data format, model training"" | sajia28 | closed | 3 years ago | 0 |
| #22 | Revert "updated data format, model training" | sajia28 | closed | 3 years ago | 0 |
| #21 | Sammy batch learning | sajia28 | closed | 3 years ago | 0 |
| #20 | Run Code and Check Accuracy/Error | sajia28 | closed | 3 years ago | 0 |
| #19 | Parallelize Code | sajia28 | closed | 3 years ago | 0 |
| #18 | Move Code to Google Cloud | sajia28 | closed | 3 years ago | 0 |
| #17 | DAG-LSTM cell state exploding in a single forward pass, clipping may be limiting gradient flow? | g-simmons | closed | 3 years ago | 1 |
| #16 | updated data format, model training | g-simmons | closed | 3 years ago | 0 |
| #15 | Create a Plot of the Loss Over Time | sajia28 | closed | 3 years ago | 0 |
| #14 | [DRAFT] model training | g-simmons | closed | 3 years ago | 0 |
| #13 | Perform a Code Review | sajia28 | closed | 3 years ago | 0 |
| #12 | Put the Training Examples into Batches | sajia28 | closed | 3 years ago | 0 |