iejMac / encoder-distill
Align embedding spaces of PyTorch encoders with common input types.
MIT License · 4 stars · 0 forks
Issues
#19 · Alignment might be easier with word sense disambiguated and/or averaged vectors · Thomas-MMJ · opened 2 years ago · 0 comments
#18 · Similarity Loss: Softer guidance via similarities instead of features · iejMac · closed 2 years ago · 0 comments
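Issue #18 (and the related #14 below) points at replacing direct feature regression with a softer objective: match the teacher's in-batch similarity structure rather than its raw embedding coordinates. A minimal sketch of what such a loss could look like, assuming L2-normalized features and a batch-wise cosine-similarity matrix (the function name and exact formulation are illustrative assumptions, not the repository's code):

```python
import torch
import torch.nn.functional as F

def similarity_distill_loss(student_feats, teacher_feats):
    """Match pairwise cosine-similarity matrices instead of raw features.

    This is "softer" guidance: the student only has to reproduce the
    teacher's relational structure within the batch, not its exact
    embedding coordinates.
    """
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    # In-batch similarity matrices, shape (B, B).
    sim_s = s @ s.T
    sim_t = t @ t.T
    return F.mse_loss(sim_s, sim_t)
```

Note that this loss is invariant to any rotation of the student's embedding space, so it constrains the geometry less than a direct feature MSE would.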
#17 · Research Priority Queue · iejMac · opened 2 years ago · 0 comments
#16 · Hard guidance lower in the network and soft guidance higher · iejMac · opened 2 years ago · 0 comments
#15 · Add MLP to teacher instead of student · iejMac · opened 2 years ago · 1 comment
#14 · Try to use similarity as a loss · rom1504 · opened 2 years ago · 3 comments
#13 · Try to do distill for more GPU hours · rom1504 · opened 2 years ago · 0 comments
#12 · Try to ensemble 2 CLIPs by targeting their sum · rom1504 · opened 2 years ago · 0 comments
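Issue #12 suggests ensembling two CLIP models by making the student target their combined embedding. One plausible reading, sketched under the assumption that each teacher's features are unit-normalized before summing and the sum is renormalized (this is a guess at the intent, not the repository's implementation):

```python
import torch
import torch.nn.functional as F

def ensemble_target(feats_a, feats_b):
    """Build a distillation target from two teacher models.

    Hypothetical reading of the issue: normalize each teacher's embedding,
    sum them, and renormalize, so a single student learns to approximate
    the ensemble's combined embedding space.
    """
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    return F.normalize(a + b, dim=-1)
```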
#11 · Normalize features before MSE Loss · iejMac · closed 2 years ago · 1 comment
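Issues #11 and #7 both concern L2-normalizing features before the MSE loss, which makes the loss sensitive only to embedding direction, the quantity cosine-similarity retrieval actually uses. A minimal sketch of that idea (the function name is illustrative, not the repository's code):

```python
import torch
import torch.nn.functional as F

def normalized_mse_loss(student_feats, teacher_feats):
    """MSE between L2-normalized features.

    Without normalization, the MSE is dominated by differences in feature
    magnitude; normalizing first makes the loss depend only on direction,
    matching how CLIP-style embeddings are compared (cosine similarity).
    """
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats, dim=-1)
    return F.mse_loss(s, t)
```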
#10 · Generalize MLPEncoder stuff · iejMac · opened 2 years ago · 0 comments
#9 · Combine separately trained encoders into CLIP model · iejMac · closed 2 years ago · 0 comments
#8 · Batch size scaling · iejMac · opened 2 years ago · 3 comments
#7 · Try normalizing before MSE · iejMac · opened 2 years ago · 0 comments
#6 · Control steps and warmup using params · iejMac · closed 2 years ago · 0 comments
#5 · Write function that combines image and text back into CLIP · iejMac · closed 2 years ago · 0 comments
#4 · Organize scripts + split training for each encoder · iejMac · closed 2 years ago · 1 comment
#3 · Clean and package up · iejMac · opened 2 years ago · 0 comments
#2 · Align internal layers of model as well as final layer · iejMac · opened 2 years ago · 0 comments
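Issue #2 (and #16's hard/soft split) suggests aligning intermediate layers in addition to the final output. A hedged sketch of a weighted multi-layer alignment loss, assuming the student and teacher expose hidden states of matching shape at each aligned depth (the helper and its signature are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def multi_layer_alignment_loss(student_hiddens, teacher_hiddens, weights=None):
    """Sum of per-layer MSE losses over matching hidden states.

    `student_hiddens` / `teacher_hiddens` are lists of tensors with equal
    shapes at each aligned depth. `weights` lets lower layers be weighted
    more heavily (harder guidance) than higher ones.
    """
    if weights is None:
        weights = [1.0] * len(student_hiddens)
    loss = 0.0
    for w, s, t in zip(weights, student_hiddens, teacher_hiddens):
        loss = loss + w * F.mse_loss(s, t)
    return loss
```

In practice the matching hidden states would be collected with forward hooks or by a model that returns all intermediate activations; when the student is narrower than the teacher, a per-layer projection would also be needed.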
#1 · Make script train encoders separately · iejMac · closed 2 years ago · 0 comments