kurophali closed this issue 1 month ago
I fine-tuned a model with some additional embeddings of shape (42, 768) for the first encoder
I'm not quite sure what you mean by this. Did you load an existing embedding with token count 42 into an additional embedding with count 1? In that case... just don't do that.
Yes, that's what I tried. I'm now reshaping my own files to match the token count set in the trainer UI.
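In case it helps anyone else, here is a minimal sketch of that reshaping step using safetensors; the file names and the "emb_params" tensor key are placeholder assumptions, not necessarily the keys the trainer actually writes:

```python
# Sketch only: slice an existing (42, 768) embedding down to the token count
# entered in the trainer UI. File names and the "emb_params" key are assumptions.
from safetensors.torch import load_file, save_file

TOKEN_COUNT = 1  # value entered in the trainer UI

state = load_file("my_embedding.safetensors")
emb = state["emb_params"]                      # e.g. torch.Size([42, 768])
print("original shape:", tuple(emb.shape))

# keep only the first TOKEN_COUNT vectors so the file matches the UI setting
state["emb_params"] = emb[:TOKEN_COUNT].contiguous()
print("resized shape:", tuple(state["emb_params"].shape))  # (1, 768)

save_file(state, "my_embedding_resized.safetensors")
```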
What happened?
Not sure if it's a bug or not, but I fine-tuned a model with some additional embeddings of shape (42, 768) for the first encoder. Then in the trainer's UI I specified 1 as the token count. The saved embeddings have shape (42, 768) instead of (1, 768). Is this by design?
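A quick way to see the mismatch after training; the file path and the "emb_params" key are assumptions, adjust them to whatever the trainer actually saves:

```python
# Hypothetical check of the trainer's output; path and "emb_params" key are assumptions.
from safetensors.torch import load_file

ui_token_count = 1  # token count specified in the trainer UI
saved = load_file("trained_embedding.safetensors")
shape = tuple(saved["emb_params"].shape)
print("saved shape:", shape)  # observed: (42, 768)

# what I would expect given the UI setting
assert shape == (ui_token_count, 768), f"expected ({ui_token_count}, 768), got {shape}"
```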
What did you expect would happen?
Saved embeddings should have shape (1, 768).
Relevant log output
No response
Output of pip freeze
No response