nerdslab-club / cl_model
All models required for curious learner
GNU Affero General Public License v3.0
Use combined embeddings instead of random nn.embeddings, the tokenizer, and the vocabulary builder
#9
Closed
afmjoaa closed this 11 months ago
afmjoaa commented 1 year ago
[x] Add a proper inference method that works without batch_size and returns the response as text, not as tokens.
[x] Add a get-batch-embeddings-map function to the embeddings manager
[x] Add combined-embeddings-map functions to the embeddings manager
[x] Implement the batch builder class
[x] Implement the vocab builder (token with category map) with object equality.
[x] Implement a basic response parser (for now, only print both the token and the category_map).
[x] Use combined embeddings in the encoder and fix the tests.
[x] Use combined embeddings in the decoder and fix the tests.
[x] Use combined embeddings in the transformer and fix the tests.
[x] Use combined embeddings, batch builder, vocab builder, and response parser in training and fix the tests.
[x] Use combined embeddings, batch builder, vocab builder, and response parser in inference and fix the tests.
[x] Remove trasformer_utils, the tokenizer, the vocabulary builder, and nn.embeddings from the model architecture.
[x] Refactor the ALiBi embeddings calculation in the MHA to use the embeddings manager.
NOT IMPORTANT RIGHT NOW
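The checklist above centers on three pieces: a vocab builder whose tokens carry a category and compare by value, and an embeddings manager that produces combined (token + category) embeddings for a batch instead of looking up a single random `nn.Embedding` table. None of the code below is from this repo; `Token`, `EmbeddingsManager`, and `get_batch_embeddings_map` are illustrative names reconstructed from the task list, and the sketch is stdlib-only (real embedding tables would be learned tensors):

```python
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class Token:
    """Vocabulary entry pairing a token string with its category.

    frozen=True gives value-based __eq__ and __hash__, so two Token
    objects with the same fields compare equal (the "object equality"
    item in the checklist) and can be used as dict keys.
    """
    text: str
    category: str


class EmbeddingsManager:
    """Toy combined-embeddings manager: each token's embedding is the
    elementwise sum of a per-token vector and a per-category vector."""

    def __init__(self, vocab: list[Token], d_model: int) -> None:
        random.seed(0)  # deterministic toy initialization
        self.d_model = d_model
        self.token_table = {
            t: [random.gauss(0.0, 1.0) for _ in range(d_model)] for t in vocab
        }
        categories = sorted({t.category for t in vocab})
        self.category_table = {
            c: [random.gauss(0.0, 1.0) for _ in range(d_model)] for c in categories
        }

    def get_combined_embedding(self, token: Token) -> list[float]:
        tok_vec = self.token_table[token]
        cat_vec = self.category_table[token.category]
        return [a + b for a, b in zip(tok_vec, cat_vec)]

    def get_batch_embeddings_map(self, batch: list[list[Token]]) -> list[list[list[float]]]:
        """Combined embeddings for a batch of token sequences."""
        return [[self.get_combined_embedding(t) for t in seq] for seq in batch]


vocab = [Token("1", "integer"), Token("+", "operator"), Token("2", "integer")]
manager = EmbeddingsManager(vocab, d_model=8)
out = manager.get_batch_embeddings_map([vocab])
print(len(out), len(out[0]), len(out[0][0]))  # 1 3 8
```

In the actual model the two tables would presumably be trainable parameters, but the shape of the API (batch of category-tagged tokens in, `[batch, seq, d_model]` embeddings out) is the part the checklist describes.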
afmjoaa
commented
11 months ago
Resolved.