Open: tomsbergmanis opened this issue 3 years ago

Marian's command-line options include:

--bert-mask-symbol TEXT=[MASK]        Masking symbol for BERT masked-LM training
--bert-sep-symbol TEXT=[SEP]          Sentence separator symbol for BERT next sentence prediction training
--bert-class-symbol TEXT=[CLS]        Class symbol BERT classifier training
--bert-masking-fraction FLOAT=0.15    Fraction of masked out tokens during training
--bert-train-type-embeddings=true     Train bert type embeddings, set to false to use static sinusoidal embeddings
--bert-type-vocab-size INT=2          Size of BERT type vocab (sentence A and B)

Yet I cannot find any example of how, or for what, these can be used. It would be good to have a simple use-case example of how to use these options for BERT pretraining. Even a code example in a comment below would be really good!
Hi Toms, I've never used it and don't know what it actually does. Pinging @emjotde, who may know. It might be that it was an experimental feature.
@tomsbergmanis Did you ever figure out how to use these?
(I need a CPU-decoding classifier for Norwegian text, and Marian seems promising, cf. https://groups.google.com/g/marian-nmt/c/iVq-jGa3N8M, but I don't know if it's possible to use it that way.)
Nope. Still no idea.
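For anyone landing here later: the thread never confirmed whether a Marian BERT classifier can be run through marian-decoder at all. Purely as a sketch of what a CPU-only invocation would look like if it does work, using standard marian-decoder flags (--models, --vocabs, --cpu-threads); the model and file names are hypothetical:

```sh
# Unverified sketch: CPU-only decoding with a hypothetical trained
# bert-classifier checkpoint. Whether marian-decoder accepts such a
# model at all is not confirmed anywhere in this thread.
marian-decoder \
    --models classifier.npz \
    --vocabs vocab.yml \
    --cpu-threads 8 \
    < input.txt > labels.txt
```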
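Since the original request never got an answer, here is a minimal, unverified sketch of how the options listed in the issue might be combined for masked-LM pretraining. It assumes the bert model type that marian-dev registers alongside bert-classifier, and it assumes --train-sets takes a plain-text corpus; the expected data format is not documented, so treat every value below as a placeholder:

```sh
# Unverified sketch of BERT masked-LM pretraining with marian.
# --type bert is assumed to select the BERT encoder-classifier model;
# the vocabulary must contain the [MASK], [SEP], and [CLS] symbols.
marian \
    --type bert \
    --model bert.npz \
    --train-sets corpus.txt \
    --vocabs vocab.yml \
    --bert-mask-symbol '[MASK]' \
    --bert-sep-symbol '[SEP]' \
    --bert-class-symbol '[CLS]' \
    --bert-masking-fraction 0.15 \
    --bert-type-vocab-size 2
```

The [MASK], [SEP], and [CLS] values shown are just the defaults from the option list above, so those three flags could be omitted.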