Open Theodotus1243 opened 1 year ago
In Transformers this score is called perplexity (PPL). Here is the guide on how to calculate PPL from the output of `AutoModelForCausalLM`.
Am I right that I can do this on my own by initializing `BeamSearchDecoderCTC` with a custom `AbstractLanguageModel` instead of using `build_ctcdecoder`?
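For context, the quantity the guide computes can be sketched in a few lines. This is a toy illustration of the formula only, assuming the per-token log-probabilities have already been gathered from the model's shifted logits (it does not call `AutoModelForCausalLM` itself):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability.

    token_logprobs: natural-log probabilities the LM assigned to each
    token of the sequence (e.g. gathered from the shifted logits of a
    causal LM forward pass).
    """
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to every token has perplexity 4
print(perplexity([math.log(0.25)] * 10))
```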
Hey @Theodotus1243! Here's a guide on how you can integrate a pyctcdecode
n-gram beam scorer with a Wav2Vec2CTC model from 🤗 Transformers: https://huggingface.co/blog/wav2vec2-with-ngram
Hi, I want to say that you've implemented a great package.
Its modular structure suggests that, besides `LanguageModel`, it could support many other scorers.
For example, adding support for `AutoModelForCausalLM` would extend it to the many models available on Hugging Face,
from GPT-2 and OPT to BERT and XGLM.
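For what it's worth, a custom scorer along the lines asked about above might look like the sketch below. The method names (`get_start_state`, `score`, `score_partial_token`, `order`) follow pyctcdecode's `AbstractLanguageModel` interface; the `CausalLMScorer` name and the `score_fn` callback are hypothetical, and a toy scoring function stands in for a real `AutoModelForCausalLM` forward pass so the sketch has no heavy dependencies (in a real integration you would subclass `pyctcdecode.language_model.AbstractLanguageModel` and pass the instance to `BeamSearchDecoderCTC`):

```python
class CausalLMScorer:
    """Skeleton of a language-model scorer for beam-search CTC decoding.

    Exposes the methods pyctcdecode calls on its language model; here
    the LM state is simply the tuple of words decoded so far.
    """

    def __init__(self, score_fn):
        # score_fn(context: tuple[str, ...], word: str) -> log-probability.
        # In practice this would wrap a causal-LM forward pass
        # (hypothetical adapter, not part of pyctcdecode).
        self._score_fn = score_fn

    @property
    def order(self):
        # n-gram order hint; a causal LM conditions on the full context,
        # so this is nominal
        return 2

    def get_start_state(self):
        # empty context at the start of a beam
        return ()

    def score_partial_token(self, partial_token):
        # Called for words still being built up during decoding;
        # return a mild penalty instead of querying the LM
        return -1.0

    def score(self, prev_state, word, is_last_word=False):
        # Score the next word given the context and return the new state
        log_prob = self._score_fn(prev_state, word)
        return log_prob, prev_state + (word,)


# Toy usage: a uniform scoring function in place of a real model
scorer = CausalLMScorer(lambda ctx, word: -2.0)
state = scorer.get_start_state()
lp, state = scorer.score(state, "hello")
lp, state = scorer.score(state, "world", is_last_word=True)
print(state)  # context accumulated word by word
```

The design mirrors how pyctcdecode threads an opaque LM state through the beams: each `score` call returns both the word's log-probability and the successor state, so an expensive model can cache whatever it needs (e.g. past key/values) inside that state object.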