-
- Choose an encoder model to process the input into an embedding
- Choose a decoder model to process the embedding into the output
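The two-step split above can be sketched with a toy, non-neural stand-in (the functions below are hypothetical placeholders, not a real model):

```python
# Toy illustration of the encoder/decoder split: the encoder maps the input
# to a fixed-size embedding, the decoder maps that embedding to an output.

def encode(text: str) -> list[float]:
    """Encoder: reduce the input to a fixed-size embedding (trivial stats here)."""
    return [len(text), sum(ord(c) for c in text) / max(len(text), 1)]

def decode(embedding: list[float]) -> str:
    """Decoder: turn the embedding into an output string."""
    length, mean_code = embedding
    return f"len={int(length)}, mean_char={mean_code:.1f}"

output = decode(encode("hello"))
```

In a real system both halves would be neural networks sharing the embedding's dimensionality, but the data flow is the same.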
-
### 🚀 The feature, motivation and pitch
Takes 1 hour+ on CI compared to others, which take
-
Currently, encoder-decoder models lack support for Grad-CAM (Gradient-weighted Class Activation Mapping) visualization with cross-attention mechanisms. Grad-CAM is a valuable tool for interpreting mo…
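For reference, the core Grad-CAM computation is small; a minimal sketch on hypothetical arrays (no model, no cross-attention wiring — just the channel-weighting step Grad-CAM performs on captured activations and gradients):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Given activations A of shape (C, H, W) and dScore/dA of the same shape,
    weight each channel by its spatially averaged gradient and ReLU the sum."""
    weights = gradients.mean(axis=(1, 2))             # (C,) channel importances
    cam = np.tensordot(weights, activations, axes=1)  # (H, W) weighted sum
    return np.maximum(cam, 0)                         # ReLU keeps positive evidence

# Hypothetical captured tensors, e.g. from forward/backward hooks:
acts = np.ones((2, 3, 3))
grads = np.stack([np.ones((3, 3)), -np.ones((3, 3))])
heatmap = grad_cam(acts, grads)
```

Extending this to cross-attention would mean capturing the decoder's cross-attention activations and their gradients instead of convolutional feature maps.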
-
#### What is your question?
README demo: ValueError: not enough values to unpack (expected 3, got 1)
#### Code
```python
from funasr import AutoModel

chunk_size = [0, 10, 5]  # [0, 10, 5] 600ms, [0, 8, 4…
```
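The `ValueError` above is Python's generic unpacking error: the call site unpacks more values than the expression yields. A minimal reproduction, independent of funasr:

```python
# The demo likely unpacks a function's return value into three names,
# but the function returned a single item.
result = ([1, 2, 3],)      # an iterable containing one item...
try:
    a, b, c = result       # ...unpacked into three names
    msg = ""
except ValueError as e:
    msg = str(e)           # "not enough values to unpack (expected 3, got 1)"
```

Checking what the funasr call actually returns (e.g. printing `len(result)` before unpacking) usually pinpoints the mismatch.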
-
I noticed that you use an encoder-decoder model (T5) rather than a decoder-only model as the source LLM, because you can "easily get each input doc's hidden_states separately".
If I use a decoder-only model, get eac…
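With a decoder-only model you can still slice out each doc's hidden states by tracking token offsets in the concatenated input, though with causal attention the later docs' states also attend to earlier docs unless you use a block-diagonal attention mask or run each doc separately. A sketch with a hypothetical `run_model` standing in for a forward pass that returns one hidden-state vector per token:

```python
def run_model(tokens):
    # Placeholder forward pass: one hidden-state vector per input token.
    return [[float(t)] for t in tokens]

docs = [[1, 2], [3, 4, 5]]                    # pre-tokenized docs
flat = [t for doc in docs for t in doc]       # concatenate into one sequence
hidden = run_model(flat)                      # (num_tokens, hidden_dim)

# Recover each doc's hidden states by offset.
per_doc, start = [], 0
for doc in docs:
    per_doc.append(hidden[start:start + len(doc)])
    start += len(doc)
```

With a real model (e.g. Hugging Face `output_hidden_states=True`), the same offset bookkeeping applies to the returned tensors.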
-
Can you enable extensions for the libraries with the option `--use_extensions`?
This way we'll be able to use model optimizations such as those at https://gist.github.com/thewh1teagle/2ad98796179dfdde4680…
-
```python
if self.is_encoder_decoder:
    input_ids = input_kwargs["decoder_input_ids"]
    attention_mask = input_kwargs["decoder_attention_mask"]
else:
    …
```
-
Hello! I'm trying to load a pre-trained model, but I get a lot of missing keys:
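"Missing keys" often means the checkpoint's parameter names don't match the model's, e.g. a leftover `module.` prefix from DataParallel training. A sketch with hypothetical key names showing how to diagnose it:

```python
# Compare the model's expected parameter names with the checkpoint's.
model_keys = {"encoder.layer.0.weight", "encoder.layer.0.bias"}
ckpt_keys = {"module.encoder.layer.0.weight", "module.encoder.layer.0.bias"}

missing = model_keys - ckpt_keys                        # everything looks "missing"
stripped = {k.removeprefix("module.") for k in ckpt_keys}
fixed_missing = model_keys - stripped                   # empty after renaming
```

If the sets differ only by a prefix, renaming the checkpoint keys before loading (or, in PyTorch, loading with `strict=False` and inspecting the returned missing/unexpected keys) resolves it.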
-
I think there is a chance I'm hitting this "Currently, adding one more variable here causes the model to incorrectly load the static variables. It is possible to hack around this. We are working on a …
-
Hi,
Can I use the APIs to convert and run my Whisper encoder and decoder models separately? If yes, how do I do this?